Editor’s Note: Here’s a new technique for helping attorneys get additional pretrial feedback on their case.

Trial consultants, like the attorneys they serve, often find themselves asked to provide high-level service and aggressive cost containment at the same time. One common request from clients focused upon managing expenses is for the consultants to forgo case-specific jury research and rely upon their consulting experience with similar cases or similar venues. Interestingly, this pressure can increase as the perception of the consultant’s level of experience increases.

The problem the consultant sees, of course, is that no two cases are really the same. Wise consultants are rarely fooled into generalizing from the experience and perspective of just one person, whether their own or anyone else’s. Whenever possible, most consultants will seek to ground their advice and input to trial counsel in the wellspring of wisdom for us all: the viewpoints of surrogate jurors with regard to this case, with these precise facts, brought at this time, in this venue.

Consultants on either side of the bar learned years ago from asbestos and tobacco cases and other mass torts that similar facts and similar venues can nevertheless produce quite dissimilar results with both surrogate and actual jurors. The case tried last month that looks a lot like the case to be tried next month could end up looking surprisingly different on the verdict form. Much of the time, it is little things: a variation in the plaintiff’s injury pattern, a new expert, a closing argument with a new emphasis. Small contrasts in input can produce big contrasts in outcome. But which differences? What will matter to the fact finders? There is only one reliable way to find out. Someone has to ask. (Part of the cause of this variance, of course, is chance-induced differences in the composition of juries from one case to the next. This article puts that question aside for the moment, focusing instead upon the differences between cases as opposed to the differences between juries.)

One of the devices consultants can utilize is what has been termed the feedback group. The feedback group is the smallest-scale device available to consultants for getting input from surrogate jurors on issues in a case. It is designed to make the consultant smarter, to arm the consultant with the increased confidence that can only come from discussing the case or elements of the case with dispassionate jury-eligible citizens. What it is NOT is research that aspires to be predictive or empirically evaluative. It is not research at all, in the conventional sense. It is instead simply a source of qualitative data, stimulating the consultant’s thought process, and perhaps generating new ideas. It is information only, but it is particularly valuable information because its source is the minds and hearts of surrogate jurors.

It may be useful to think of feedback groups (or feedback group sub-varieties such as witness evaluation groups or graphics-testing groups) as process tools for consultants to use in the performance of their service to client attorneys. These are to be distinguished from evaluation tools such as multi-group focus studies, on-line surveys, or multi-jury mock trials. Process tools are more informal, easier to use, and thus less expensive than evaluation tools. The latter are usually of significantly larger scale and are executed in a fashion that adheres at least generally to the rules of experimental design.

When a client attorney asks, “Do you think they will like my expert witness?” a consultant might be willing to say what he or she thinks, but may want first to test that thinking by use of a process tool: showing and discussing a video of the witness with a small group of surrogate jurors. If a client attorney asks, “What are our chances of prevailing on liability?” the question is much harder, since it is asking for something that is most aptly provided by a research tool of the evaluative type. Process tools can’t predict anything with the reliability and validity required of serious prediction-of-opinion research. But, process tools are nevertheless highly useful in the course of developing the case. Want to understand what might come up for at least some jurors when a certain expert goes to the stand? Go talk to ten of them about it for two or three hours – before you prep that expert!

Feedback groups are best when stripped of any pretense to be more than they are. The number of subjects is too small to do statistics, so why be tempted? Questionnaires, if utilized, should strictly avoid asking for responses that lend themselves to quantification. In other words, a consultant who wants to get some initial feel for the viability of a theme for a case shouldn’t ask yes/no or ratings questions which beg to be counted. Counting can create illusions.

Questionnaire items such as the following create a risk:

“Please check ‘yes’ or ‘no’ below: Based on what you heard, do you think Acme should win this case against The Widget Corp?”

Let’s say that seven of the ten people in the group check “yes”. Those “yes” and “no” boxes are likely to get counted, and somebody, maybe even a well-meaning consultant, might be just a little bit persuaded to think that 70% of jurors will love Acme’s case. As any statistician will explain, descriptive quantification, even at this level, can be an invitation to generalization. Such generalization isn’t merely risky. It’s wrong. The likelihood is quite high that chance alone could produce a result of this sort in a group of only ten people. The information gained from the perspectives of the individual group participants is what is of value here, not the rate at which those perspectives occur.
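
For readers who want to see just how likely, a quick back-of-the-envelope check makes the point. The short sketch below is a minimal illustration only, not part of any feedback-group procedure; it assumes, purely hypothetically, a jury-eligible population that is evenly split on the Acme question and asks how often a random group of ten would still return seven or more “yes” answers by chance alone.

    from math import comb

    # Hypothetical illustration: assume the jury-eligible population is truly
    # split 50/50 on the Acme question. How often would a random group of ten
    # produce seven or more "yes" answers purely by chance?
    n = 10   # surrogate jurors in the feedback group
    p = 0.5  # assumed true proportion of "yes" in the population (hypothetical)
    k = 7    # observed count of "yes" responses

    # Binomial tail probability: P(X >= 7) for X ~ Binomial(10, 0.5)
    prob = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
    print(f"Chance of 7 or more 'yes' out of 10: {prob:.3f}")  # about 0.172

Roughly one such group in six would produce that 70% figure even though the assumed population is split right down the middle, which is precisely why those check-boxes should not be counted as if they measured anything.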

The smarter course with a small group is to use a questionnaire restricted to open-ended questions that invite surrogate jurors to think privately about the topic and then write down their thoughts. This is ultimately more informative and, from a research validity standpoint, more sound. Just a few carefully considered general questions will stimulate a great deal of thinking on the part of the surrogates. The resulting discussion benefits greatly from the surrogates’ having organized at least some of their thoughts as responses to the questionnaire. The jurors’ questionnaire responses might serve as notes for the consultant and the client attorney, but, hopefully, they will not be seen or treated as a score sheet.

Importantly, feedback groups should also not be lent unintended importance by the generation of a document titled a “Report” or a “Report of Findings”. Just as with quantification, the titling of documents can give the contents a weight they do not deserve. Familiar titles and terms such as “Report of Findings” or “Focus Group Report” or “Research Report” all can generate inappropriate expectations. Such skewed expectations can lead to selective reading and digestion of the material. The end result just might be a client who thinks he has learned more than he actually has.

A good record of a feedback group might at most be a short memo recounting interesting elements of the conversation between the consultant and a group of subjects. Short in length and styled as a memo, it is a perfect match for that which it describes. Useful and potentially stimulating information may have been generated, but the information is purely qualitative. It is “thought food”. As said earlier, it makes the consultant smarter, and it makes the consultant more able to provide quality input that is current and on-point to the client attorney.

Clients who want to give their attorneys a sufficient budget for thorough preparation of the case probably should be willing to authorize the expense of one or more feedback groups for the use of the trial consultant retained on the case. The price is “right”, and terrifically useful material can be generated in a very short time. The only significant expense over and above the consultant’s time is the recruiting cost for around ten surrogate jurors. There should be no major questionnaire development or analysis costs and, as suggested above, virtually no report-generation costs. Video-related costs can be eliminated as well. There is no real need to video-record feedback sessions, unless the consultant or the client attorney wishes to forgo note-taking during the “live” conversation and make notes later from a review of the video. In that event, informal video-recording tools should be sufficient. No expensive camera rentals, no video-recording technician needed.

Feedback groups are also less expensive than larger scale evaluative tools because they require much less active participation from the lead trial counsel or other senior attorneys in the case. Does this impact the work-product protection? Consider this: Feedback groups are of course done only at the direction of trial counsel, are based upon materials he or she has provided, and are reflective of the issues in which counsel is interested. While attorneys must always evaluate confidentiality questions for themselves when it comes to the work of consultants, it is this author’s experience that few attorneys see feedback group work as anything other than typical “yellow-pad” attorney work product produced for the attorney by another, whether it be a paralegal or a retained consulting expert. For this reason, many consultants and attorneys are comfortable with the idea of feedback group work being conducted without any attorney presence. More cautious lead attorneys and the consultants with whom they work might elect to have an associate well versed in the case oversee and assist as a resource in the feedback session. In either event, the expense is much less than it would be if the lead attorney were heavily involved, as in a multi-jury mock trial.

The feedback group, stripped of the burden of expectations that exceed its capabilities, can then be a wonderfully helpful process tool for trial consultants and those whom they serve. It should be an option to consider when cost concerns prohibit larger scale research. While it cannot ever predict the predisposition of a venire or the outcome of a trial, it can often foreshadow at least some aspects of juror thinking. For that alone, it is immensely valuable.


Allan Campo is a trial consultant based in Birmingham, AL. A long-time practitioner specializing in trial strategy and witness preparation in civil litigation, he is a former President of ASTC. For more about Allan, visit his firm website at my-ajc.com.