Introduction

These days people are plugged in around the clock – researching nearby restaurants, scrolling through social media, checking when the next bus will arrive. Constant, ready access to more information than anyone could sift through is now the expectation. Most of us have had the experience of walking into a room full of people and being greeted by eye contact with exactly no one as they all feverishly tap away at their screens.

Given our tech-heavy culture, it is difficult enough for some people to sit through a jury trial, removed from their smartphones and tablets all day. Add to that the fact that jurors are asked to refrain from researching the case throughout trial – yes, even in their free time. Absurd! But those are the rules, and knowing who will follow those rules is becoming increasingly important.

A recent survey of 494 district court judges found that in the past two years, almost 7% had caught jurors using the internet to do research during a trial (Dunn, 2014). Given smartphone technology and the sheer number of people who use the internet regularly, this may seem like a small number, but because these infractions are difficult to detect, the true extent of juror misconduct is undoubtedly more serious than this figure suggests.

These infractions can cause serious issues for the courts, from having to remove the juror from jury service to declaring a mistrial. Researchers have identified a number of legal remedies to encourage jurors to obey judicial instructions about going online during trial (Zora, 2012), but the jury is still out as to whether these reforms are successful.

So who are the jurors who simply cannot stay off the internet during trial? Haven’t they heard what happened to other curious cats? The media is filled with tales of jurors sentenced to jail time, forced to pay fines, or – in one particularly clever punishment – sentenced to more jury duty. We suspect that some jurors are more likely than others to follow judicial instructions regarding internet use. To test this idea, we developed the Juror Internet Research Scale (JIRS) to distinguish between those who are likely to follow the rules and those who will not. Our hope is that trial consultants and attorneys can use this scale to identify the jurors most likely to engage in internet research – a valuable tool in cases that have received media attention, particularly if the information jurors might dig up is especially damning or is inadmissible in court.

Methodology

Juror Internet Research Scale Development

Before the JIRS could be accepted for general use, we first had to validate the scale – that is, confirm it measures the concept we intended. Validation begins by generating more scale items than necessary, then statistically weeding them down to items that are related but not identical (and thus redundant). We generated potential scale items through a review of the available literature on jury instructions regarding internet research during trial. Initially, 27 items were identified and tested.

Study 1: Testing a Student Sample

The scale was first tested using a sample of 221 undergraduate student participants. Participants completed the 27-item JIRS as well as six additional scales used to validate our new scale – in other words, to test whether the scale measures what it intends to measure. Two scales were used to establish convergent validity, or the extent to which scales that should be related theoretically are in fact related statistically. We used a measure of self-control and a measure of perceived obligation to obey the law.[1] The idea was that the JIRS would be inversely related to both scales, as rule breaking runs counter to these behaviors and attitudes. This would provide theoretical support that the JIRS measures the construct of juror rule breaking through internet use.

Three scales were used to establish discriminant validity, or the extent to which scales that theoretically should be unrelated are actually statistically unrelated. Here, we used measures of life contentment, religious faith, and general happiness.[2] Our expectation was that the JIRS would show little or no correlation with these measures, because there is no theoretical reason why juror internet use should be related to life contentment, religiosity, or happiness.

Finally, to assess the tendency to answer questions in a socially desirable way, participants completed the Marlowe-Crowne Social Desirability (MCSD) scale (Crowne & Marlowe, 1960).

To test the JIRS, we conducted an exploratory factor analysis to identify the best items – those that were related but not identical, and those that produced consistent participant answers throughout.[3] Following factor analysis, we kept 10 of the original 27 items for the final version of the JIRS.
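To make this step concrete, here is a minimal sketch of how such an item-selection pass could look in code. It is not the authors’ analysis; the library choice (the Python factor_analyzer package), the DataFrame name `items`, and the function name `select_items` are our own assumptions, and the .70 loading cutoff comes from footnote 3.

```python
# A sketch only: one-factor exploratory factor analysis (EFA) over candidate
# items, keeping those whose loadings clear the cutoff. Assumes `items` is a
# pandas DataFrame with one column per candidate item (responses coded 1-6).
import pandas as pd
from factor_analyzer import FactorAnalyzer  # assumed library choice

def select_items(items: pd.DataFrame, cutoff: float = 0.70) -> list:
    """Fit a one-factor EFA and keep items whose loadings reach the cutoff."""
    fa = FactorAnalyzer(n_factors=1, rotation=None)
    fa.fit(items)
    loadings = pd.Series(fa.loadings_[:, 0], index=items.columns)
    return loadings[loadings.abs() >= cutoff].index.tolist()
```

Applied to the 27 candidate items, a pass like this returns the subset of items that hang together on a single factor; the authors report retaining 10.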

Next, we conducted a second factor analysis on the 10-item final version of the JIRS. The results showed that the scale as a whole measured a single construct, and that this single factor explained over 67% of the variability in answers – in other words, 67% of the variation in how participants responded to the statements can be explained by one factor (which we hypothesize is how they feel about internet research). The factor loadings were uniformly high, indicating that each of the 10 individual items on the JIRS was measuring a single construct. The internal reliability of the JIRS was excellent (α = .95), meaning that respondents answered items consistently throughout.
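For readers who want to check reliability figures like the α = .95 reported here, Cronbach’s alpha can be computed directly from a response matrix with the standard formula; the short NumPy sketch below is ours, not the authors’ code.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) response matrix."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the sum scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

Values near 1 indicate that respondents answered the items consistently; .95, as reported for the JIRS, is excellent.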

We also found, as predicted, that the JIRS was negatively correlated with obedience to authority (r = -.23) and self-control (r = -.21), p < .01. In other words, as obedience to authority and self-control decreased, one’s willingness to conduct internet research increased. In support of discriminant validity, the JIRS was not correlated with life contentment (r = -.10), general happiness (r = -.03), or religious faith (r = -.11). A small positive correlation was found with the measure of social desirability (r = .18), p < .01.
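Correlations like these are straightforward to reproduce: each validity check is a Pearson correlation between two scale totals for the same participants. The sketch below uses placeholder random data purely for illustration (the variable names `jirs_totals` and `pool_totals` are hypothetical); with real data, the article reports r = -.23, p < .01, for this pair.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
jirs_totals = rng.integers(10, 61, size=221)  # placeholder JIRS sum scores
pool_totals = rng.integers(10, 61, size=221)  # placeholder POOL sum scores

r, p = pearsonr(jirs_totals, pool_totals)     # convergent validity check
print(f"r = {r:.2f}, p = {p:.3f}")            # random data yields r near 0
```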

Overall, the results from this sample were very good. We found that our scale could be useful in predicting who would be more or less likely to do outside research during a trial. Specifically, we found support that those low in self-control and low in obedience to authority would be more likely to do internet research, while those high in self-control and high in obedience to authority would be less likely to do so. So, we moved to Study 2.

Study 2: Testing a Community Sample

To retest the student sample results, we administered the JIRS to a community sample of 237 participants recruited through Amazon’s Mechanical Turk. Other researchers have found that Mechanical Turk participants provide reliable data for academic research (Buhrmester, Kwang, & Gosling, 2011). Gathering data from both a student and a community sample adds to the generalizability of the JIRS.

We conducted an exploratory factor analysis with the community data on the same 10-item final version of the JIRS tested in the student sample. The community results mirrored the student results. Again, the items loaded on one factor, and this factor accounted for over 71% of the variability in answers. The items measured this single factor, as shown by good to strong factor loadings. Respondents answered consistently, making the internal reliability of the JIRS excellent (α = .95).

Replicating the results of the student sample, the JIRS was again significantly negatively correlated with obedience to authority (r = -.22) and self-control (r = -.23), p < .01. In support of discriminant validity, the JIRS was not significantly correlated with life contentment (r = -.08), general happiness (r = -.10), or religious faith (r = -.01). A small negative correlation was found with the measure of social desirability (r = -.23), p < .01.

Thus, the student sample results were replicated with the community sample. This increases our confidence that the scale can be used to accurately identify jurors at high risk of doing internet research.

How to Use the JIRS

The measure is scored by summing a participant’s answers on the 1-to-6 scale shown below; a short scoring sketch in code follows the scale items. However, only the text labels should be shown on the measure given to participants, as numerical values could suggest that agreeing with certain statements carries a higher value. The numbers below therefore appear in parentheses for explanatory purposes only. High scores on the measure indicate a higher likelihood of doing online research. Scores range from a lowest possible score of 10 to a highest possible score of 60.

The JIRS is below:

Listed below are a number of opinions. Read each item and decide whether you agree or disagree that this statement reflects your beliefs and to what extent.[4]

Strongly Disagree (1) – Disagree (2) – Somewhat Disagree (3) – Somewhat Agree (4) – Agree (5) – Strongly Agree (6)

1. I would look up the parties in the case online to try to find additional information about them.

2. Finding additional information about the case online would be more helpful than harmful.

3. If I don’t understand something the attorneys have presented, I would look it up online.

4. I would try to find relevant and helpful information online that may be withheld during the trial.

5. If I can’t ask questions during the trial, I would look up the information online.

6. I would use the internet to find out the forbidden information judges don’t want me to find.

7. I would do extra research online because it would help me make the best decision in the case.

8. It would be wrong to do even a quick internet search for additional information during the trial. (Reverse scored)

9. I would do additional research online if I thought it would help me better understand the case, even if the judge asked me not to.

10. I would be curious to see what I could find online about the parties in the trial.
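For clarity, here is a small sketch of the scoring rule described above; the function name and input format are our own (the authors do not publish code). It sums the ten answers on the 1-6 scale, reverse-scoring item 8.

```python
def score_jirs(responses: dict) -> int:
    """Sum-score the ten JIRS answers; item 8 is reverse scored.

    `responses` maps item number (1-10) to the chosen anchor (1-6).
    """
    if sorted(responses) != list(range(1, 11)):
        raise ValueError("Expected answers for items 1 through 10")
    total = 0
    for item, answer in responses.items():
        if not 1 <= answer <= 6:
            raise ValueError(f"Item {item}: answers must be 1-6")
        total += (7 - answer) if item == 8 else answer
    return total  # 10 (least likely to research) to 60 (most likely)

# Example: "Somewhat Agree" (4) on every item scores 36 + (7 - 4) = 39.
print(score_jirs({i: 4 for i in range(1, 11)}))
```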


Does it Work?

Results from our studies support use of the JIRS to identify jurors who are prone to break the rules when it comes to internet research. The statistical analyses found that the JIRS, as a whole, measures one construct – the likelihood of juror internet research – and that its items reliably produce consistent responses.

These results suggest that the JIRS may be useful in distinguishing jurors who will follow judicial instructions to avoid internet research from those who are more likely to break the rules. It represents an important development in combating a thorny problem for parties to a lawsuit, attorneys, and courts alike.

As with all newly developed scales, there is work to be done to increase the usefulness of the JIRS. Future research will seek to create a short version (three or four items) for use in voir dire when a very limited number of questions are allowed. Also, measuring the correlation between the JIRS and two recently published scales about smartphone and social media use[5] would provide further reason to believe that the JIRS measures heightened likelihood of conducting internet research. Future research should also continue to validate the JIRS with different populations to increase generalizability.

We hope and believe that the JIRS has a bright future. Trial consultants and attorneys can use it as a tool during voir dire, particularly in high profile cases or cases which have received extensive pretrial publicity, to identify jurors who are likely to conduct outside research during trial. If jurors have high scores on willingness to conduct outside research and other high-risk traits, there may be reason to remove that person from the jury pool. As courts, clients, attorneys, and trial consultants continue to seek ways to combat this issue, the JIRS can protect parties from exposure to negative, misleading, or incomplete information that jurors obtain online.

Please direct questions to Alexis Knutson at alexis.knutson@tsongas.com.


Alexis Knutson, M.A., is a Research Associate for Tsongas Litigation Consulting, Inc. Her work with Tsongas encompasses focus groups, community attitude surveys, mock trials, shadow juries, and post-trial juror interview research. Her professional and research interests include juror decision making in civil litigation and juror use of social media and the internet during trial. She is a member of the American Society of Trial Consultants.

Edie Greene, Ph.D., is Professor of Psychology and Director of the Graduate Subplan in Psychology and Law at the University of Colorado Colorado Springs. She is lead author of Psychology and the Legal System (2014) and Determining Damages: The Psychology of Jury Awards (2003) and is at work on a coauthored book entitled The Jury under Fire: Myth, Controversy, and Reform. She has published approximately 80 articles on legal decision making in scholarly journals, law reviews, or edited books.

Robert Durham, Ph.D., has been a professor of psychology at the University of Colorado Colorado Springs for 43 years. He has taught a variety of topics, many centering on methodological issues (e.g., program evaluation, research design, multivariate statistics, and psychometric theory). He has authored and coauthored over 90 publications and presentations.



References

Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon’s Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6, 3-5.

Crowne, D. P., & Marlowe, D. (1960). A new scale of social desirability independent of psychopathology. Journal of Consulting Psychology, 24 (4), 349-354.

Dunn, M. (2014). Jurors’ and attorneys’ use of social media during voir dire, trials, and deliberations: A report to the Judicial Conference Committee on Court Administration and Case Management. Federal Judicial Center. Retrieved from http://www.fjc.gov/public/pdf.nsf/lookup/jurors-attorneys-social-media-trial-dunn-fjc-2014.pdf/$file/jurors-attorneys-social-media-trial-dunn-fjc-2014.pdf

Lavallee, L. F., Hatch, P. M., Michalos, A. C., & McKinley, T. (2007). Development of the Contentment with Life Assessment Scale (CLAS): Using daily life experiences to verify levels of self-reported life satisfaction. Social Indicators Research, 83 (2), 201-244.

Lyubomirsky, S., & Lepper, H. (1999). A measure of subjective happiness: Preliminary reliability and construct validation. Social Indicators Research, 46, 137-155.

Plante, T. G., & Boccaccini, M. (1997). Santa Clara Strength of Religious Faith Questionnaire. Pastoral Psychology, 45, 375-387.

Przybylski, A. K., Murayama, K., DeHaan, C. R., & Gladwell, V. (2013). Motivational, emotional, and behavioral correlates of fear of missing out. Computers in Human Behavior, 29, 1841-1848.

Tangney, J. P., Baumeister, R. F., & Boone, A. L. (2004). High self-control predicts good adjustment, less pathology, better grades, and interpersonal success. Journal of Personality, 72 (2), 271-322.

Tyler, T. R. (1990). Why people obey the law. New Haven, CT: Yale University Press.

Yildirim, C., & Correia, A. (2015). Exploring the dimensions of nomophobia: Development and validation of a self-reported questionnaire. Computers in Human Behavior, 49, 130-137.

Zora, M. (2012). The real social network: How jurors’ use of social media and smart phones affects a defendant’s Sixth Amendment rights. University of Illinois Law Review, 2012, 577.

[1] The Brief Self-Control Scale (BSCS) assesses participants’ perceived ability to control their own actions (Tangney, Baumeister, & Boone, 2004); the Perceived Obligation to Obey the Law (POOL) scale assesses the extent to which participants believe they must obey the law (Tyler, 1990).

[2] The Contentment with Life Assessment Scale (CLAS) assesses participants’ contentment with their own lives (Lavallee, Hatch, Michalos, & McKinley, 2007); the Santa Clara Strength of Religious Faith Questionnaire (SCSRFQ) assesses the centrality of faith in participants’ lives (Plante & Boccaccini, 1997); the Subjective Happiness Scale (SHS) assesses participants’ happiness with their lives (Lyubomirsky & Lepper, 1999).

[3] Items with factor loadings of < .70 were dropped.

[4] The introductory paragraph to the JIRS for the student and community samples was as follows, so participants in each study knew they had been instructed not to conduct internet research: “Listed below are a number of statements that describe attitudes about instructions given by a judge to jury members during a trial. These instructions ask jurors to refrain from conducting online research about the trial and its parties. Please assume the role of juror who has been given instructions to refrain from conducting online research. There are no right or wrong answers, only opinions. Read each item and decide whether you agree or disagree that this statement reflects your beliefs and to what extent.”

[5] The Nomophobia scale measures individuals’ attachment to their mobile devices (Yildirim & Correia, 2015); the Fear of Missing Out (FOMO) scale measures individuals’ discomfort when they feel their social circle is doing something they are not (Przybylski, Murayama, DeHaan, & Gladwell, 2013).


Dr. Merrie Jo Pitera is CEO of Litigation Insights — a jury research and visual communications company. In her 25 years as a jury consultant, she has assisted attorneys and their clients with theme development, witness preparation, jury research (e.g., focus groups/mock trials), and jury selection. http://www.litigationinsights.com/

Response to the Research Article, The Juror Internet Research Scale (JIRS): Identifying the Jurors Who Won’t Stay Offline

The authors take on a problem that has surfaced in the court system in the last several years – jurors using the internet to research the cases they have been seated for, or posting/tweeting about the case. As a practicing trial consultant for the past 25 years, I have seen anecdotal evidence of just this type of behavior ever since the internet became a regular part of our routine – from a juror disgruntledly posting about his jury service, to one researching legal terms she didn’t understand, to another this past September tweeting after the judge had admonished the venire four different times during the first day of jury selection not to tweet, post, or do research. Unfortunately for this latter juror, he was led out in handcuffs and thrown into “holding” until jury selection was over. This was an embarrassing moment for the juror, but also a wake-up call for him and his fellow jurors that this judge meant business about obeying his orders. Needless to say, none of the other jurors broke that order for the rest of the trial. Finding ways to identify jurors who won’t follow the rules and will ignore authority is an important goal in preserving a trial process that should be based on evidence, not on the influence of social media posts and other research found online.

The Juror Internet Research Scale (JIRS), while in its infancy, is an interesting start at “distinguish[ing] between those who are likely to follow the rules and those who will not.” Given the realities of the courtroom, each party is trying to identify those jurors who will not be receptive to their case themes/narrative. As a result, each side encourages jurors to talk about their opinions, so that those not favorable to their side can hopefully be removed with a cause challenge. Failing to follow the judge’s instructions about internet use falls into this same category. That is, the ideal use for this scale is identifying those jurors who will break the rules regardless of what the court instructs, so that a cause challenge can be made to remove them. Right now, I believe the scale achieves half of its goal: it identifies whether a juror is inclined to engage in internet research, but it does not yet supply a series of questions strong enough to support a cause challenge – i.e., to show that regardless of the court’s instruction, the juror will engage in the behavior anyway. With that said, I offer a few improvements to make this scale more rigorous and practical for use during voir dire, along with two recommendations for its current use.

Limitation: Need More for a Cause Challenge

First, the most important difference between this scale and the actual courtroom is that in court, judges admonish jurors from the get-go – as soon as they walk into the courtroom, at every break, and at the end of each day, every day court is in session. From the court’s and each party’s perspective, we need to know whether a juror will get on the internet even after being admonished not to do so. The JIRS items ask jurors only about their internet behaviors; taken together or alone, these questions are not enough to raise a red flag that a juror could be the one who disobeys the judge’s order and potentially causes a mistrial. Therefore, I suggest the next step in making this scale more practical and useful in the courtroom is to incorporate a construct that measures jurors’ willingness to obey the law.

Now, in all fairness, the authors validated the scale by correlating it with other scales, in particular the Perceived Obligation to Obey the Law (POOL) scale.[1] Unfortunately, in the courtroom, this explanation makes for a difficult cause challenge argument. Based on my years of experience in front of dozens of judges, even if a juror answered a series of the JIRS questions in the appropriate direction, it is just not strong enough to garner a cause strike by arguing, “The literature suggests 67% of the variance is accounted for and the scale was correlated with the POOL scale; therefore, this series of questions makes someone more likely to engage in this behavior.” In the legal world, “more likely to engage” are not, by themselves, the magic words that gain a cause strike on a juror. We need more – we need to show bias. A juror needs to indicate that regardless of the judge’s instructions, he or she will engage in this internet behavior. That begins to show bias, allowing counsel to ask the appropriate follow-up questions to close out the cause strike. Therefore, I propose incorporating a second construct, obligation to obey the law, to make this survey not only more rigorous but also more practical for use in the courtroom.

Limitation: Whom the Scale Will Not Catch

Not all jurors set out to intentionally disobey the judge’s order. Some jurors do not appreciate how wide a net this admonition casts. We have seen jurors who didn’t realize that going to Wikipedia or Webster’s online dictionary to research a definition was improper. These jurors have every intention of following the court’s instructions and do not see their behavior as a true infraction. In voir dire, therefore, these jurors would be overlooked, since their answers would not indicate they would overtly violate the court’s order. And the suggested revision to the scale would probably miss these folks too.

Recommendations:

  • As the scale currently stands, Question 9 (“I would do additional research online if I thought it would help me better understand the case, even if the judge asked me not to.”) is the most practical and useful of all of the questions in the JIRS. That is, in voir dire, if a juror responded affirmatively to this question, the attorney or judge could ask the necessary follow-up questions in the cause sequence to establish enough bias to move for a cause strike of that juror.
  • I also agree with the authors that this scale would be more useful if the questions were whittled down to just a few, as adding all of the JIRS questions to a Supplemental Juror Questionnaire (SJQ) or asking this series of questions in open court is, in most cases, not feasible – primarily because of time restrictions, the length of an SJQ, and/or a judge’s preferences. Also, as previewed above, while this series of questions measures one construct – as any good scale must, to show the questions measure what they are intended to measure – I believe adding a second construct measuring jurors’ willingness to obey the law would be most beneficial to the primary end goal of this scale (i.e., identifying jurors who will disobey the law regardless of a judge’s instruction).

Conclusion

The JIRS is a good first step in identifying jurors’ internet behaviors. Above I offer next steps for this scale to evolve to make it more practical for use in the courtroom. And if achieved, this will be a helpful tool in any attorney’s voir dire toolbox to help identify jurors who might go rogue and ignore the court’s instructions.


[1] “[A]s obedience to authority … decreased, one’s willingness to conduct internet research increased.”


Mark Bennett is a Houston writer, improvisational actor, mechanic, cook, hacker, and board-certified criminal-defense trial lawyer who figures that if he isn’t pissing off most of the world, he isn’t doing his job. He blogs at http://blog.bennettandbennett.com.

Here’s what I’d rather see you work on…

First, the curious amateur’s comment on the science: The article makes no link between the JIRS and actual behavior. It appears to measure something, but whether that something is jurors’ tendency to do internet research against the judge’s instruction is untested.

Now the trial lawyer’s question: even supposing that further investigation shows that the JIRS measures jurors’ tendency to do internet research against the judge’s instruction, once we “identify those jurors who are most likely to engage in internet research” how do the authors imagine that lawyers might use this information?

That one juror is more likely than others to engage in internet research does not justify a challenge for cause. Nor may the court give special instructions to such a juror that it does not give to other jurors.

Assuming that JIRS measures a juror’s likeliness to engage in internet research, a high JIRS number might in some extraordinary case lead to a peremptory challenge. For example, where there is lots of ugly stuff online about my client and my greatest fear is that it will infect the jury, I would use JIRS as a basis for peremptory challenges. But that would be a highly unusual case: there are usually much scarier things in a trial than the possibility that the jurors will do internet research, and almost always I’d choose jurors who favor me but are likely to do unsanctioned online research over jurors who favor the other side but are unlikely to break the rules. I suspect that most lawyers interested enough to consider using the JIRS in trial would agree with me that there are better uses for peremptory challenges.

In fact, if there is—as it appears there may be—some negative correlation between internet research and obedience to laws, I might fight to keep the high-JIRS juror on the jury on the theory that JIRS is a proxy for resistance to authority. At the same time I would try to find the best way to keep all of the jurors from doing legal research—something that we should be doing anyway, and perhaps a more fruitful use of research time.

At the end of jury selection, when we’re down to our last peremptory challenge and have a choice between a low-JIRS juror and a high-JIRS juror, all else being equal I might use that number to decide; I can imagine the decision going either way—I might choose the low-JIRS juror because I want to minimize the chance that inadmissible online facts will taint deliberations, or I might choose the high-JIRS juror because she’s less authoritarian (in which case a nearer measurement of authoritarianism might have served me better).

The authors appear to have a reasonable foundation for more study, but I question the value of the JIRS. There are many measurable traits more interesting to trial lawyers than likelihood that a particular juror will do online research; the authors mention several. If you want to help trial lawyers, help us figure out which jurors are fudging their answers to look better. Help us figure out which are more likely to do the government’s bidding just because it’s the government’s. Help us find the malcontents. Or, if you are determined to study unauthorized online research by jurors, find the best way to discourage it.


Response from Author:

I appreciate the thoughtful reflections from Dr. Pitera and Mr. Bennett.

As described by Dr. Pitera, use of the JIRS in a supplemental juror questionnaire (SJQ) is ideal. This allows attorneys or consultants to evaluate this information in combination with other high-risk traits to determine whether certain jurors warrant a peremptory challenge. However, as Dr. Pitera also points out, a cause challenge could be used to remove a juror who admits that, regardless of jury instructions, he or she would do online research. In fact, in a jury selection I attended recently, jurors explicitly admitted an inability to follow the judge’s instructions to be fair and unbiased, despite the judge’s encouragement that they would surely follow the law. Among jurors identified as high-risk by the JIRS and asked individually to clarify their responses, there may be a subset who would openly admit their likelihood of doing online research.

Dr. Pitera also indicates that this scale would not catch jurors who unintentionally do outside research, and that is true. Perhaps this issue could be addressed by more thorough, varied, or frequent instructions warning jurors against outside research. The judge might also include examples of common “grey-area” research avenues (such as Wikipedia) to make clear that those are restricted as well.

In response to Mr. Bennett’s suggestion, unfortunately there is no way to empirically test who will actually ignore the judge’s instruction and do online research. Much as we might like, we can’t follow jurors to eavesdrop on their behavior and compare it with the JIRS predictions. This makes the issue especially hard to study. Although we have explored opportunities to administer the scale and questionnaires to jurors following jury service, judges have understandably been hesitant to allow their jurors to participate.

On the point that attorneys may deliberately opt to keep jurors who score high on the JIRS if those jurors would be favorable to the attorneys’ clients: we agree. Perhaps the real beneficiaries of the JIRS are the judges who rule on for-cause challenges.

Finally, Mr. Bennett suggests additional research to identify the best ways to stop all jurors from doing extrajudicial research, and finding such a solution is certainly of high priority. There is currently much discussion in the field of trial consulting aimed at identifying remedies that could be implemented to combat this issue. However, I suggest in this article that there are those who will break the judge’s rules no matter what. As we have seen in media reports of misbehaving jurors, even when steps are taken to discourage outside research, some do research anyway. Those are the individuals this scale attempts to identify.