The polygraph, an instrument designed to identify deception, first entered the American courtroom more than 90 years ago. In Frye v. United States (1923), the D.C. Circuit Court excluded expert testimony about the findings from a polygraph. The court noted that the “systolic blood pressure deception test” (the polygraph) had “not yet gained such standing and scientific recognition among physiological and psychological authorities as would justify the courts in admitting expert testimony….”
Since then, the polygraph and its modern incarnations have continued to incite legal controversy and debate. The public, press, and fact finders are no less fascinated with the polygraph now than they were at the beginning of the twentieth century (Keeler, 1930; Myers, Latter, & Abdollahi-Arena, 2006). Overwhelmingly, courts have banned the results of polygraph testing in criminal proceedings (United States v. Scheffer, 1998). The reasoning has largely centered on lack of general acceptance in the scientific community and concerns about the prejudicial impact of the findings on the jury (Myers et al., 2006). Nevertheless, the polygraph continues to be widely used by law enforcement, in employment screenings, and for specific types of forensic assessments, such as sexual offender evaluations (Grubin, 2010). Accordingly, litigators, corporate counsel, and trial consultants need a current understanding of the scientific underpinnings of the polygraph, the improvements to the instrument throughout the decades, and the ongoing controversies regarding the interpretation of results.
At its core, the polygraph is a measure of a person’s arousal or “fight-or-flight” response. This response is regulated by the sympathetic nervous system (SNS) and is activated during periods of perceived stress. The SNS produces changes in pupil diameter (dilation), increased heart rate and sweating, and constriction of the blood vessels, among other functions. A person engaged in deception is presumed to experience sympathetic arousal due to general anxiety, fear of detection, and worries about the consequences of getting caught. Accordingly, the polygraph was designed to measure changes in the SNS, specifically in systolic blood pressure. This activation was believed to progress along a well-documented curve corresponding to specific periods of the examination and could be distinguished from fear of the examination itself.
Many of the refinements to the polygraph technique have come in the systematizing of signals used to capture the sympathetic response. To boost the ratio of signal to noise (i.e., error) and avoid the problem of having poor tracking of any one mode of detection, early methods came to rely on signals in multiple systems. In addition to the circulatory variables (e.g., heart rate and blood pressure), signals from the respiratory system (e.g., breathing rate, depth, and regularity) and skin (e.g., body temperature and sweating) were integrated to create a multimodal method of detecting deception.
Arguably, the most significant improvements to the polygraph were not in technological advances, but in the standardization of the interview process. David Lykken, Ph.D., a psychologist and former professor of psychiatry at the University of Minnesota, introduced a method of examination known as the Guilty Knowledge Test (GKT; Lykken, 1960). The GKT relies on the assumption that the examinee either has or does not have knowledge of an event that only the guilty party would have. The examiner must also know this “guilty knowledge.” In practice, this knowledge often comprises trivial details of a crime or crime scene that were not disclosed publicly, but which would be known to someone present (e.g., the color of a dress or brand of cigarettes used by the victim). In the setting of a polygraph test, elevations in sympathetic nervous system tone in response to items of which the examinee should have no direct knowledge are seen as supporting the possibility that the examinee may be lying. However, since changes in the sympathetic tone during the examination could still be due to general anxiety, the polygraph examination assesses responses to the guilty knowledge in comparison to other control questions.
A typical polygraph examination starts with a pre-test interview to gather preliminary information, which will later be used to construct “Control Questions” (CQ). The tester then explains how the polygraph is supposed to work, emphasizing that it can detect lies and that it is important to answer truthfully. A “stimulation test” often follows: the subject is asked to deliberately lie, and the tester reports that he was able to detect the lie. Then the actual test begins. Some of the questions asked are “irrelevant” questions, or IR (“Is your name John Smith?”), others are “probable-lie” CQs that most people will lie about (“Have you ever stolen money?”), and the remainder are “Relevant Questions,” or RQ (“Did you steal government secrets?”). Question types are alternated throughout the test. The test is “passed” (the subject is termed non-deceptive) if the physiological responses during the probable-lie control questions (CQ) are larger than those during the relevant questions (RQ). While this modern version of the polygraph examination still leaves numerous details to the discretion of the examiner, the incorporation of a standard paradigm in the Guilty Knowledge Test (GKT), assessment using a standardized index of sympathetic tone, and a method to calibrate the magnitude of the “relevant” response allow for systematic evaluation of the approach.
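The pass/fail logic described above can be caricatured in a few lines. This is a deliberately simplified sketch, not an actual field scoring algorithm (real chart evaluation relies on trained examiners and numerical rating scales); the function name and the arbitrary-unit arousal scores are invented for illustration:

```python
def classify(responses):
    """Schematic CQ-vs-RQ comparison: `responses` maps question type
    ('CQ' or 'RQ') to lists of arbitrary-unit arousal scores recorded
    across repeated question sequences."""
    mean = lambda xs: sum(xs) / len(xs)
    cq, rq = mean(responses['CQ']), mean(responses['RQ'])
    if cq > rq:
        return 'non-deceptive'   # stronger reaction to the probable lies
    if rq > cq:
        return 'deceptive'       # stronger reaction to the relevant issue
    return 'inconclusive'

# A subject who reacts more to the probable-lie controls "passes":
print(classify({'CQ': [7.1, 6.8, 7.4], 'RQ': [4.9, 5.2, 5.0]}))
```

The point of the sketch is only that the examination is relative: nothing is measured against an absolute "lying" threshold, only against the subject's own reactions to the control questions.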
Is the Polygraph Valid?
Research on the validity of the polygraph has yielded widely divergent rates of accuracy in detecting deception, some as low as chance and others as high as 95% (Grubin, 2010). This variability can be attributed to the lack of a standardized research protocol, inconsistency in defining terms, the use of different instruments, testing errors, and diverse populations. It may also be attributed to the unreliability and invalidity of the polygraph as a whole.
One persistent critique of the instrument is that, because each examination requires some individually tailored questions, it effectively negates the scientific practices of standardization and replication. In response, some researchers have begun to show the efficacy of the polygraph even when standard questions are employed. Offe and Offe (2007) attempted to show that standardization did not diminish the accuracy of polygraph findings. They enrolled volunteers in a mock crime study and permitted them to decide whether they wanted to participate as guilty or innocent subjects. They used the standard Control Question Test (CQT) interview method described above, in which relevant questions (RQ) were compared with control questions (CQ). As previously noted, the basic assumption of this method is that guilty subjects will have a greater physiological response to the RQ than the CQ, and that innocent subjects will have a higher response to the CQ than the RQ. In a typical pretest interview, the CQ are explained as equally significant for the test result in order to shift the focus of concern onto the CQ for innocent subjects, as the broad and vague phrasing of these questions makes them difficult to deny. Among other conclusions, this study demonstrated that explanation of the CQ in the pretest interview resulted in a higher identification rate for both guilty and innocent participants, with an overall correct classification rate of 93.3%. Furthermore, the results indicated that careful calibration of the CQ to an individual subject was not relevant for correct classification, eliminating the need to tailor the CQ in an unstandardized manner.
In 2008, Horvath and Palmatier conducted a study that further refined the optimal form for control questions. In that study, participants in a mock theft scenario were randomly assigned to guilt or innocence and were given either exclusive or “time bar” control questions (“Before you were 21, did you ever…?”) or non-exclusive or “no time bar” control questions (“Did you ever…?”). Non-exclusive control questions were significantly more effective than exclusive questions for both guilty and innocent subjects, with accuracy rates of 85% and 91%, respectively (Horvath & Palmatier, 2008).
As with any research study, the two described above have notable limitations. First, participants should ideally be randomly assigned to groups so as to minimize any self-selection effects. Even when they are, as in the Horvath and Palmatier (2008) study, individuals who enroll in research studies (most commonly college students) often differ notably from those involved in the criminal justice system. Individuals in the criminal justice system have disproportionately high rates of mental illness, substance abuse, and developmental disabilities, all factors that may affect physiological response patterns. Second, the study participants were engaged in mock crime scenarios, so the generalizability of these findings, that is, their application to other settings, is likely limited. This is a significant issue that restricts much of social science research from being easily translated to real-life settings, and it may be especially problematic in deception research. Specifically, anxiety in respondents who are feigning guilt may be notably different from that of those engaged in real-life lying. Furthermore, research generally tests only one type of lying: denial of the truth. In reality, individuals being questioned may obfuscate their answers, exaggerate or minimize responses, describe events that occurred prior to the time in question, report on hopes or dreams, or simply misremember events. To date, research has not examined the polygraph's ability to differentiate between different types of lies and the truth.
Finally, the findings from this research may be skewed due to an inflated base rate (Rosenfeld, Sands, & Van Gorp, 2000). A base rate is the proportion of individuals in the population at large who have a particular condition; in this context, the base rate represents people who are going to lie. Naturally, we can never know the precise base rate of lying, but it is likely to be well below 50%. One relevant estimate might be found in the research on malingering, or the feigning of symptoms for secondary gain. Estimates of malingering in criminal settings are generally around 15%, and in civil settings approximately 30% (Mittenberg, Patton, Canyock, & Condit, 2002). In contrast, polygraph studies often have a base rate of 50%, with half the participants instructed to deceive. Therefore, even if a polygraph is 90% accurate in a research sample, a much smaller percentage of deceivers will be accurately identified in real-life settings. As the base rate of the phenomenon (i.e., the likelihood of lying) decreases, it becomes increasingly difficult to detect the condition accurately. Hence, the accuracy rates reported in studies are likely to be, at best, somewhat misleading.
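The base-rate argument above can be made concrete with Bayes' rule. The sketch below uses illustrative numbers (a test assumed to be "90% accurate" in both directions) to show how the probability that a "deceptive" result actually reflects deception, the positive predictive value, falls as the base rate of lying drops from the 50% typical of laboratory studies toward field estimates:

```python
def positive_predictive_value(sensitivity, specificity, base_rate):
    """Bayes' rule: P(actually lying | test says 'deceptive')."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# A test correct 90% of the time for both liars and truth-tellers:
for base_rate in (0.50, 0.30, 0.15):
    ppv = positive_predictive_value(0.90, 0.90, base_rate)
    print(f"base rate {base_rate:.0%}: PPV = {ppv:.1%}")
```

At a 50% base rate the positive predictive value matches the nominal 90% accuracy, but at a 15% base rate roughly four in ten "deceptive" verdicts would be false alarms, which is the sense in which laboratory accuracy figures can mislead.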
Techniques designed to defeat the accuracy of the polygraph, so-called countermeasures, tend to fall into two basic groups: the suppression of reactions to the relevant questions, and the simulation of inflated reactions to the control questions. Aldrich Ames, who was convicted in 1994 of spying for the Soviet Union, had passed polygraph tests prior to his admission of guilt by resting and relaxing before the exam, developing a rapport with the examiner, and trying to remain calm during the test (Permanent Select Committee on Intelligence, 1994). His approach likely decreased (or at least held steady) his SNS arousal throughout; he therefore managed to avoid detection. Other strategies include practicing controlled breathing during the relevant questions while inducing tachycardia during the control questions, for example by thinking of something frightening or exciting or by self-induced injury (e.g., a pin prick with a concealed sharp object). In this scenario, conventional polygraph analysis will not show a significant reaction to any of the relevant questions and the subject will pass the test.
In practice, the countermeasures described above might result in a respondent being judged as honest. However, other traces of deception may still be detectable. For instance, an individual may be able to suppress changes to his respiratory rate, which is readily monitored and adjusted, while still manifesting changes in other variables, such as skin conductance or pupil diameter. Bailey (2010) suggests that while some individuals are capable of reducing or eliminating a subset of the polygraph response, the volitional control of one signal can result in augmentation of other signals, often to a degree more pronounced than expected. However, this claim has not been investigated or confirmed in the literature. Finally, exaggerated responses to control questions, produced with concealed pin pricks, rapid breathing, or other simulated affective displays, may lead to detectable changes in the arousal response. This, too, has not been addressed in the literature.
Improving the Polygraph Examination?
Even using conventional polygraph analysis, the utility of the examination can be greatly improved by adhering to key principles and heuristics. In discussing his Guilty Knowledge Test, Dr. Lykken (1998) recommended that for each probe (i.e., question of interest or RQ), an innocent subject should have no more than a 20% chance of testing positive while a guilty subject should have at least an 80% chance of testing positive. Each probe should have five alternatives that are equally plausible to an innocent subject, but, to the guilty subject, are easily distinguishable from the probe and from each other, so that the guilty subject is not confused about which alternative he or she has seen before. In addition, the fact that a guilty subject recognizes a probe “must seem important to the guilty subject,” a condition that should always be realized in a high-stakes criminal investigation context but may not be in laboratory experiments.
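Lykken's thresholds follow directly from the design: with five equally plausible alternatives per probe, an innocent subject has only a one-in-five chance of reacting most strongly to the correct item, and requiring consistent hits across several probes drives the innocent false-positive rate down rapidly. A minimal sketch of that arithmetic (illustrative only; the independence assumption and the function name are ours, and actual GKT scoring is more involved):

```python
from math import comb

def innocent_false_positive(n_probes, n_alternatives=5, threshold=None):
    """Chance an innocent subject reacts most strongly to the correct item
    on at least `threshold` of `n_probes` probes, assuming the alternatives
    for each probe are equally plausible (a 1/n_alternatives hit chance)
    and probes are independent (binomial model)."""
    if threshold is None:
        threshold = n_probes  # require a hit on every probe
    p = 1 / n_alternatives
    return sum(comb(n_probes, k) * p**k * (1 - p)**(n_probes - k)
               for k in range(threshold, n_probes + 1))

# One probe with five alternatives: exactly Lykken's 20% ceiling.
print(round(innocent_false_positive(1), 6))   # 0.2
# Requiring hits on all of four probes drives false positives far lower.
print(round(innocent_false_positive(4), 6))   # 0.0016
```

This is why Lykken insisted that the alternatives be equally plausible to an innocent subject: any probe whose correct answer can be guessed raises the per-item hit probability above 1/5 and inflates the false-positive rate accordingly.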
F. Lee Bailey, an internationally recognized trial lawyer and life member of the American Polygraph Association, has suggested that an interpretable polygraph examination requires three essential features: (1) the examinee has specific personal knowledge of the information being investigated; (2) the examinee believes or knows his own statements disavowing this knowledge to be false; and (3) the examinee is sufficiently motivated by the outcome of the examination that he will either lie or face some tangible negative consequence (Bailey, 2010). Bailey concedes that scenarios exist in which these conditions are not met and, therefore, a polygraph examination is inappropriate and would be non-interpretable. For additional information on the polygraph and administration, refer to the American Polygraph Association (www.polygraph.org).
The Future of the Polygraph
Although research on the polygraph is inconsistent and the instrument has still not gained general acceptance in the scientific community, it continues to be used by law enforcement officials, security teams, and attorneys. The conviction with which these individuals rely on polygraph-based methods in high-stakes settings indicates that one cannot easily dismiss the potential utility of the approach. Thus, while results of polygraph examinations have had limited success in meeting courtroom evidentiary standards, they have likely affected the outcome of countless cases indirectly through altered testimony, facilitation of other forms of evidence, and out of court settlements.
As such, the polygraph represents an important method for lie detection, both in facilitating our understanding of the biological correlates of lying and in leveraging new technological development on its basic principles. In recent years, technological advances have yielded novel methodologies to identify deception, moving away from reliance on the SNS. One such instrument is functional magnetic resonance imaging (fMRI), a neuroimaging technique that uses a magnetic field to track changes in oxygenated blood as a proxy for neuronal activation in the brain. Research using fMRI has identified enhanced activity in brain regions responsible for executive functioning, working memory, and integration of information during deception relative to truthfulness (Christ, Van Essen, Watson, Brubaker, & McDermott, 2008). Nevertheless, the fMRI technique has its own inherent limitations and, like the polygraph, has thus far been deemed inadmissible in the courtroom (Farah, Hutchinson, Phelps, & Wagner, 2014; United States v. Semrau, 2012).
Hence, the search for a perfect lie detector continues. Although our technological advances have been unprecedented in the last century, the legal perspective on allowing experts and machines to decipher lying from truth telling has remained unchanged. As noted by Justice Thomas in United States v. Scheffer (1998):
A fundamental premise of our criminal justice system is that ‘the jury is the lie detector.’ United States v. Barnard, 490 F.2d 907, 912 (CA9 1973) (emphasis added), cert. denied 416 U.S. 959 (1974). Determining the weight and credibility of witness testimony, therefore, has long been held to be the ‘part of every case [that] belongs to the jury, who are presumed to be fitted for it by their natural intelligence and their practical knowledge of men and the ways of men.’ Aetna Life Ins. Co. v. Ward, 140 U.S. 76, 88 (1891).
Some may find comfort in knowing that secrets of the mind will continue to be outside the purview of the courts, at least for now.
Ekaterina Pivovarova, Ph.D. is a Post-doctoral Research Fellow at the Center for Law, Brain and Behavior [www.clbb.org] at Massachusetts General Hospital and a Neuroscience Research Fellow at Harvard University. Dr. Pivovarova is also a clinical forensic psychologist in private practice. Her research interests are in detection of malingering and clinical decision-making.
Judith G. Edersheim, J.D., M.D. is a Co-Founder and Co-Director of the Center for Law, Brain and Behavior. She is an Assistant Clinical Professor of Psychiatry at Harvard Medical School, a senior consultant to the Law and Psychiatry Service at Massachusetts General Hospital, a member of the Bar of the Commonwealth of Massachusetts, and a Board Certified Forensic Psychiatrist. Dr. Edersheim's research interest is in the translation of psychiatric and neurological behaviors into legal settings.
Justin Baker, M.D., Ph.D. is an Instructor in the Department of Psychiatry at Harvard Medical School and Associate Director of the Center for Law, Brain and Behavior at Massachusetts General Hospital. His research focuses on brain imaging studies in psychotic disorders. He also has a long-standing interest in testing the limits of mind-reading using any and all available technologies.
Bruce H. Price, M.D. is Co-Founder and Co-Director of the Center for Law, Brain and Behavior. He is the Chief of the Department of Neurology at McLean Hospital and an Associate Professor in Neurology at Massachusetts General Hospital and Harvard Medical School. His research interests include the cognitive and behavioral consequences of neurological and psychiatric diseases, brain dysfunction in violent and criminal behavior, and the intersection of medicine, law, and ethics.
Bailey, F. L. (2010, January 21). Comments during the Center for Law, Brain and Behavior’s “The Measure of Truth and Deception: The Past and Future of Lie Detection” event.
Christ, S. E., Van Essen, D. C., Watson, J. M., Brubaker, L. E., & McDermott, K. B. (2008). The Contributions of Prefrontal Cortex and Executive Control to Deception: Evidence from Activation Likelihood Estimate Meta-analyses. Cerebral Cortex, 19(7), 1557–1566. doi:10.1093/cercor/bhn189
Farah, M. J., Hutchinson, J. B., Phelps, E. A., & Wagner, A. D. (2014). Functional MRI-based lie detection: scientific and societal challenges. Nature Reviews Neuroscience, 15(2), 123–131.
Frye v. United States, 293 F. 1013 (D.C. Cir. 1923).
Grubin, D. (2010). The Polygraph and Forensic Psychiatry. Journal of the American Academy of Psychiatry and the Law Online, 38(4), 446–451.
Horvath, F., & Palmatier, J. J. (2008). Effect of Two Types of Control Questions and Two Question Formats on the Outcomes of Polygraph Examinations. Journal of Forensic Sciences, 53(4), 889–899. doi:10.1111/j.1556-4029.2008.00775.x
Keeler, L. (1930). A Method for Detecting Deception. American Journal of Police Science, 1, 38.
Lykken, D. T. (1960). The validity of the guilty knowledge technique: The effects of faking. Journal of Applied Psychology, 44(4), 258.
Lykken, D. T. (1998). A tremor in the blood: Uses and abuses of the lie detector (Vol. xvi). New York, NY, US: Plenum Press.
Mittenberg, W., Patton, C., Canyock, E. M., & Condit, D. C. (2002). Base Rates of Malingering and Symptom Exaggeration. Journal of Clinical and Experimental Neuropsychology (Neuropsychology, Development and Cognition: Section A), 24(8), 1094–1102. doi:10.1076/jcen.24.8.1094.8379
Myers, B., Latter, R., & Abdollahi-Arena, M. K. (2006). The Court of Public Opinion: Lay Perceptions of Polygraph Testing. Law and Human Behavior, 30(4), 509–523. doi:10.1007/s10979-006-9041-0
Offe, H., & Offe, S. (2007). The comparison question test: Does it work and if so how? Law and Human Behavior, 31(3), 291–303. doi:10.1007/s10979-006-9059-3
Permanent Select Committee on Intelligence, U. H. of R. (1994). Report of Investigation: The Aldrich Ames Espionage Case. DIANE Publishing.
Rosenfeld, B., Sands, S. A., & Van Gorp, W. G. (2000). Have we forgotten the base rate problem? Methodological issues in the detection of distortion. Archives of Clinical Neuropsychology, 15(4), 349–359.
United States v. Scheffer, 523 US 303 (1998).
United States v. Semrau, 693 F.3d 510 (6th Cir. 2012).
Adam B. Shniderman responds: Adam B. Shniderman is a doctoral candidate in Criminology, Law and Society at the University of California, Irvine. He specializes in the use of scientific evidence in courts, focusing on neuroscientific evidence, and following completion of his Ph.D. this spring, he will be an Assistant Professor of Criminal Justice at Texas Christian University.
The authors' excellent primer provides insight into the process of a polygraph examination, the purported legal and scientific concerns that have kept polygraph evidence out of courtrooms in many jurisdictions, and the future of lie detection. It is worth noting that, contrary to popular belief, only 29 states have articulated a per se ban, while 15 states have precedent that allows for the admission of polygraph evidence at the stipulation of both parties (see Shniderman, 2012, p. 442, for a list of these jurisdictions). New Mexico stands alone as the only jurisdiction to treat polygraph testimony like other types of evidence.
As the authors note, in those jurisdictions that do exclude polygraph evidence, the objections fall into two general categories: 1) validity and reliability concerns (scientific concerns) and 2) usurpation of the jury function (strictly legal/policy concerns).
Courts' claims that the polygraph would usurp the jury function are often ambiguous and imprecisely articulated. However, these claims can be divided into two general categories. First, some courts have implied that assessing the credibility of a witness is solely within the province of the jury, and that it would be impermissible to let an expert testify about the matter. The Louisiana Court of Appeals articulated this justification for excluding polygraph evidence: “The polygraph has been coined as a ‘lie detector.’ In other words, its very purpose serves to determine whether a person is telling the truth. In our legal system, this function is precisely within the trier of fact's role” (Evans v. DeRidder, 2002).
Second, courts have expressed concern that jurors would be overwhelmed by the expert examiner's credibility assessment. Courts fear that jurors would simply substitute the expert's judgment for their own in a matter that is essential to any trial: witness credibility assessment. As Judge Gibson wrote in United States v. Alexander, “[w]hen polygraph evidence is offered in evidence at trial, it is likely to be shrouded with an aura of near infallibility, akin to the ancient oracle of Delphi” (1975). This concern has continued to appear in court decisions decades later.
Courts' concerns about error rates, the ecological validity of laboratory studies on polygraph evidence, and its lack of general acceptance are often coupled with more general concerns about the validity of the scientific foundation for polygraph examinations. As the authors point out, the polygraph does not directly detect lies. Instead, it operates on the assumption that certain physiological responses occur in an individual when he or she lies (e.g., changes in heart rate, blood pressure, respiration rate, and galvanic skin response). However, these changes may be the result of physiological processes not associated with lying, resulting in false positives.
In this respect, brain-based lie detection technologies are claimed to have an advantage. These emerging technologies detect lies at the source. Two companies began offering brain-based lie detection services in 2006, No Lie MRI of San Diego, CA and Cephos Corp. of Tyngsboro, MA. However, brain-based lie detection has fared no better in the courts than the polygraph. Thus far, fMRI lie detection has been deemed inadmissible in the two cases in which it has been offered (United States v. Semrau and Wilson v. Corestaff Services L.P.). These two courts have expressed many of the same concerns courts have long expressed regarding the use of polygraph evidence. Additionally, there are some troubling new hurdles to overcome. Perhaps most notably, fMRI lie detection examinations cannot identify any particular question to which the subject has responded truthfully or deceptively. In Semrau, Dr. Steven Laken of Cephos Corp. testified that he could not identify any specific questions that Dr. Semrau had answered truthfully or deceptively; he could only provide an assessment of Dr. Semrau's overall truthfulness (see Magistrate Tu Pham's opinion, p. 19). This limitation would seem to undermine the very purpose of using these technologies.
The real question, however, is whether any of these legal or scientific concerns matter for the future of lie detection in courts. Will scientific progress turn the legal tides on lie detection evidence?
I opened with reference to the purported legal and scientific concerns that have kept various lie detection technologies out of court. Ultimately, I agree with the authors that lie detection is unlikely to see the inside of the courtroom any time soon. However, I reach this conclusion for different reasons.
In an in-depth analysis of the justifications for excluding polygraph (and by extension fMRI lie detection), comparing polygraph evidence with several other routinely admitted forensic techniques, I concluded that these legal and scientific justifications may simply provide cover for larger cultural phenomena: a systemic bias against defendants and a contentious relationship with lie detection (see, Shniderman, 2012 for this analysis and discussion).
For example, for all the criticism of the unknown and inconsistent error rates for polygraph examination, little is known about the error rates of latent print identification. Even when faced with known cases of misattribution (e.g., Brandon Mayfield), examiners claim a zero error rate. Yet scholars are just beginning to test examiners' ability to correctly identify prints using experimental methods. In spite of this and other shortcomings, latent print comparison is heralded as second only to DNA examination. Recent endeavors to exclude latent print analysis on the same grounds that have served to exclude polygraph evidence have failed. Courts simply accept latent print examination on trust (Cole, 2009).
With respect to general acceptance, courts addressing the admissibility of polygraph have taken a markedly different stance about who constitutes the “relevant scientific community” than they have for other forensic disciplines. For polygraph evidence, the relevant community has been construed broadly to include physiologists, psychiatrists, neurophysiologists and examiners – with examiners’ opinions mattering the least (Iacono & Lykken, 2002). In contrast, the courts have so narrowly defined the relevant scientific community for latent print examination that it includes only examiners. Courts have actively endeavored to exclude the opinions of others regarding fingerprint evidence (Cole, 2008).
Ultimately, the significant distinction between polygraph evidence and other forms of forensic scientific evidence is the party seeking to admit the evidence. The nature of the evidence and the constitutional protections for criminal defendants, make polygraph the only forensic evidence almost exclusively offered by the defense. Additionally, Americans have an uncomfortable relationship with lie detection technology. While we have long had an obsession with the ability to detect lies (Alder, 2007), we become uncomfortable at the idea of other people knowing our thoughts. Because of these deep-seated cultural feelings, it seems unlikely that lie detection will ever regularly make its way into courtrooms. Justice Hans Linde of the Oregon Supreme Court stated it best:
I doubt that the uneasiness about electrical lie detectors would disappear even if they were refined to place their accuracy beyond question. Indeed, I would not be surprised if such a development would only heighten the sense of unease and the search for plausible legal objections (State v. Lyon, 1987, p. 234-35).
Alder, K. (2007). The lie detectors: The history of an American obsession. Simon and Schuster.
Cole, S. A. (2008). Comment on Scientific Validation of Fingerprint Evidence under Daubert. Law, Prob. & Risk, 7, 119.
Evans v. DeRidder Municipal Fire, 815 So. 2d 61, 67 (La. 2002)
Iacono, W. G., & Lykken, D. T. (2002). The scientific status of research on polygraph techniques: The case against polygraph tests. In D. L. Faigman, D. H. Kaye, M. J. Saks, & J. Sanders (Eds.), Modern scientific evidence: The law and science of expert testimony (Vol. 2, pp. 483-538). St. Paul, MN: West.
Shniderman, A. B. (2012). You Can’t Handle the Truth: Lies, Damn Lies, and the Exclusion of Polygraph Evidence, Albany Law Journal of Science and Technology, 22, 433-473.
State v. Lyon, 744 P.2d 231 (Or. 1987).
United States v. Alexander, 526 F.2d 161 (8th Cir. 1975).
United States v. Semrau, 693 F.3d 510 (6th Cir. 2012).
Wilson v. Corestaff Services L.P., 900 N.Y.S.2d 639 (Sup. Ct., N.Y. County 2010)
Holly G. VanLeuven, M.A., is President of Genesis Group, based in Concord, NH. She has been a nationally active, full-service trial consultant since 1972, when trial consulting was in its infancy, coming to the field from a background in conflict management, small-group decision-making, civil disorder mediation, and negotiation strategies.
The Supreme Court has just heard arguments in Susan B. Anthony List v. Driehaus concerning the right to sue over claims of intentional lying in political campaigns.
Earlier, as I prepared to write this review, I googled “detecting lies”. 11,400,000 results popped up.
When I googled “quotes about lies,” there were 80,800,000 results.
Lies are clearly part of the fabric of our lives, but how do we really know when we are dealing with one? By their very nature, lies are deceptive, and we can safely assume that the delivery agents of lies, human or non-human, are deceptive as well. So how can we tell for sure what is a lie and who is a liar? This is particularly relevant in litigation. In life we pretty much make up our own minds about lies and liars. In law the ultimate trier of fact is the jury and, sometimes, the judge. They are charged with the responsibility of sorting out the truth. Into this milieu, the polygraph – the lie detector – arrives!
The polygraph device, its operators, its input, and its output have been much maligned throughout history. The perennial questions about polygraph results remain:
- Are they valid? To what extent are the assessments accurate?
- Are they reliable? To what extent are the assessments consistent?
The research reported here traces the usage of the polygraph through time and cites numerous studies that inquire into the answers to those perennial questions. No matter how they are sliced and diced, the answers to both questions fall somewhere from “inconclusive” to “no.” In the end, polygraph results should be considered neither accurate nor consistent. Yet the authors of this research report state in summary that “while results of polygraph examinations have had limited success in meeting courtroom evidentiary standards, they have likely affected the outcome of countless cases indirectly through altered testimony, facilitation of other forms of evidence, and out of court settlements.”
There is no point re-hashing the excellent summary of the state of Lie Detectors in the 21st century except to point out that they still have utility in spite of their well-documented limitations. The original Lie Detectors, human beings, also still play a critical role in determining what is true and what is false in spite of their well-documented limitations. After all, seated as the jury, human lie detectors remain the triers of fact. Their decisions and judgments, unlike the findings of the widely inadmissible mechanical Lie Detectors, are generally supported and sought after by the Court.
So why bother with a mechanical Lie Detector at all?
To me, the truth is often best seen as a puzzle to be revealed piece by piece. Lie Detector results can be a piece in that puzzle, sometimes fitting perfectly, sometimes not so much. A wise and successful Michigan criminal attorney tells me that he likes to use them, warts and all, because often, if his client passes, the charges are dropped. And if the client fails to pass, the results aren't admissible in his jurisdiction anyway. When I googled "Lie Detector" and brought up all those results, I reviewed many of them. There were many, many suggestions on how to determine whether someone is lying, and many of them were useful and probably about as valid and reliable as the Lie Detector and human beings. I recommend them to you. My personal preference in detecting many types of lies is graphology. In the hands of a skillful graphologist, the lie and the liar can be found out very quickly. However, it must be said that those findings become just another piece in the puzzle on the way to finding truth. To quote from Canadian Poet Laureate Bliss Carman,
I often wish that I could save the world from the tyranny of facts. What are facts but compromises? Facts merely mark the point at which we have agreed to let the investigation cease.
Sometimes solving the puzzle of what is true and what is false requires only one piece, sometimes many. In litigation, the additional pressures of economies of time and other resources demand going with fewer, not more, pieces. How often does it come down to a pure gut feeling? My guess is very often, and that is when you really need to have team members with finely tuned guts! That is what a good trial consultant is all about: bringing the best elements of the art and the science of trial consulting to a case. I imagine we have all had clients who have hired us simply to be able to say that they have us on the team, "a gun for hire," possibly jarring the opposition into a more favorable negotiating position. That's not what we are there for, but it is sometimes how we are utilized…an implied threat! Today's human lie detectors, the members of the jury, bring into the courtroom not only their own common sense but also storylines from popular media such as "The Good Wife," "NCIS," "Blacklist," "Court TV," and "Scandal," as well as movies, novels, newspapers, and TV news. These people know "the rest of the story." In fact, they know the rest of many stories and will be astute judges of truths and lies as a consequence.