In The Moral Landscape: How Science Can Determine Human Values, Sam Harris raises the possibility of an accurate lie detector based on neural imaging: a machine that could reliably determine whether a statement someone makes reflects their actual belief on the matter at hand.
Harris discusses the social consequences of the existence of such a machine, and generally thinks they would be positive. They would, for instance, reduce the number of false convictions and false acquittals in the criminal justice system.
Personally, I think the social and cultural effects of such a machine would be extremely widespread, if there were general confidence in its accuracy. Inevitably, there would be calls to test how genuinely all sorts of people feel about things. Does this proposed Catholic bishop really believe in key elements of Catholic doctrine? Does this politician honestly intend to fulfill a particular promise? Does the man who just proposed marriage to a woman really think she is the most beautiful woman he has seen? Does he really want children? Does he really intend to stay with her into old age? Has he been entirely faithful during their courtship? Would he have taken the opportunity to sleep with someone else, if it had arisen?
Of course, the machine could then be turned on the other partner.
If it ever became culturally acceptable to subject people to impartial evaluation on these sorts of questions, it would have countless direct and indirect effects. For one thing, I think it would make hapless pawns more important. Rather than having cynical mob lawyers who know all about the family’s murders but exploit the legal system in every possible way regardless, there would need to be a lot more genuinely ignorant people defending important individuals and institutions. Similarly, corporate CEOs would no longer be able to hedge strategically to avoid liability, which could significantly affect the safety and availability of many products in the long term. For instance, people would have a lot more trouble selling placebos as medicine.
To a large extent, I think society is based around the general acceptance of various kinds of lies. If the people who ran or represented the world’s governments, churches, and corporations had to be scrupulously truthful at all times, the public understanding of how the world operates would change radically. I don’t think this is because people are terribly ignorant about reality. Rather, it is because there are many deceptions we are comfortable accepting: for instance, that we are already doing an adequate amount to help those who are starving around the world; that our governments do not commit war crimes or contribute to genocides; that our meat doesn’t get produced in exceptionally cruel ways; and so forth.
There would also be small-scale consequences. To me, it seems that politeness is fundamentally bound up with deception. At the very least, ‘being polite’ requires withholding genuinely held beliefs that would be offensive to other parties in a conversation. At most, it requires actively lying to them. The existence of an effective and credible lie detector would strip people of the ability to be polite. It is possible that this would be liberating – allowing people to express themselves without fear and giving a better view into the real thoughts of others. It is also possible it would be devastating: breaking up businesses, families, and long-standing marriages when people learn things that they simply cannot handle – especially with the full knowledge that they are true (or as much confidence as the accuracy of the equipment allows).
All this relates to some of the issues raised by the film The Invention of Lying, which I commented on before. To have any hope of surviving in this world, we need to be able to accept the possibility that a person could be wrong about something. When someone says that the elevator has arrived, we check before stepping through the open doors into the elevator shaft. Even a perfect lie detector would do nothing to protect us from honestly mistaken beliefs. What it would probably do is have profound social and cultural effects, as a huge number of people found themselves in a position where they either had to submit to the test or, by refusing, foster the widespread view that they aren’t genuine in the claims they are making.
Even if such machines existed, they would probably be used rarely.
How often are polygraph machines used now? Not very often, and that may be partly because they lack accuracy, but it is also partly because people rightly see them as a major intrusion.
One place they would probably be used is confirmation hearings for potential American Supreme Court justices.
The likely consequence of that is that nobody could be appointed to the bench, since every possible candidate believes something that would be unacceptable to either Democrats or Republicans.
It’s fair enough that people wouldn’t use such a device in ordinary conversations. I do wonder – though – what effect the simple possibility of testing someone’s honesty with a high degree of certainty would have.
It would be kind of thrilling if it stripped away much of the evasion and soft deception of life, and made the whole world a more candid place.
Harris here makes a mistake because he doesn’t know what belief is – which is unsurprising; it’s a quite poorly understood phenomenon. Belief is not as simple as “I believe X”, and it is not reducible to those beliefs about which I have no or fewest doubts. Some beliefs about which I am more doubtful are also ones about which I am more confident. This is because truth is fundamentally not subject to non-contradiction – the way words or systems accord with the world is holistic, and we sense this intuitively. This is obvious when talking about systems as a whole (e.g. Newtonian mechanics is false, but it has some truth to it; repeat for any other system), but it is just as true when talking about particular “facts” or “values” which are factual/valuable given a system or over-arching goal.
Zizek has a good and very readable book on belief, which makes related but not identical arguments. It is called “On Belief” and is easily available on Google Books or other, less legal, online PDF sites.
Regardless of how complicated the process of belief is, it might be possible to reliably detect the processes of deception in the brain. It seems plausible that the act of crafting an untrue statement has neurological hallmarks that might be predictably identified using techniques like functional magnetic resonance imaging (fMRI).
I do not think polygraphs are widely used. I know of only one polygraph operation in the Vancouver area: a couple performing this task for hire. As they are generally available to do it on short notice, this would appear to suggest they are not overly busy.
I have used it about ten times. I have found it most useful when two people disagree on a fact or on how something occurred. I ask my client to undergo a polygraph and, when the results support that person’s view, I present it to the other side. I think it is effective for three reasons: its ostensible objectivity; that it is rarely used and therefore stands out; and that the other side has yet to take up the suggestion of having their own client do it (at least to my knowledge).
On a few occasions I have also suggested that the other side take a polygraph as a way to resolve a factual dispute, when I am confident that their view is not correct. I do not recall that invitation ever being accepted.
fMRI deception detection will only work insofar as we understand what belief and deception actually are, phenomenologically. If we cannot distill the different experiential categories, we will not have the right guiding precepts for research into the physical categories, and we will remain in endless mud. For instance, if “deception” actually refers to several different and potentially unrelated mental activities, then it might actually be impossible to develop fMRI deception detection without first knowing what deceptions are.
Deception, for instance, seems to be always intentional – that means there is always an object of deception (someone/something you are trying to deceive). That might not be true, but if it is true then self-deception might depend on some form of self-fracturing, or perhaps an initial non-unity of personal identity (or rather, non-identity). Moreover, we are not actually unified selves across time, so our basic notions of personal identity require a base level of self-deception. Is that deception the same kind of deception as self-deception about religious beliefs? And will that form of deception bear the same fMRI signatures as deceiving others? And are there not different ways of deceiving others – for instance, isn’t there a difference between deceiving someone about values and deceiving them about facts? And might there be a difference between deceiving others about your opinions of facts, compared to your values?
Insofar as all these different activities are different processes, we can’t assume that this analysis can be ignored or bypassed by any experimental psychology/neurology.
There is no guarantee that the sort of device Harris envisions is actually possible.
fMRI looks at blood flow, so it isn’t a terribly fine-grained tool for looking at what small numbers of neurons are doing.
Still, I think the existence of a lie detecting machine that is generally believed to be accurate could have a considerable effect on quite a number of institutions. Organized crime groups, for instance, would have a much easier time detecting informants.
“that is generally believed to be accurate”
This would, in the end, not be a fact but a value. We already have lie detecting machines, but due to their low accuracy we do not value them as “accurate enough”. The point at which improved accuracy becomes “accurate enough” is a value judgement about the relative value of certainty with respect to other goals (such as justice or effectiveness).
And, if Harris wants to reduce this value to a ‘fact’, then the fact is a fact about human well-being with respect to accepting this value. And not actual human well-being (which is unknowable), but perceived human well-being from the perspective of the one deciding whether or not to value the relative certainty of the machine as “good enough”.
There is nothing new, nothing “scientific” about this approach that is lacking in the way we cope with existing means of evaluating the relative accuracy and value of different standards. It simply means “cope as best we can with what’s available”.
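To make the threshold point concrete, here is a minimal illustrative sketch in Python – with invented numbers, not anything drawn from Harris or from real fMRI studies – of why a fixed accuracy figure cannot, on its own, settle whether a detector is “accurate enough”: what a “deceptive” result actually means depends heavily on how common lying is among the people being tested.

```python
# Hypothetical example: how often a "deceptive" verdict is a false accusation,
# for an imagined detector that is right 95% of the time either way.
# All numbers are invented for illustration.

def false_accusation_rate(sensitivity, specificity, base_rate_of_lying):
    """Share of 'deceptive' verdicts that fall on truthful people."""
    true_positives = sensitivity * base_rate_of_lying
    false_positives = (1 - specificity) * (1 - base_rate_of_lying)
    return false_positives / (true_positives + false_positives)

for base_rate in (0.5, 0.1, 0.01):
    rate = false_accusation_rate(0.95, 0.95, base_rate)
    print(f"lying base rate {base_rate:.0%}: "
          f"{rate:.0%} of 'deceptive' verdicts are false accusations")
```

With these made-up numbers, about 5% of “deceptive” verdicts are false accusations when half of the subjects are lying, but roughly 84% are when only 1% are lying. Whether either figure is tolerable depends on what the test is being used for, which is exactly the kind of judgement described above.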
When a powerful new tool becomes available, it changes situations. In the event that someone developed an accurate and convenient device for detecting deception, I think it would become generally accepted as accurate as an increasing quantity of experimental evidence accumulated.
My point is, this is not in any way an interesting insight. It’s trivially true, and it’s not true in a way that revolutionizes the relationship between value and science. In fact, it isn’t logically any different from the invention of the lie detection tests that already exist.
Has anyone else had experience utilizing polygraphs (lie detectors)?
From what I have read, they aren’t very credible. Even if they aren’t accurate, they may have some value. For instance, knowing they will need to take a polygraph might deter some people from applying for jobs involving very sensitive information.
The times that I have used polygraphs have been for very specific factual disputes between two opposing parties. Examples are:
1. Did you tell the insurance agent that you were a pizza deliverer?
2. Did you run across the intersection?
3. Were you aware that there was a rodent infestation when you sold the house?
I have been told by the person who administers the polygraph that it is not valuable for identifying a person’s values or belief systems, or when the set of facts is too complicated.
That whole section of the linked Wikipedia article is relevant. In particular, the danger that people subjected to a polygraph could change the result by artificially augmenting responses to control questions.
That said, agreeing to take such a test can be interpreted as a signal that the subject is being truthful.
My use of the polygraph has generally been to ask my own clients to take the test on a factual question. I interpret their willingness to do so as a factor that supports them being truthful. After my client undergoes and passes the test, I suggest to the opposing side that their client do the same. I have yet to have that offer accepted. That further solidifies my (and I expect the opposing side’s) belief in the truthfulness of my client.
Imagine you were an attorney on the other side. You believe that your client is telling the truth, and the other side suggests they take a polygraph. You do some research and see that the machines are generally regarded as error-prone and not credible.
As such, it would seem risky to advise your client to take one. Their nervousness might be misinterpreted as deception, weakening the strength of your case.
As such, reluctance to take such a test might be well-justified caution, rather than a signal of a lack of confidence in someone’s truthfulness.
But it’s clever if you get your client to take one first. If he passes it, you can say “Hey, look. My client took this polygraph and aced it. Your turn!” If he fails, you quietly sweep it under the rug.
Also, if the people running the machine know which side is paying them, they could be driven – consciously or not – to interpret the results of the test in a way that favours their client.
It would be better if the operators were hired by a third party or did not know which side in the legal dispute was paying them.
But even in daily life, without the particular pressures of politics, people find it hard to spot liars. Tim Levine of the University of Alabama, Birmingham, has spent decades running tests that allow participants (apparently unobserved) to cheat. He then asks them on camera if they have played fair. He asks others to look at the recordings and decide who is being forthright about cheating and who is covering it up. In 300 such tests people got it wrong about half of the time, no better than a random coin toss. Few people can detect a liar. Even those whose job is to conduct interviews to dig out hidden truths, such as police officers or intelligence agents, are no better than ordinary folk.
Evolution may explain credulity. In a forthcoming book, “Duped”, Mr Levine argues that evolutionary pressures have adapted people to assume that others are telling the truth. Most communication by most people is truthful most of the time, so a presumption of honesty is usually justified and is necessary to keep communication efficient. If you checked everything you were told from first principles, it would become impossible to talk. Humans are hard-wired to assume that what they hear is true—and therefore, says Mr Levine, “hard-wired to be duped”.
‘Our notion of privacy will be useless’: what happens if technology learns to read our minds?
https://www.theguardian.com/technology/2021/nov/07/our-notion-of-privacy-will-be-useless-what-happens-if-technology-learns-to-read-our-minds