Lying is easy.
This statement is not just based on anecdotal evidence. Bond and DePaulo (2006), Vrij (2008) and others document that the ability of a listener to evaluate whether a statement is a lie is surprisingly poor. It is poor even for people who believe that they are good lie detectors (e.g., those who say “I look them in the eye”…) and it is also poor for people who are trained to be professional lie detectors, like police interrogators or judges.
The word “surprisingly”, in “surprisingly poor”, refers not only to the surprise of us outside observers, or to the surprise of the listener who does not know that she can be deceived. It also refers to the talker. Only a few people know how well they can lie and how easy it is to get away with it (Gilovich et al., 1998). But some people do know. And it is, of course, very tempting to bend the truth.
At the time of writing this blog entry, the work of two prominent behavioral scientists – Dan Ariely and Francesca Gino – is being re-examined by the academic community and even by their co-authors: both are publicly accused of having made false statements about their data with the intention of furthering their own careers (Scheiber, 2023; Lewis-Kraus, 2023). They may have made such false statements over quite a long period, during which their co-authors, and indeed everyone else, were entirely unsuspecting.
The question arises why we are so easily deceived. Grabova et al. (2023) investigate the role of the listener’s second-order beliefs: does the listener think about the thoughts of the talker? More precisely, they study a laboratory situation where a talker’s statement may be a lie, and where the listener reports whether she believes that the talker believes that he can mislead her. Participants acting in the role of the listener hold different second-order beliefs, and the nature of best-response behavior – theoretically – leads to an interesting correlation: participants who believe that the talker believes that he can mislead them should be less likely, not more likely, to follow the statement.
The data, however, show the opposite pattern: a significantly positive correlation between second-order beliefs and one’s own credulity. This indicates that in their attempt to detect a lie, the listeners do not think the incentives through. (In contrast, the talkers report second-order beliefs that confirm the theoretical predictions.) The result casts doubt on a sophisticated view of listening behavior. At least in the unfamiliar, one-off communication game of Grabova et al. (2023), the listeners appear not to engage in even two steps of best-response reasoning. It is also noteworthy that the experiment makes the logic of the game very easy to understand. The game is highly stylized and makes it blatantly obvious what sending and receiving statements means, and what the corresponding beliefs mean. Here, Grabova et al. (2023) follow a beautifully simplified design by Peeters et al. (2015). The simplicity makes it all the more striking that listeners often do not think through the talker’s incentives, and the result may help explain lie-detection failures in other, more complex environments as well.
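The best-response logic can be illustrated with a deliberately stylized sketch. This is not the actual design of Grabova et al. (2023) or Peeters et al. (2015); it only assumes a bare-bones sender–receiver game in which a talker who believes he can mislead the listener prefers to lie, and shows why a listener holding the corresponding second-order belief should then refuse to follow the statement:

```python
# Stylized sender-receiver game (an illustrative assumption, not the
# experimental design): the talker observes the true state and sends a
# message. A talker who believes he can mislead the listener is better
# off lying; one who believes he cannot mislead her tells the truth.

def talker_message(true_state: bool, believes_he_can_mislead: bool) -> bool:
    """The message the talker sends about the state."""
    # If the talker expects to be believed, lying is his best response.
    return (not true_state) if believes_he_can_mislead else true_state

def listener_best_response(message: bool, second_order_belief: bool) -> bool:
    """The listener's best guess of the state, given her belief about
    whether the talker believes he can mislead her."""
    # Two steps of reasoning: if she thinks the talker thinks he can
    # mislead her, she should treat the message as a lie and invert it.
    return (not message) if second_order_belief else message

# A listener with the second-order belief "he thinks he can mislead me"
# should NOT follow the statement - and thereby recovers the true state.
for state in (True, False):
    msg = talker_message(state, believes_he_can_mislead=True)
    assert msg != state  # the talker lies
    guess = listener_best_response(msg, second_order_belief=True)
    assert guess == state  # best response: do the opposite of the message
```

In this toy world the correlation predicted by theory is negative: the stronger the listener’s second-order belief that the talker thinks he can mislead her, the less she should follow the statement. The experimental finding of a positive correlation is what indicates that real listeners do not carry out these two steps of reasoning.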
What does this imply about Ariely’s and Gino’s cases of alleged fabrication? Clearly, it does not imply anything about their guilt. But it does show, once again, how easy it is to be deceived. The press reports document that Ariely and Gino were sometimes less than forthcoming about the details of their data generation. This alone is nothing unusual in the behavioral sciences, where the researcher is often the proprietor of the data. However, it does require a large amount of trust in the integrity of the data generation process. The “listeners” of Ariely/Gino – here, colleagues in the academic community, and the interested public – behave as if they simply assumed that the researchers’ goal was to improve knowledge. If finding the truth is the name of the game, then there is no reason to distrust: the listener believes that the talker has no reason to send misleading messages, and this belief is, by and large, accurate. Given such a set of beliefs and objectives, it would not make much sense for the talker to fabricate a result. But the objectives of the talker, and his beliefs about the listener’s reaction, may be different: he may expect to rise in the public’s esteem. He may want to impress. He may seek fame and tangible rewards. If he expects to receive these rewards from producing a particularly appealing, cute, provocative, or simple result, then he may have every reason to fabricate it. This has consequences for second-order beliefs: the listener should understand that the talker’s most relevant belief may be about receiving the reward; she should therefore ask whether the circumstances of the talker are such that he is tempted to fabricate results.
This also raises uncomfortable questions about the optimal response of the academic community to such a loss of trust. We should certainly react to it. We should make sure, and point out to our listeners, that our own reward system does not give us strong incentives to lie, but that it puts its first and foremost emphasis on finding the truth. It is a never-ending and cumbersome fight for the listeners’ trust, but a necessary one. In other words: lying is, indeed, easy. But we need to be able to prove that we are not liars.
This text is jointly published by "Researching Misunderstandings" and BSE Insights.
References:
Bond, Charles F. Jr., and Bella M. DePaulo (2006), Accuracy of Deception Judgments, Personality and Social Psychology Review 10(3), 214-234.
Gilovich, Thomas, Kenneth Savitsky and Victoria Husted Medvec (1998), The Illusion of Transparency: Biased Assessments of Others’ Ability to Read One’s Emotional States, Journal of Personality and Social Psychology 75(2), 332-346.
Grabova, Iuliia, Hedda Nielsen and Georg Weizsäcker (2023), Attempting to detect a lie: Do we think it through?, CRC TRR 190 Discussion Paper No. 477.
Lewis-Kraus, Gideon (2023), They Studied Dishonesty. Was Their Work a Lie?, The New Yorker, October 9, 2023.
Peeters, Ronald, Marc Vorsatz, and Marcus Walzl (2015), Beliefs and truth-telling: A laboratory experiment. Journal of Economic Behavior and Organization 113, 1-12.
Scheiber, Noam (2023), The Harvard Professor and the Bloggers, The New York Times, September 30, 2023.
Vrij, Aldert (2008), Detecting Lies and Deceit: Pitfalls and Opportunities. John Wiley & Sons.