Jeffrey Hancock, Assistant Professor, Department of Communication, recently received a $680,000 grant from the National Science Foundation titled “The Dynamics of Digital Deception in Computer Mediated Environments.” An ISS small grant allowed Professor Hancock to conduct two initial studies, listed below, that were included in the NSF proposal.
Overview of the Project: Hancock’s Co-PIs come from computer science (Claire Cardie) and linguistics (Mats Rooth); the project draws on communication, psychology, natural language processing, and computational linguistics to examine how humans adapt their deceptive practices to new communication and information environments. Deception is a significant and pervasive social phenomenon that touches on all aspects of human life. On average, people tell one to two lies a day, and these lies range from the trivial to the very serious, including deception between friends and family, in the workplace, and in security and intelligence contexts. At the same time, information and communication technologies have pervaded almost all aspects of human communication, from everyday technologies that support interpersonal interactions, such as email and instant messaging, to more sophisticated systems that support organization-level interactions.
Study 1: How do computer-mediated environments affect the production and practices of deception? This research examines how different technologies change how often we lie, the types of lies we tell, and the targets of those lies. Some of the studies have examined whether we lie more on the phone or in email. Others have looked at how lying takes place in online dating profiles. See Hancock, J.T., Thom-Santelli, J., & Ritchie, T. (2004). Deception and design: The impact of communication technologies on lying behavior. Proceedings, Conference on Computer Human Interaction, 6, 130-136.
Study 2: How is our ability to detect deception affected by computer-mediated environments? Humans are notoriously bad at detecting deception, typically performing at chance in face-to-face contexts. Are we even worse when trying to detect deception online, such as in instant messaging or email? Our initial research suggests that among the factors that may matter are the motivations of the liar and the suspicion level of the target of the lie. See Woodworth, M., Hancock, J.T., & Goorha, S. (2005). The motivational enhancement effect: Implications for our chosen modes of communication in the 21st century. Proceedings, Hawaii International Conference on System Sciences.
Study 3: Can advanced computational and natural language processing techniques be used to analyze and identify deceptive and truthful messages? Research so far suggests that people may use different patterns of words when lying than when telling the truth. And, surprisingly, the targets of lies may also change the way they talk, even when they don’t know that they are being lied to! See Hancock, J.T., Curry, L., Goorha, S., & Woodworth, M.T. (2004). Lies in Conversation: An Examination of Deception Using Automated Linguistic Analysis. Proceedings, Annual Conference of the Cognitive Science Society, 26, 534-540. Mahwah, NJ: LEA.
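The automated linguistic analysis described in Study 3 can be illustrated with a minimal sketch: counting occurrences of word categories (for example, first-person pronouns and negations) that prior research suggests may differ between deceptive and truthful messages. The categories and word lists below are hypothetical stand-ins for illustration only, not the project’s actual feature set or methods.

```python
# Illustrative word-category lists (hypothetical; real systems use far
# richer dictionaries and statistical models).
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
NEGATIONS = {"no", "not", "never", "none", "nothing"}

def linguistic_profile(message: str) -> dict:
    """Return simple per-category word counts for a message."""
    # Strip common punctuation and lowercase each token.
    words = [w.strip(".,!?;:").lower() for w in message.split()]
    return {
        "total_words": len(words),
        "first_person": sum(w in FIRST_PERSON for w in words),
        "negations": sum(w in NEGATIONS for w in words),
    }

profile = linguistic_profile("I never said that; it was not my idea.")
```

A classifier could then compare such category counts between known truthful and deceptive messages to look for the kinds of word-pattern differences the study describes.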
By examining deception in mediated environments and building computer-based tools for the detection of deceptive messages, this research will develop new approaches that will improve our ability to detect digital forms of deception.