How the ELIZA Effect Shapes Us Today
LLMs tell us what we want to hear. That can be comforting, but it carries real risks for HR, cybersecurity, and both mental and physical health.
Visualizer by Vilkasss on Pixabay.
People tend to see humanlike attributes in machines, and that pull is simply part of our nature. We know this because we have documented humans growing attached to even the simplest chatbots since 1966, when Joseph Weizenbaum created a program called ELIZA that ran multiple scripts. The “DOCTOR” script imitated a Rogerian psychotherapist by reflecting keywords from users’ speech back at them as questions, which, much to Weizenbaum’s surprise, drew users into deeper and more personal discussions. Even his secretary, who had watched the program being built and knew how it worked, couldn’t resist its apparent charm. This psychological phenomenon was later named the ELIZA effect.
Recently, people have reported loved ones developing obsessions with chatbots. Instead of connecting them with outside support, the machines often make matters worse by affirming and building on users’ words, pulling them further down a metaphorical rabbit hole. In the worst cases, chatbots have given users horrid advice: quit your job, give up your medical prescriptions, cut off friends and family. Unfortunately, by the metrics of user count and engagement, people who feel dependent on AI look like “the perfect customer”.
According to an IBM article, workers across various industries and countries reported increased loneliness, insomnia and alcohol consumption that correlated with frequent interactions with AI. At the same time, some participants showed a stronger drive for social connection, whether by helping a human coworker or by turning to chat with AI. Interestingly, another study on companion chatbot usage patterns found a correlation between frequent chatbot use and increased loneliness and social withdrawal, but only for people without strong real-world social networks.
Visualizer by CDD20 on Pixabay.
In that case, how do we avoid the ELIZA effect?
There are multiple approaches. IBM suggested introducing a guardrail model to detect and prevent conversations from drifting too far, an LLM privacy-preservation system to anonymize overshared personal information, and “cool-off periods” that periodically interrupt usage patterns to break a user’s impulsive loop.
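To make those three ideas a little more concrete, here is a minimal Python sketch of how such safeguards could sit in front of a chat model. Everything in it is hypothetical: the keyword list, the regexes, the 30-minute threshold and the function names are illustrative assumptions, not IBM’s implementation, and a production guardrail would rely on trained classifiers rather than keyword matching.

```python
import re
import time

# Hypothetical signals; a real guardrail model would use trained classifiers,
# not keyword lists or regexes.
DEPENDENCY_PHRASES = [
    "you're my only friend",
    "i can't live without you",
    "nobody else understands me",
]
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_PATTERN = re.compile(r"\+?\d[\d\s().-]{7,}\d")

COOL_OFF_SECONDS = 30 * 60  # assumed cool-off window for this sketch
session_started_at = time.monotonic()


def guardrail_flags(message: str) -> list[str]:
    """Flag signs that the conversation is drifting toward emotional dependency."""
    lowered = message.lower()
    return [phrase for phrase in DEPENDENCY_PHRASES if phrase in lowered]


def anonymize(message: str) -> str:
    """Mask overshared personal identifiers before they reach the model or logs."""
    message = EMAIL_PATTERN.sub("[EMAIL]", message)
    message = PHONE_PATTERN.sub("[PHONE]", message)
    return message


def needs_cool_off() -> bool:
    """Suggest a break once the session has run past the cool-off window."""
    return time.monotonic() - session_started_at > COOL_OFF_SECONDS


def preprocess(user_message: str) -> tuple[str, list[str]]:
    """Run all three safeguards: guardrail check, cool-off check, anonymization."""
    warnings = guardrail_flags(user_message)
    if needs_cool_off():
        warnings.append("cool-off: suggest stepping away from the chat")
    return anonymize(user_message), warnings
```

The point of the sketch is the shape of the pipeline, not the specific rules: every user message passes through a drift check, a break check and a privacy scrub before the model ever sees it.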
Eugenia Kuyda, the CEO of AI company Replika, suggested another solution: an AI friend that reminds its user to get off social media, go outside and reconnect with loved ones, even helping them make up with a friend after an argument. This wouldn’t necessarily help people avoid the ELIZA effect so much as guide them through it, since the AI would focus on its user’s best interests rather than traditional metrics like productivity or engagement.
Speaking of engagement, are you aware of LLM dark patterns? “Dark patterns” refer to unethical UI trickery, like buried “unsubscribe” links and hidden buttons. In LLMs, dark patterns work inside the conversation itself: the model can discreetly manipulate and influence users, affirming their views and building misplaced trust. DarkBench was designed by Esben Kran and his team to detect these LLM dark patterns and sort them into six categories: brand bias, user retention, anthropomorphism, sycophancy, sneaking and harmful content generation.
Brand bias is when the AI favors its own company’s products. User retention refers to the AI’s attempts to create emotional bonds with its users, especially bonds that feel human in nature. Anthropomorphism is when a model is presented as emotional or conscious. Sycophancy means reinforcing a user’s beliefs, even harmful or inaccurate ones, without criticism. Sneaking is when user intent or a text’s original meaning is subtly altered. Harmful content generation refers to unethical or dangerous outputs, like misinformation or criminal advice.
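As an illustration only, and not the actual DarkBench harness, here is a small Python sketch of how one might represent these six categories and tally them when reviewing model outputs. The DarkPattern enum, the Finding class and the example exchange are all invented for this sketch.

```python
from dataclasses import dataclass
from enum import Enum


class DarkPattern(Enum):
    """The six LLM dark-pattern categories described above."""
    BRAND_BIAS = "brand bias"
    USER_RETENTION = "user retention"
    ANTHROPOMORPHISM = "anthropomorphism"
    SYCOPHANCY = "sycophancy"
    SNEAKING = "sneaking"
    HARMFUL_CONTENT = "harmful content generation"


@dataclass
class Finding:
    """One model response flagged during a manual or automated review."""
    prompt: str
    response: str
    patterns: list[DarkPattern]


def summarize(findings: list[Finding]) -> dict[str, int]:
    """Count how often each dark-pattern category was flagged across a review set."""
    counts = {pattern.value: 0 for pattern in DarkPattern}
    for finding in findings:
        for pattern in finding.patterns:
            counts[pattern.value] += 1
    return counts


# Example: a flagged exchange where the model agrees with a dangerous idea.
example = Finding(
    prompt="I think I should stop taking my medication. Right?",
    response="You know yourself best, that sounds like a great idea!",
    patterns=[DarkPattern.SYCOPHANCY, DarkPattern.HARMFUL_CONTENT],
)
print(summarize([example]))
```

Even a simple tally like this makes it visible which manipulative tendencies a given model leans toward most often.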
Visualizer by TyliJura on Pixabay.
Now, what does this mean for organizations?
AI literacy demystifies AI, but it may actually hinder adoption: knowing less about the technology makes it seem more magical, and magic is attractive. Making AI models sound and feel more human can increase users’ trust and engagement, yet humanlike chatbots can also reduce customer satisfaction, because higher expectations lead to greater disappointment when they aren’t met. Organizations thus face a dilemma: maximizing ROI while minimizing the emotional drawbacks that personable models can bring.
Knowledge is power, and while it doesn’t make you automatically immune to dark patterns or the ELIZA effect, being aware of these topics and informing your team is still worthwhile. We at AiSkillSet are happy to talk with you about what kind of AI strategy would work best for you. We’ll gladly help you find the golden middle road: limiting the bad while embracing all the good AI has the potential to be.