The Problem with Humanizing Generative AI Products

Humans, in their infinite wisdom, are determined to make artificial intelligence in their own image. Some AI companies have even grander ambitions - hoping to create AI in the image of God.

This drive to humanize AI, what researchers Scorici, Schultz, and Seele term "humanwashing," can be defined as "the deceptive use of AIEMs [AI-enabled machines], aimed at intentionally or unintentionally misleading organizational stakeholders and the broader public about the true capabilities that AIEMs possess."

Our desire to humanize AI is understandable. We're narcissistic creatures, constantly drawn to our own reflections. We create games like The Sims to simulate the monotony of human life. We spend our days interacting with other humans, seeing ourselves reflected in cameras, mirrors, and the faces of others.

So it seems logical to wrap AI and large language models in a cloak of humanness. The pitch is seductive: a realistic virtual woman responding to our smartphone queries. Human-like robots solving banking problems. Synthetic tutors educating our children. AI nurses triaging medical concerns. A coding buddy to boost our productivity.

These aren't far-fetched ideas. I hear them proposed daily. I've seen the designs, proofs of concept, and product demos. Just recently, Mark Zuckerberg demonstrated an unsettling chat with an AI clone of a human creator - while the actual person stood by watching. As TechCrunch reported, "Mark Zuckerberg invited creator Don Allen Stevenson III to join him onstage. The Meta CEO proceeded to pick up a phone and carry on a conversation with an AI-generated version of Stevenson as the genuine article stood between the exec and an image of himself on the big screen."

The problem is there's so much we don't understand about the implications of this humanization. Yes, research is happening. Academia and industry are exploring the unknowns. But we need more cross-pollination of ideas. We need philosophers collaborating with technologists, computer scientists partnering with psychologists.

Because the risks of humanwashing are significant. By portraying AI as more human-like and capable than it truly is, we create unrealistic expectations. We blur lines of responsibility. We risk eroding public trust when these human-like AIs inevitably fall short. As Scorici, Schultz, and Seele note, "Asymmetry of information is at the core of this phenomenon since only one party—the corporation—has the power of complete awareness of the state of reality."

This trust erosion is particularly concerning given that people already tend to have "extreme performance requirements for implicit moral machines, because they expect a substantial increase over baseline human performance, while overestimating this baseline human performance" (see Bonnefon et al., 2024). When machines fail to meet these inflated expectations, there is backlash.

Most concerningly, humanized AI could become a powerful tool for manipulation. When we imbue machines with human qualities, we're more likely to trust them implicitly - even when that trust isn't warranted. As Bonnefon et al. warn, "Behind the marketing veil, corporate interests tap into people's psychological biases when presenting AIEM machines in advertisements and social media campaigns."

This isn't to say we should abandon all efforts to make AI relatable. But we need a more nuanced, ethical approach. We need radical transparency about AI's true nature and limitations. We need realistic portrayals that don't mislead about capabilities. And we need robust ethical frameworks developed by diverse teams.

Interestingly, Bonnefon et al. found that in some contexts, "people do not feel especially outraged when machines discriminate, or at least not as outraged as they would feel if humans discriminated." This finding raises important questions about how humanizing AI might inadvertently lower our ethical standards for these systems. As we integrate AI more deeply into our lives and decision-making processes, we must be vigilant in maintaining - and even elevating - our moral expectations.

As AI grows more sophisticated, the demos will only get stranger. We've already seen Zuckerberg chatting with AI clones and celebrity-voiced chatbots. As TechCrunch aptly put it, "It only gets weirder from here."

We should resist the urge to slap a human mask on every AI product. Instead, let's focus on developing technology that actually complements humanity rather than poorly imitating it. The future of AI should be built on honesty, not illusion.
