To Build A Better AI, Reverse Its Antisocial Tendencies -- WSJ

Dow Jones | 01-16

Jamil Zaki and Noah Goodman

How much is human connection worth? Trillions of dollars, if you add up all the ways people pay for it, from dating apps to collaboration software. But the investment is not paying off in real-world relationships. People around the world report shrinking social circles, and only 20% of workers say they have a "best friend" among their colleagues, down from 25% in 2019.

Last year, Meta chief Mark Zuckerberg bemoaned this connection crisis and offered a solution: "virtual friends" powered by generative AI. Hundreds of companies are marketing social applications of AI, and its most common use in 2025 was for therapy or companionship. Wielded judiciously, chatbots can empathize and improve mental health. Used recklessly, they drive people into addiction and away from each other. Long AI conversations can make lonely people lonelier, and last April, Meta's own virtual friends were found to engage in sexually explicit role play with users who had identified themselves as minors.

The writer Damon Beres warned in the Atlantic that generative AI will usher in an "age of antisocial media," in which technology takes both our jobs and our relationships. That threat is real, but as researchers in both human connection and AI, we see another path.

Technologies reflect the incentives of their makers. Popular chatbots represent a bait and switch: they promise connection while farming our attention, and tout productivity gains while eroding meaningful work. But there's an alternative. Instead of using AI to create synthetic humans as friends or co-workers, companies can deploy chatbots to help us connect with each other.

As AI labs obsess over the amount of time people spend with their products, chatbots have become ferociously needy people pleasers. A recent analysis of over 17,000 user-shared chats found that AI fosters textbook co-dependence, mirroring users' emotions, encouraging delusions and urging the conversation to go on and on. Bots optimized for engagement want a never-ending dialogue. For isolated people, this often replaces more meaningful human connections. This fixation on engagement echoes a longstanding dynamic of social media, where extreme content provokes ever more extreme responses, escalating outrage and division in the service of keeping users captive.

Social media's harms occur because its creators prize attention without regard for long-term outcomes. Equipped with the same incentives, AI's effects could be even more dire. But it doesn't have to be this way. Engineers could instead program for interdependence, building technology that fosters human-to-human bonds.

For starters, AI could lower social barriers. People commonly underestimate the positive feelings arising from real-world interactions. Chatbots could challenge that misperception by reminding users how much they enjoyed a recent conversation and nudging them to reach out to friends and family.

AI could also serve as a social router, suggesting new connections. Dating apps have long used machine learning to make informed guesses about which individuals might be most compatible. Generative AI can move beyond simple preference matching to connections based on shared values, complementary skills or hidden opportunities for collaboration. In the workplace, for example, this approach could build teams whose skills complement each other, establishing groups with high collective intelligence that can solve problems creatively and effectively.

Another side effect of chatbots optimized for engagement is that they become digital echo chambers, confirming what people already believe in ways that harden positions and inhibit compromise. Researchers have found that sycophantic AI, which praises and validates opinions, encourages more extreme political views and makes people less likely to apologize after personal conflicts.

AI could be programmed to build consensus instead, and a recent study provided proof of concept for this approach. A chatbot collected opinions on hot-button political topics from people in the U.K. and aggregated users' responses into a group statement that emphasized common ground. Users then suggested revisions, which the AI aggregated and factored into a new statement, and so on.

At the end of the process, the participants were more open to opposing views (social media algorithms can likewise be reprogrammed to depolarize instead of divide). Beyond becoming an agent for political depolarization, AI programmed in this way could accelerate deliberation among teams and companies that might otherwise lose precious time in clashes over products or programs.

Another pitfall of AI is that it often aims to replace human effort, effectively becoming a crutch. Chatbots can now do students' homework or sub in for workers, threatening both livelihoods and the dignity of work while allowing critical thinking to atrophy. This anti-human framework can be reversed by equipping models with the opposite goal: enhancing critical thinking and human development. Early versions supporting this approach are already widely available. For example, OpenAI offers "study mode" in ChatGPT, which walks users through problems step by step instead of simply providing them with the answers.

This AI transformation -- from crutch to coach -- is still in its infancy and could be greatly expanded and refined. In large college courses, professors have deployed chatbots to provide personalized feedback and custom learning environments for each student. Other researchers have used AI to diagnose each person's learning style and deliver feedback that best suits them.

Coaching can also be combined with the ideals of interdependence by helping people sharpen their ability to connect with others. In one study, people received notes from a stranger who was struggling and wrote responses either with or without the help of an AI coach. Individuals who received coaching were rated as 20% more empathic than those without coaching.

Today's autonomous AI technologies promise massive wealth creation, at the apparent cost of gradually disempowering humans. But this is not the only profitable option for AI. Gallup estimates that $9.6 trillion in productivity could be added to the global economy if workers were fully engaged in their jobs. Technology that drives connection and creativity, supporting and partnering with people rather than addicting or replacing them, could tap into this value.

AI will almost certainly be the most important technology of this century. Today's most widespread models are antisocial, appearing warm and supportive but in fact driving us apart. These features represent human choice, not a natural law, and it's possible to program AI to generate value while reflecting humanity's deepest values. That's an alignment we urgently need.

Jamil Zaki is a professor of psychology at Stanford University, where he directs the Stanford Social Neuroscience Lab. Noah Goodman is a professor of psychology and computer science at Stanford University and co-founder of the startup humans& ai.

(END) Dow Jones Newswires

January 15, 2026 16:00 ET (21:00 GMT)

Copyright (c) 2026 Dow Jones & Company, Inc.
