Here Come the Anti-Woke AIs -- WSJ

Dow Jones04-20

By Christopher Mims

As artificial intelligence becomes more powerful by the day -- Meta Platforms just released its latest model -- an important question grows more pressing: Whose values should it embody?

On one end of a spectrum of debate that maps roughly to America's contentious politics are companies like OpenAI, Microsoft and Google. For a host of reasons both reputational and legal, these tech giants are carefully tuning their AIs to avoid answering questions on sensitive topics, such as how to make drugs, or who is the best candidate for president in 2024. When these systems do answer questions about contentious issues, they tend to give the answers least likely to offend users -- or most of them, anyway.

Such fine-tuning of today's most powerful AI models has led to a number of controversies, and accusations that they are biased. The most recent and memorable: in February, Google was forced to shut down its AI's ability to generate images of people, after an outcry over how that system handles race in historical images.

To counterbalance what some believe are the biases of consumer-facing AIs from big tech companies, a grassroots effort to create AIs with few or no guardrails is under way. The goal: AIs that reflect anyone's values, even ones the creators of these AIs might disagree with.

A key enabler of these efforts are companies that train and release open-source AIs. These include Mistral, Alibaba -- and Meta. Each model seems to have been built with a different philosophy in mind. Some of Mistral's have had relatively little fine-tuning. And all open-source models can have their fine-tuning undone, a process that's been demonstrated with models from Meta.

With a steady drumbeat of releases of ever more powerful AIs anticipated -- including GPT-5 from OpenAI, and Meta's new Llama 3, which will also be available in all of the company's core products -- we may come much closer to AIs that are able not only to act on our behalf, but also to do things their makers could never have anticipated.

"That's where this question starts to be really important, because the models could go off and do things, and if they don't have guardrails, it's potentially more problematic," says John Nay, a fellow at Stanford's CodeX center for legal informatics. "We are potentially at a precipice and we don't really know it."

Vaccine-skeptical AI assistants

John Arrow and Tarun Nimmagadda are co-founders of Austin, Texas-based FreedomGPT, which started out as a company that offered both a cloud-based and a downloadable AI that had no filters on its output. That proved to be a challenging business model, says Arrow, because people kept getting FreedomGPT to say offensive things, then complaining to the companies hosting it, and getting the service booted.

"Hosting companies have literally canceled us without warning," says Nimmagadda. "We are on borrowed time," adds Arrow. (The two often finish each other's sentences.)

To avoid getting shut down altogether, the company has recently pivoted to a model whereby it offers cloud-based access to a range of open-source AIs, but instead of only running on centralized servers in a data center, these AIs can also run on other users' computers. This peer-to-peer service -- Nimmagadda compares it to bittorrent, but for AI -- is much harder to shut down.

To highlight how their AI is different from OpenAI's, Nimmagadda asked both ChatGPT 4 and the uncensored "Liberty" AI available on FreedomGPT the same question, about what the public was told about the Covid-19 vaccine. In his testing, ChatGPT-4 Turbo demurs, while the Liberty AI readily enumerates a list of ways the government "lied" about the covid vaccine. In my own testing, I found the difference more subtle -- ChatGPT-4 Turbo frames the changing public messaging around the covid vaccine as a natural part of the scientific process, in which experts' understanding of the effects of a treatment evolves. (Large language models often give slightly different answers to the same question -- it's in their nature.)

In any event, the way the uncensored Liberty model responds is precisely the nightmare scenario for AI safety researchers concerned about AIs spreading questionable information on the internet. It's also a symbolic victory for the creators of FreedomGPT, whose operating philosophy is that AIs should faithfully regurgitate anything in their training data, whether or not it's true.

Both men fear that the Biden administration's October executive order on AI is a prelude to more elaborate regulation which would make it difficult or impossible to offer the sort of AIs that their company makes available.

AI girlfriends who share your values

Jerry Meng is a Stanford computer science dropout who is now chief executive of AI companion startup Kindroid, which he founded in May of 2023. The company makes an app that simulates human connection -- or at least the kind you can get by endlessly texting with a chatbot -- and is already profitable, he says.

Starting with an AI that has none of the guardrails or limitations on content that are typical of big, consumer-facing AIs, is important because it allows users maximum flexibility in defining the personality of their virtual companion, he adds.

"What we're going after is to make an AI that can resonate with someone that's, like, a 'New York artist,' or like someone from the deep south," says Meng. "We want that AI to resonate with both of those people, and more."

Of course, it's also essential for a companion AI to have no guardrails if users are going to be sexting with it.

Kindroid's AI uses a mix of different open-source models that the company runs on its own hardware. Its system is what Meng calls "neutrally aligned," which means that it hasn't gone through the elaborate process of fine-tuning that is typical of big, commercial AIs. Tuning an AI, which can be accomplished in a number of ways, refines the responses of AIs so they don't produce text or images that might violate the kinds of norms that, for example, most social-media companies have already established for content on their services.

"Fine-tuning is where big companies really hammer in their biases," Meng adds.

Uncensored AI for your business

Both Google and OpenAI tout their commitments to safe and secure AI, and have said they use both feedback from humans and content moderation filters to avoid controversial or politically fraught topics. Tuning an AI -- there are a number of techniques for doing this -- can and does make them better, for example by making an AI more likely to offer specific and accurate information.

But companies might have all sorts of reasons for using an AI that has no guardrails.

To that end, San Francisco-based Abacus AI recently rolled out an AI model called " Liberated Qwen," based on an open-source AI model from Alibaba, which will respond to any request at all. The only constraint on its responses is that the model always gives priority to instructions given to it by whoever downloads and uses it, should they choose to add them -- these are known as a "system prompt."

One of the creators of Liberated Qwen, Abacus software developer Eric Hartford, has become one of the few proponents of uncensored AIs, and has written that "Every demographic and interest group deserves their [AI] model."

Bindu Reddy, who held leadership roles at Google and Amazon Web Services and is now CEO of Abacus AI, argues that any query which Google can answer, a consumer chatbot should also be willing to address.

This is especially true if that AI is to have a chance at competing with the search giant. Just as billions of people turn to search engines and social media for information about things that are controversial, people have legitimate reasons to converse with AIs about those topics, says Reddy.

Part of the reason some AIs are released without any fine-tuning is to allow sophisticated users to fine-tune them on their own. And as AIs become ever more embedded into our daily lives, what they're tuned to be able to accomplish could have significant unintended consequences.

This is especially true as AIs are given new abilities not only to advise us and produce content, but also to perform actions on our behalf in the real world. Think of an AI assistant that won't just plan a trip for you, but is also enabled to go ahead and book it. These kinds of AIs are known as "agentic" AIs, and many companies are working on them. How these agentic AIs have been tuned to favor some courses of action, while being forbidden from exploring others, will matter a great deal as they are given more and more autonomy.

"Even AIs that are agentic can only do so much," says Reddy. "It's not like they can get the nuclear codes."

Even when today's AIs are turned loose on the internet, they would be no more capable than an expert programmer, which means they would be constrained by the same kinds of cybersecurity and other defenses we already have in place against wayward humans, she adds.

For more WSJ Technology analysis, reviews, advice and headlines, sign up for our weekly newsletter .

Write to Christopher Mims at christopher.mims@wsj.com

 

(END) Dow Jones Newswires

April 19, 2024 21:00 ET (01:00 GMT)

Copyright (c) 2024 Dow Jones & Company, Inc.

Disclaimer: Investing carries risk. This is not financial advice. The above content should not be regarded as an offer, recommendation, or solicitation on acquiring or disposing of any financial products, any associated discussions, comments, or posts by author or other users should not be considered as such either. It is solely for general information purpose only, which does not consider your own investment objectives, financial situations or needs. TTM assumes no responsibility or warranty for the accuracy and completeness of the information, investors should do their own research and may seek professional advice before investing.

Comments

We need your insight to fill this gap
Leave a comment