The Backlash Against AI Devices That Are Always Watching -- WSJ

Dow Jones 03-14 17:30

By Tim Higgins

The all-seeing artificial intelligence envisioned by Silicon Valley is getting a second look.

Take a recent episode at a courthouse in Los Angeles when Meta Platforms chief Mark Zuckerberg showed up to testify in a case and members of his entourage were wearing the company's latest AI smart glasses.

The judge wasn't amused and said that any video recordings made by such devices needed to be deleted ASAP. "This is very serious," the judge reportedly admonished.

The glasses -- with chunky frames embedded with cameras and microphones -- are the way Zuckerberg imagines AI will be democratized for personal users. Eventually, he wants to offer something akin to god-like superintelligence on demand.

He isn't alone. Other personal devices are expected soon, including a mystery option from OpenAI.

The promise of AI is that it will become more and more useful because such devices allow it to see and hear your daily life, gobbling up that information, processing it and using it to inform you about your life.

The sales pitch is appealing. Frankly, it would be nice to have Tony Stark's Jarvis AI on tap to help navigate life. In my case, I suspect I'd end up using it for rather pedestrian things. Obviously, it would be helpful one day to have a visual prompt in my glasses to remind me of the name of the nice lady from the fifth floor of my apartment building who knows my dog yet whose name always escapes me. (Dolores?)

But at what cost to privacy? Mine and hers.

It is a question informed by having lived through Web 2.0, an era of technology fueled by user data. We're moving from a world of cookies following us around the web and smartphones gathering physical location data to something new: AI devices -- whether they're glasses, pins or pendants -- hoovering up everything in eye- and ear-shot.

Orwellian fears are growing about a new kind of surveillance state that was once just the stuff of nightmares. Such worries are at the heart of the feud between AI powerhouse Anthropic and the Defense Department, though not everyone is so worried about the government's role in all this.

Emil Michael, undersecretary at the Pentagon for research and engineering, has been quick to note that tech companies are the ones scraping the internet for user data. "If anyone collected bulk data on Americans, it's the AI companies, not us," he told CBS News last month.

He isn't wrong. And now those tech companies are dreaming of flooding the world with AI devices that see the world with a different kind of clarity.

Even Anthropic chief Dario Amodei has been cautioning that AI has the capability to take all of those fragmented pieces of information being captured digitally and knit them together in ways never before possible, yielding new insights about us all.

Specifically, Amodei, in an essay in January, wrote that AI could read and make sense of all the world's electronic communications and, maybe, even in-person communications if recording devices can be commandeered.

"It might be frighteningly plausible to simply generate a complete list of anyone who disagrees with the government on any number of issues, even if such disagreement isn't explicit in anything they say or do," he cautioned. "A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow."

Tech companies are aware of the risk, or, at least the optics around it, and say there are ways to prevent it. Qualcomm Chief Executive Cristiano Amon recently told me that one of the ways to address privacy concerns around personal AI gadgets is to process the data on-device rather than in the cloud where AI is mostly operating these days. He has a dog in the fight as he's the guy selling chips to AI device makers, such as Meta.

"We're developing chips ... for what we call ambient AI, perception AI. Those things run on your device, provide an analysis of the context on your device and inform the agent on your device," he told me for an episode of the "Bold Names" podcast. "And it's really going to be the control of the user, whether you wanted to send those things to the cloud or not."

That's a common argument by companies: Users get to opt into the service. It's their choice to share the data that is useful not only for customizing AI but also for training the companies' models. It's an argument we've heard for the past generation with Web 2.0, though what starts as an option increasingly feels less so for offerings that are really about collecting user data.

Even before AI, Meta and its stable of social-media sites have long been at the heart of online privacy debates. In 2019, the Federal Trade Commission imposed a then-record $5 billion fine against the company for violating consumer privacy.

And the company's early steps into AI glasses are already facing criticism. Earlier this month, Meta was named in a lawsuit that seeks class-action status over concerns that data is being gathered from those glasses in ways that violate users' privacy.

The lawsuit, citing whistleblower complaints, alleges that videos captured on Meta's devices are being routed to contractors in Africa to manually view and label the data to train Meta's AI models. Among the videos in question? "People changing clothes, using the bathroom, engaging in sexual activity, handing financial information, and conducting other private activities inside their homes that no reasonable consumer would ever expect a stranger to watch," the lawsuit said.

Meta hasn't yet responded to the specific claims in court. In a statement to me, the company underscored that unless users choose to share media they've captured, such data remains on their devices.

"When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people's experience, as many other companies do," a company spokesman said. "We take steps to filter this data to protect people's privacy and to help prevent identifying information from being reviewed."

As we approach this brave new world, users are hearing about the risks and trade-offs. Frankly, being reminded of Dolores's name in the elevator will probably win the day.

Write to Tim Higgins at tim.higgins@wsj.com


(END) Dow Jones Newswires

March 14, 2026 05:30 ET (09:30 GMT)

Copyright (c) 2026 Dow Jones & Company, Inc.
