The legal safeguard that technology giants have relied on for three decades to avoid liability is facing an unprecedented challenge. Last week, Meta and Google's YouTube each lost jury trials in separate lawsuits, with combined damages totaling approximately $400 million. At the same time, a wave of new lawsuits has been filed, as plaintiffs' attorneys work systematically to dismantle the long-standing legal immunity enjoyed by tech platforms by finding ways around Section 230 of the U.S. Communications Decency Act.

Passed by Congress in 1996 and signed into law by then-President Bill Clinton, the Communications Decency Act allows websites to moderate content without being held liable for the material they ultimately choose to retain. Over the past three decades, platforms including Meta, Google, TikTok, and Snap have all benefited from this provision, defining themselves as neutral platforms and thereby avoiding a multitude of potential lawsuits.

As the technology industry moves from the era of traditional search and social networking into a landscape dominated by artificial intelligence, the nature of its legal risk is shifting as well. Platforms are no longer merely passive carriers of user content; they now actively shape the user experience through algorithmic recommendations, autoplay features, and even AI-generated content.
Two Major Losses: Product Design Emerges as a Key Vulnerability
Last week, a plaintiff using the pseudonym Jane Doe filed a class-action lawsuit against Google, alleging that the company's AI models generated their own summaries and links that disclosed personally identifying information of Jeffrey Epstein's victims, including names, phone numbers, and email addresses. According to CNBC, the plaintiff's attorney, Kevin Osborne, said the suit was filed after Google refused the plaintiff's request to remove the victims' contact information from its AI models, and that the case must move quickly because the information is spreading rapidly.
We chose to file at that particular time because we needed to act as fast as possible to get this material taken down. People were receiving calls from complete strangers and even death threats. It's been an absolute nightmare.
Osborne added that the timing was a "pure coincidence" given Meta's courtroom loss last week, but noted that a common thread running through these cases is plaintiffs' attempts to bypass Section 230. Osborne stated:
In this case, the argument is that the AI model is generating its own content, and that is an area the courts have not yet explored in depth.
Last week, a jury in New Mexico found Meta liable in a case concerning child safety, while a separate jury in Los Angeles found Google's YouTube negligent in a personal injury case. Both companies have said they plan to appeal last week's verdicts.
Legislative Stalemate and Judicial Outlook
In the U.S. Congress, bipartisan proposals to reform Section 230 of the Communications Decency Act have been introduced, but none has been enacted into law. During his first term, President Donald Trump supported imposing greater restrictions on social media companies, and Joe Biden said during his 2020 presidential campaign that the provision should be revoked. Nadine Farid Johnson, policy director at the Knight First Amendment Institute at Columbia University, attributes the legislative stalemate to the fact that "these issues are incredibly complex." Farid Johnson is urging Congress to take a more deliberate approach to reform, suggesting that tech companies should receive Section 230 protection only if they meet specific conditions related to data privacy and platform transparency, among other criteria. She warned:
As platforms continue to expand their use of generative AI and enhance their algorithmic capabilities, the associated legal challenges will become increasingly complex. Our concern is that each technological iteration turns into a game of whack-a-mole.
Legal experts say the cases above could be appealed all the way to the U.S. Supreme Court, which could then issue a definitive ruling on whether platforms can retain this legal protection. David Greene, senior staff attorney at the Electronic Frontier Foundation, pointed out that there is currently no legal consensus on whether product features are protected by Section 230, or even by the First Amendment. Greene stated:
Simply labeling a feature as a 'design characteristic' is meaningless. If it is essentially speech, it is protected by both the First Amendment and Section 230.