
For nearly three decades, Section 230 of the Communications Decency Act has functioned as the internet industry’s most powerful legal shield — a provision so broad that critics call it a “get-out-of-jail-free card” for online platforms. On April 10, 2026, Massachusetts’ highest court drew a careful new line, ruling that the statute does not protect Meta Platforms from a lawsuit alleging that it deliberately designed Instagram to hook children, knew about the harms, and lied to the public about them.
The unanimous decision by the Supreme Judicial Court (SJC) is among the most significant rulings yet to take on the question of when social media platforms can be held accountable for the design choices that shape how billions of people — including minors — experience their products.
Background: How the Case Began
The Commonwealth of Massachusetts filed suit against Meta Platforms, Inc. and Instagram, LLC on October 24, 2023. State authorities alleged that Meta had engineered Instagram to exploit the psychological vulnerabilities of teenagers in order to maximize advertising revenue, while simultaneously misleading the public and lawmakers about its platform’s safety record.
Instagram is used by over 33 million young people nationwide, including more than 300,000 daily active users in Massachusetts between the ages of thirteen and seventeen. The platform derives substantially all of its revenue from advertising — paid for by companies that buy “advertisement impressions,” essentially the number of times an ad appears on a user’s screen. The more time a user spends scrolling, the more impressions Meta can sell. According to the Commonwealth, that revenue model created a dangerous financial incentive to keep young users — neurologically more susceptible to compulsive behavior — glued to the app.
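The arithmetic behind that incentive is easy to sketch. The snippet below borrows the complaint's figure of roughly 300,000 Massachusetts teen daily users, but the per-impression price and minutes-scrolled values are hypothetical placeholders invented for illustration; none of them come from the case record.

```python
# Illustrative only: CPM and ads-per-minute are hypothetical numbers,
# not figures from the complaint. The relationship modeled (more time
# scrolling -> more impressions -> more revenue) is the one alleged.

CPM = 8.00           # hypothetical price per 1,000 impressions, in dollars
ADS_PER_MINUTE = 4   # hypothetical ads shown per minute of scrolling

def daily_ad_revenue(users: int, minutes_per_user: float) -> float:
    """Revenue from one day of scrolling, under the assumptions above."""
    impressions = users * minutes_per_user * ADS_PER_MINUTE
    return impressions / 1000 * CPM

# Doubling average time-on-app doubles impressions, and therefore revenue:
print(daily_ad_revenue(300_000, 30))  # -> 288000.0 (dollars)
print(daily_ad_revenue(300_000, 60))  # -> 576000.0 (dollars)
```

Whatever the real numbers, the structure of the model is linear in attention: every additional minute of teen scrolling is additional sellable inventory, which is precisely the incentive the Commonwealth says distorted Meta's design decisions.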
The Four Claims Against Meta
The Commonwealth brought four counts under Massachusetts consumer protection law and public nuisance doctrine.
Count I alleged unfair business practices through addictive design. According to the complaint, Instagram's features (infinite scroll, autoplay, ephemeral content that disappears after 24 hours, and notifications delivered on an intermittent variable reward schedule) were deliberately engineered to exploit teenagers' neurological vulnerabilities. These features, the Commonwealth argued, trigger dopamine responses, prey on teens' fear of missing out, and induce compulsive use that disrupts sleep, stunts socioemotional development, and damages mental health.
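For readers unfamiliar with the behavioral term, an intermittent variable reward schedule delivers stimuli at unpredictable intervals rather than on a fixed clock; reinforcement research associates that unpredictability with compulsive checking. A minimal sketch of the difference, with entirely hypothetical timing parameters and no claim to reflect Meta's actual systems:

```python
# A schematic contrast between fixed and variable notification timing.
# All parameters are hypothetical; this is not Meta's code.
import random

def fixed_schedule(n_events: int, interval_min: float = 30.0) -> list[float]:
    """Fixed schedule: one notification every `interval_min` minutes (predictable)."""
    return [i * interval_min for i in range(1, n_events + 1)]

def variable_schedule(n_events: int, mean_min: float = 30.0) -> list[float]:
    """Variable schedule: the same average rate, but each gap is drawn at
    random, so the user can never predict when the next reward arrives."""
    t, times = 0.0, []
    for _ in range(n_events):
        t += random.expovariate(1.0 / mean_min)  # random gap, ~30 min on average
        times.append(round(t, 1))
    return times

print(fixed_schedule(5))     # [30.0, 60.0, 90.0, 120.0, 150.0]
print(variable_schedule(5))  # e.g. [4.2, 51.7, 63.0, 118.9, 131.4]
```

The two schedules deliver the same number of notifications on average; the complaint's objection is to the unpredictability itself, which it says keeps teens returning to the app.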
Count II alleged deceptive business practices through false safety claims. According to the complaint, even as internal company studies showed that Instagram caused addiction and harmed young users, Meta's executives, including its CEO and global head of safety, publicly assured lawmakers, investors, and families that the platform was safe and that user wellbeing was a priority. The complaint identified specific public statements it characterized as deliberately misleading and contradicted by Meta's own internal data.
Count III alleged both deceptive and unfair practices related to users under thirteen. Meta publicly claimed to bar underage users from the platform while, the Commonwealth alleged, knowing that hundreds of thousands of underage accounts existed and that its age-verification measures were ineffective. Meta also allegedly refused to invest in better safeguards despite knowing that its youngest users faced acute harm from the platform's addictive features.
Count IV alleged public nuisance. By engaging in all of the above conduct, the Commonwealth argued Meta created and maintained a public nuisance of youth addiction to Instagram, significantly interfering with the health and safety of young people across Massachusetts.
The Legal Battleground: What Is Section 230?
Meta’s primary defense was Section 230(c)(1) of the Communications Decency Act of 1996, which states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
The provision was born of a specific legislative anxiety in the mid-1990s. Two cases, Cubby v. CompuServe and Stratton Oakmont v. Prodigy, had together created a perverse incentive for internet platforms. In Cubby, CompuServe escaped liability for defamatory posts because it exercised no editorial control over its forums and was treated as a mere distributor. In Prodigy, by contrast, the court found that because the service exercised some editorial control over its bulletin boards, it could be held liable as a publisher for defamatory posts by third-party users. The implication: the more a platform tried to moderate content, the more legal exposure it took on. Congress enacted Section 230 to eliminate that chilling effect, allowing platforms to moderate without becoming publishers in the legal sense.
Over the decades since, Section 230 has been interpreted broadly to shield platforms from nearly any claim arising from user-generated content. Meta argued the same principle applied here: any liability connected to what appears on Instagram necessarily implicates its role as a publisher of third-party content.
The Three-Part Test for Immunity
To invoke Section 230(c)(1) protection, a defendant must satisfy three requirements. First, it must be a provider of an “interactive computer service.” Second, the claim must treat the defendant as the publisher or speaker of information. Third, that information must have been provided by another information content provider, not by the platform itself.
Both sides agreed Meta satisfied the first requirement. The entire fight centered on the second and third.
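Because the test is conjunctive, failing any one prong defeats immunity. That structure can be rendered schematically; the sketch below is purely illustrative and is no substitute for the legal analysis that follows.

```python
# A schematic rendering of the three-prong Section 230(c)(1) test.
# Illustrative only; prong values reflect this article's summary of the case.
from dataclasses import dataclass

@dataclass
class Claim:
    interactive_computer_service: bool     # prong 1: defendant provides an ICS
    treated_as_publisher_or_speaker: bool  # prong 2: claim targets publishing
    info_from_another_provider: bool       # prong 3: content came from a third party

def section_230_immunity(claim: Claim) -> bool:
    """All three prongs must be satisfied; any failure defeats immunity."""
    return (claim.interactive_computer_service
            and claim.treated_as_publisher_or_speaker
            and claim.info_from_another_provider)

# Prong 1 was conceded; the SJC found the design claims fail prongs 2 and 3.
design_claim = Claim(True, False, False)
print(section_230_immunity(design_claim))  # False -> no immunity, suit proceeds
```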
The Court’s Core Legal Reasoning
The heart of the dispute was what it means for a claim to “treat” a platform “as the publisher” of information. Meta urged the broadest possible reading: any claim implicating a platform’s editorial decisions — including algorithmic choices about what to show, when, and to whom — should be immune.
The SJC rejected this. Drawing on the common-law roots of publisher liability, the court held that Section 230(c)(1) immunity requires two distinct elements: a dissemination element (the defendant published information to a third party) and a content element (the claim seeks to hold the defendant liable for the content of that information). Publisher liability at common law has always been a form of vicarious liability — holding an intermediary responsible for the harmful nature of what a third party said, not simply for the act of distributing it.
The implication is significant: a platform’s general publishing activities, including algorithmic ones, do not automatically trigger Section 230 immunity. Only claims targeting liability for harmful content itself — the actual words, images, or information provided by users — fall within the statute’s protection. As the court put it, quoting the Fourth Circuit: “Protection under Section 230(c)(1) extends only to bar certain claims imposing liability for specific information that another party provided.”
On Count I, the court found the addictive design claim does not seek to hold Meta liable for the content of any third-party post. Infinite scroll, autoplay, ephemeral stories, and variable reward notifications are about how information reaches users, not what the information says. The claim is, in the court’s words, “indifferent as to the content published.” A notification engineered to trigger a dopamine response is manipulative regardless of whether the underlying post is about sports, politics, or food. Meta’s counterargument — that without third-party content these features could not cause harm — was dismissed as a “but-for” causation theory that courts have consistently rejected.
On Counts II and III, the deception claims were even more straightforward. Section 230(c)(1) protects against liability for others’ speech. It has never shielded a platform’s own affirmative misrepresentations. Meta’s public statements about safety were Meta’s own speech, not third-party content, and no immunity attached. The defective age-gating design claim similarly survived because the alleged harm flows from a broken verification mechanism, not from the content of any message a child might encounter.
Because Counts I through III survived, the public nuisance claim in Count IV — which was entirely predicated on the same conduct — also proceeded.
The Procedural Question: Could Meta Even Appeal?
Before reaching the merits, the court addressed whether Meta had the right to appeal at all before a final judgment. Meta invoked the “doctrine of present execution,” an exception that allows immediate appeal where the right being claimed is the right not to be sued in the first place — a right that would be permanently lost if litigation continued to trial.
The SJC agreed. Reading Section 230(e)(3) — which states “no cause of action may be brought and no liability may be imposed” — the court held the statute grants immunity from suit itself, not merely from an adverse verdict. That immunity is meaningless if it can only be vindicated after a full trial. The appeal was properly before the court.
Why This Decision Matters
The Massachusetts ruling arrives as courts across the country and Congress are reconsidering Section 230’s scope. A federal multidistrict litigation making nearly identical claims against Meta is currently pending before the Ninth Circuit, with oral argument held in January 2026. The SJC expressly declined to follow the district court’s broader reasoning in that case, finding it insufficiently grounded in the common-law history of publisher liability.
The decision does not eliminate Section 230 as a defense. It establishes, at least in Massachusetts, that the immunity has principled limits tethered to what Congress actually intended: protecting platforms from liability as intermediaries for the harmful content of their users, not insulating them from accountability for their own misrepresentations or for design and engineering choices that allegedly exploit children.
The case now returns to Superior Court, where all four claims will proceed to discovery and potentially trial. For a company that built a global empire on the attention of young people, the legal reckoning may be just beginning.