The Jury That Put the Feed on Trial
Why the age of addictive design may be nearing its bill-collection phase
I think the most important tech story of the past week came out of a courtroom, which is a fine little humiliation for an industry that prefers to speak as though history happens inside product launches. Silicon Valley spent years telling the public that the feed was basically weather. It was there, it was everywhere, and if a child got swept away in it, that was sad, regrettable, and somehow nobody’s fault in particular. Then, on March 25, 2026, a Los Angeles jury found Meta and YouTube liable in a youth addiction case and awarded $6 million to a young woman who said the platforms had addicted her as a child and worsened her mental health. One day earlier, a New Mexico jury hit Meta with $375 million over child safety and deceptive practices. Two verdicts in two days is not a fluke. That is a pattern beginning to speak, as described in this breakdown of the Los Angeles case and this broader reading of the two verdicts together.
What matters here is not the dollar amount. Six million dollars is barely a twitch for a company of Meta’s size. What matters is the sentence attached to the money. A jury was willing to treat the feed as a product with design features, foreseeable harms, and makers who could be blamed. That is a different grammar from the one the platforms have used for years. The old grammar said they host speech and users make choices. The new grammar says the loop itself may be defective. Once that sentence enters ordinary civic life, a great many engineers, lawyers, and professional excuse manufacturers will discover that “engagement” was a very expensive euphemism, as argued in this legal reading of the California verdict.
I. A week like this changes the mood
Most people do not wake up eager to discuss product liability doctrine. They wake up, check their phone, and try to get through the day without being quietly robbed of attention, patience, or peace. That is why this story lands. Everyone already suspected something was wrong. Parents saw bedtime disappear into scroll-hypnosis. Teachers saw attention span ground into sawdust. Teenagers learned to measure their worth through systems built by men who spoke about connection in public and retention in private. The courts did not invent that unease. They gave it legal language, which is what institutions do when they finally arrive at a truth the kitchen table already knows.
That mood shift is the first reason this week matters. For years, the platforms benefited from a cultural bluff. They acted as though their products were neutral environments, like digital parks where unfortunate things occasionally happened. The Los Angeles verdict cuts through that bluff. The plaintiff, identified in court as K.G.M., argued that she began using YouTube at six, opened Instagram at nine, and spiraled into compulsive use through her early teens. Her lawyers pointed toward familiar features such as infinite scroll, autoplay, notifications, recommendation loops, beauty filters, variable reward systems, and weak age barriers. The jury found Meta and YouTube negligent and also found that both companies acted with malice, oppression, or fraud. That is not a flirtation with accountability. That is a civic judgment, and the reporting on the case makes clear how sharply the jurors took the design question.
The New Mexico verdict made the same week feel even heavier. There, a jury ordered Meta to pay $375 million after finding that the company misled consumers about safety and enabled harm involving children on its platforms. The legal posture was different from that of the Los Angeles addiction case, yet the public lesson was similar. The companies had spent years insisting that harms flowed from bad users, bad families, or the unavoidable tragedy of scale. Juries in two states looked at the same empire and saw intention in the machinery. Silicon Valley prefers to sound inevitable. Juries prefer nouns and verbs. Someone built this. Someone knew things. Someone kept going. That is the whole melody.
II. The case was about design, and that is the hinge
The most important thing to understand about the Los Angeles case is that it was not mainly about posts. It was about architecture. That distinction sounds dry until one sees what it does. If the case were mainly about user-generated content, the firms could keep hiding behind Section 230 and mumbling about free expression until everyone in the room fell asleep. The plaintiff’s team went another route. They argued that the harm flowed from engineering choices. Infinite scroll was a choice. Autoplay was a choice. Notifications calibrated for return behavior were a choice. Recommendation systems that steer a vulnerable child toward more of the same material were a choice. Beauty filters that distort self-image were a choice. A product made of choices can be examined like any other product.
That is why the verdict feels larger than one plaintiff. It does not merely say that social media can hurt people. Everyone with a functioning nervous system knew that already. It says the system may hurt people because it was built to do precisely what it does. A post can wound, but the feed decides which wound gets reopened, which insecurity gets fed, which hour of weakness gets targeted, which child keeps scrolling after midnight. In one especially clear account of the case, the issue is framed around recommendation systems and the active steering of users toward material likely to deepen distress. That is the hinge. The platforms were never passive shelves. They were active sorting engines, reward engines, mood engines. A shelf does not study your pulse. A shelf does not learn what keeps you awake. A shelf does not tap you on the shoulder the moment your self-command is weakest.
The Section 230 angle reveals even more. For years, the firms benefited from category confusion. They wanted every criticism to look like an attack on speech moderation. The plaintiffs asked the simpler question: what if the dangerous part is not only what appears on the screen, but the loop that keeps a minor in front of it? As the legal analysis of the ruling explains, Judge Carolyn Kuhl had already signaled this path by distinguishing content claims from product-design claims. That distinction matters because it lets courts ask about what the company built rather than what a user posted. Once that happens, the conversation gets colder. Cold is good. Cold language is often the first sign that sentimentality is losing.
III. Six million is tiny. The precedent is not.
No one should pretend that $6 million is a mortal wound to Meta. The company can find that amount in the couch cushions of its empire. The money is not the danger. The danger is that one jury has shown another jury how to think. Reporting on the verdict notes that thousands of related cases remain in the pipeline. Even if that number changes through settlement or consolidation, the strategic fact remains. This was a test case, and the plaintiffs did not walk away empty-handed.
That is where the comparisons to tobacco start to make sense. People should be cautious with historical analogies because they are often used as stage props for mediocre arguments. Still, the comparison has bite. Tobacco litigation became dangerous when the public stopped seeing cigarettes as mere consumer choice and started seeing them as engineered products sold under false or incomplete assurances. Something similar is underway here. The feed used to be treated as culture. Courts are starting to treat it as machinery. Machinery can be redesigned. Machinery can also be fined, restricted, and hauled through discovery until its makers start sounding less like priests of the future and more like men who misplaced some paperwork.
That is exactly the point made in the Center for Humane Technology’s response to the verdicts. The old immunity structure was designed to protect platforms from liability for user speech. It was not designed to bless every compulsion lever engineers could devise to keep children scrolling. Once states, plaintiffs, and juries begin focusing on design, deception, and product conduct rather than content moderation, the old shield becomes thinner. Lawmakers noticed this quickly. The political class has a gift for arriving late and then talking as though it discovered the continent. Still, once the legal frame changes, politicians follow, and with them come hearings, proposed reforms, and a sudden wave of public seriousness that would have been unthinkable a few years ago.
IV. Families knew the truth before the law did
One reason this story lands so hard is that it feels familiar. Many parents did not need seven weeks of testimony to know the phone had become a thief in the house. They had already watched a child wake to it, eat beside it, retreat into it, and drift under its instruction. They had watched bedtime turn into quiet captivity. They had watched self-image, mood, and patience begin taking orders from a recommendation engine. Families often knew the product was behaving like a trap long before institutions found the courage to say it aloud. The law, as usual, arrived late and looked pleased with itself for showing up.
What the platforms offered families for years was a shell game dressed up as moral seriousness. Parents should supervise. Users should exercise self-control. Teens should build resilience. Schools should educate. All of those claims contain some truth. They also function beautifully as methods for relocating blame away from the machine. If a child melted into the feed, the defect must lie in the child, the family, the school, or the culture. Never in the loop. That line now looks weaker because juries have begun saying the quiet part in public. A system built to maximize time-on-platform may be incompatible with the well-being of developing minds. One did not need a law degree to suspect this. Still, it is pleasant when the institutions finally stop behaving like bewildered furniture.
There is something morally clarifying in the plainness of the features at issue. Infinite scroll does not look sinister. Autoplay does not arrive twirling a moustache. Notifications are tiny. Filters look playful. Recommendation systems sound technical enough to bore a room into surrender. Yet whole forms of disorder are built out of small repeated choices. That is how decline often works. It does not enter with banners and drums. It enters by convenience. Tap here. Swipe there. Stay a little longer. As the “addiction by design” framing makes clear, the machinery is ordinary, which is precisely why it can become pervasive before anyone names the danger.
V. This is bigger than social media
I do not think this story ends with Meta or YouTube. Once courts and juries become comfortable asking whether a digital product was deliberately built to hook, soothe, flatter, and retain a vulnerable user, the question will spread. It will move toward AI companions, synthetic friends, therapeutic chatbots, game economies, productivity tools built around compulsion, and every polished system whose business model depends on keeping a person emotionally entangled. That is why some writers immediately connected the Meta verdicts to the coming wave of AI addiction cases. The design logic is similar. The dependency mechanisms are similar. The corporate instinct to speak in the language of care while monetizing attachment is very similar.
This is the part Silicon Valley should fear most. The old bargain was splendid while it lasted. Platforms could reshape attention, mood, habits, and childhood itself while speaking as though they merely hosted expression. That bargain is cracking. The jury in Los Angeles did not view the feed as neutral weather. It viewed it as something built by adults who could have made different choices and, according to the plaintiffs, chose profit over restraint. The jury in New Mexico reached a separate but adjacent conclusion about safety and deception. Put those together and the pattern is hard to miss. The legal system is beginning to treat digital products like products. Obvious statements have an odd way of becoming revolutionary after a decade of strategic nonsense.
I would not call this a victory. The feed is still here. The children are still here. The appeals are coming. The lobbyists will earn their keep. The next decade will contain more euphemism than any decent civilization should have to endure. Still, something changed this week. A jury looked at a glowing loop built for children and called it what it was: a designed system with foreseeable harms. That is a colder way of speaking than the industry prefers. It is also a saner one. The age of unaccountable design is not over. The envelope has simply arrived. The bill is inside.