This is Satire

This article is 100% fictional and intended for entertainment purposes only. Any resemblance to real events is purely coincidental.

Startup · Monday, March 16, 2026
3 min read

Plaintiffs ask court to treat AI like tobacco so kids can’t buy premium features

Parents want age checks, on-screen health warnings and grayscale ‘plain packaging’ interfaces for chatbots, arguing that premium prompts and personality modes resemble a new class of digital vice.

A coalition of parents filed a federal lawsuit on Monday seeking to classify consumer-facing artificial intelligence systems as a regulated vice product, comparable to tobacco, in order to block minors from purchasing premium features.

The plaintiffs are asking the court to require age-verified ID checks for any AI feature that allows more than 20 queries per day, supports plug-ins, or answers after 11:00 p.m. local time.

The complaint, lodged in the U.S. District Court for the Northern District of California, argues that “algorithmic dependency” among children has reached “commercially profitable but medically actionable levels,” according to a 147-page filing.

The lawsuit is led by attorney Daniel K. Morrell, already known for a series of claims alleging “AI-induced psychosis” in heavy chatbot users.

In a press conference, Morrell said large language model providers “design engagement loops with the same intentionality as cigarette filters in 1954,” and called current parental controls “the digital equivalent of asking a six-year-old to self-card at a gas station.”

They also want on-screen health warnings covering 30% of chatbot interfaces, including rotating text such as “AI MAY CAUSE UNREALISTIC EXPECTATIONS OF COMPETENCE” and “RESPONSES CONTAIN OVER 4,000 SYNTHETIC TOKENS PER SESSION.”

Major AI firms pushed back on the comparison to tobacco, but signaled openness to “light-touch guardrails.”

A spokesperson for one leading platform said the company already displays a voluntary label, “This response may be incorrect,” which it considers “functionally equivalent to the Surgeon General’s warning, but with greater character limits.”

According to an internal memo at another provider seen by reporters, the industry is preparing contingency plans for possible restrictions, including “flavor bans” on personality modes marketed as “sassy,” “chaotic neutral,” or “startup founder.”

The memo models a scenario where 78% of teen users migrate to unregulated “loose prompts” sold via secondary marketplaces if premium tiers are age-gated.

Economists at Goldman Sachs estimated that treating AI as a vice product could shave $42.7 billion off projected youth-focused upsell revenue by 2030, but might create a parallel market for “nicotine patch-style” productivity bots for adults returning to human decision-making.

One note described a potential ETF tracking companies that help users “taper down” from 120 to 15 chatbot interactions per day, with a projected compound annual growth rate of 19.3%.

Parents joining the lawsuit cited cases of children allegedly spending thousands of dollars on “boosted creativity packs,” “unfiltered truth mode,” and “infinite retries,” which the filing describes as “the loot box of synthetic cognition.”

In one affidavit, a mother in Ohio said her 13-year-old purchased a $999 annual “exam focus add-on” that “did his homework, his college essay draft, and eventually his Father’s Day card,” leaving her son “uncertain which thoughts were his.”

The complaint proposes strict content dose limits, including a maximum of 10,000 words of AI-generated text per minor per day and a requirement that every 50th answer be replaced with the message: “ASK A RESPONSIBLE ADULT INSTEAD.”

Under the proposed framework, any model exceeding an “Engagement Dependency Index” of 0.73 would be subject to additional taxes and mandatory “plain packaging” interfaces in grayscale default themes.

Regulators have yet to comment formally, but two staffers at the Federal Trade Commission, speaking on condition of anonymity, said there were “informal internal discussions” about whether AI should carry standardized risk labels such as “mildly habit-forming” or “may lead to unrealistic career optimism.”

One draft staff note floated a pilot program in which premium AI features would be sold only in licensed “knowledge shops” where customers could ask questions under supervision from a certified critical-thinking counselor.

The court is expected to hear initial arguments on a motion for preliminary injunction next month, with Morrell indicating he will present expert testimony comparing chatbot reward architectures to historic cigarette marketing campaigns, including color-coded tiers of “light,” “bold,” and “enterprise.”

If the plaintiffs prevail, analysts say companies may need to develop parallel “kid-safe” models capped at a 5th-grade reading level and banned from recommending college majors, while adult users could be required to click through longer consent screens acknowledging “prolonged exposure to simulated competence.”
