Meta Staff Called Themselves ‘Pushers’ as Company Hid Kids’ Mental Health Risks, Unsealed Records Show


Parents, teachers and even teens themselves have wondered how much social media is really affecting young people’s minds. Now, newly unsealed court documents are pulling back the curtain on what some tech employees were saying behind closed doors—comparing apps to drugs, debating research that linked Facebook use to worse mood, and arguing over how far to go with child safety. The filings do not settle every debate about social media and mental health, but they give a rare look at how growth, engagement and youth wellbeing have been weighed inside the companies that shape so much of children’s online lives.

Tech Insiders Described Their Own Apps as “Addictive”

Newly unsealed court filings in a major lawsuit against Meta, Google, TikTok and Snap claim that, behind the scenes, some staff talked about their apps almost like addictive substances. In one internal Meta chat, a senior researcher allegedly wrote “IG is a drug,” and a colleague replied, “We’re basically pushers.” At Snap, an employee joked about “mass psychosis” around “Snapstreaks,” and TikTok documents referenced criticism that its Chinese version is like “spinach” for kids, while the version used elsewhere is more like “opium” for attention.

The filings say this was not just dark humour. Expert reports describe internal research and messages suggesting that leaders understood how endless feeds, notifications and streaks could worsen anxiety, depression, body image issues, disordered eating and even suicidal thoughts in teens. According to the plaintiffs, features were still tuned to keep young people scrolling longer, not to reduce harm, and parents were not clearly told what the company knew.

The lawsuit also alleges serious failures on child safety. Meta is accused of setting high strike thresholds before banning accounts suspected of sex trafficking, and of delaying changes that would have limited contact from unknown adults because those changes might reduce engagement.

All of the companies reject these claims. Meta and others argue that the filings cherry-pick quotes, misread internal discussions and ignore the safety tools they have built for teens and families. The court case is ongoing, but the picture emerging from the documents mirrors what many parents and young users already sense: the way these platforms are built can feel less like a neutral tool and more like something designed to be hard to put down, even when it starts to hurt.

Inside Project Mercury: What Happened After Meta Saw Signs of Harm

A central claim in the filings involves “Project Mercury,” a 2020 Meta study run with research firm Nielsen. Participants were asked to stop using Facebook for a week. Internal documents say those who deactivated reported lower depression, anxiety, loneliness and social comparison after just seven days off the platform. Instead of expanding or publishing the work, Meta allegedly shut the project down and blamed the existing media narrative for the negative results.

Inside the company, not everyone agreed. One staff researcher reportedly wrote, “The Nielsen study does show causal impact on social comparison,” followed by a sad-face emoji. Another allegedly compared keeping the findings quiet to the tobacco industry doing research, learning cigarettes were harmful and then hiding the information. Yet, according to the filings, Meta later told Congress it had no way to quantify whether its products were harming teenage girls.

Plaintiffs say this fits into a wider pattern of choosing growth over safety. The documents claim Meta:

  • Designed youth safety tools in ways that were weak or rarely used.
  • Blocked tests of stronger protections when they might reduce engagement.
  • Recognized that boosting teen engagement meant serving more harmful content, but proceeded anyway.
  • Delayed efforts to restrict child predators’ contact with minors while asking safety teams to justify inaction.

Meta strongly disputes these allegations, arguing the study’s methods were flawed and that its teen safety work is effective, while the underlying records continue to be contested in court.

How Alleged Design Choices Put Young Users in Harm’s Way

The filings describe not just addictive features, but specific risks for children and teens that platforms allegedly saw and failed to address quickly. According to the court documents, Meta and other companies were aware of links between their design choices and problems like poor body image, self-harm, sexual exploitation and classroom distraction, yet still prioritised engagement.

Internal materials and expert reports cited in the case claim that platforms:

  • Amplified harmful content: Meta allegedly knew that tuning algorithms to increase teen engagement led to more content about eating disorders, body dysmorphia and self-harm, but continued to optimise for time spent in the app.
  • Tolerated underage use: Companies are accused of quietly benefiting from children under 13 using their platforms, despite official minimum age rules.
  • Were slow on sexual exploitation: One filing says Meta allowed accounts to rack up repeated violations for attempted sex trafficking before removal, calling the strike threshold “very, very, very high.” Efforts to limit contact between predators and minors were reportedly delayed.
  • Pushed teen use during school hours: Platforms allegedly worked to deepen student use while they were in class, even as schools reported attention and mental health issues.
  • Sought friendly voices: TikTok is accused of sponsoring the National PTA and then bragging internally that the group would back its public safety messaging.

Meta, TikTok and others reject these claims and say their child safety and reporting tools are strong and actively enforced.

What Meta and Other Platforms Say in Their Defense

Alongside the internal messages and research cited in the filings, there is a very different public story from Meta and other platforms. When criticised over youth mental health and safety, these companies point to feature rollouts, safety centers and partnerships with outside experts as proof that protecting young people is a priority.

Meta has argued that the lawsuits rely on “cherry-picked” internal quotes and misinterpretations. The company points to tools such as Instagram Teen Accounts, parental controls, and content filters as evidence that it invests heavily in safety. Meta has also said that studies like Project Mercury were stopped because of flawed methods, not because executives wanted to hide negative results.

Google makes similar claims about YouTube, stressing restricted modes, supervised accounts and default settings for younger users. TikTok has disputed interpretations of internal documents and highlighted its screen-time limits, age-based content ratings and family pairing controls. When filings describe alleged attempts to influence child-focused groups, TikTok says those documents are being taken out of context.

From a parent or teen’s perspective, this creates a confusing gap. On one side are legal filings and leaks suggesting companies knew far more about risks than they shared. On the other side are polished dashboards and safety toolkits that promise control. Understanding this gap is key to making sense of what these platforms can realistically offer—and what families cannot safely outsource to them.

Practical Ways Families Can Respond Right Now

The court fight is complex. Daily life is not. Most families need simple moves they can actually stick to, not perfect digital diets.

A practical starting point:

  • Protect sleep and mornings: Keep phones and tablets out of bedrooms at night and off for the first 30–60 minutes after waking. Good sleep and a calmer start to the day do more for mood than any app ever will.
  • Pick a few “screen-free anchors”: Mealtimes, homework blocks, sports, or a set evening hour can be phone-free by default. It is easier to protect specific moments than to “use less” in general.
  • Talk about feeds, not just rules: Ask what keeps your child or teen scrolling, what shows up most, and how it makes them feel. If a feed is full of dieting, self-criticism or violence, sit together and unfollow, mute, or block those accounts and search for healthier ones.
  • Watch for signs it is “too much”: Notice big mood drops after scrolling, fights about putting the phone down, slipping grades, or losing interest in friends and hobbies. These are cues to tighten limits and, if needed, check in with a health professional.

The Real Lesson for Families After the Meta Revelations

The unsealed documents do not just expose internal jokes or clumsy emails. They suggest that major platforms saw real signs of harm—ranging from worsened mood and body image to child safety risks—and still chose designs that kept young people scrolling. Even if courts spend years arguing over the details, one fact is hard to ignore: these products are built and tuned by companies that measure success in attention, not wellbeing. That alone is a strong reason to stop treating social apps as harmless background noise in kids’ lives.

For families, schools and young people, the most practical shift is mindset. Social media is a powerful commercial product, not a neutral space. It can connect, entertain and inform, but it can also quietly erode sleep, focus, confidence and safety if left entirely on autopilot. The call to action is simple and urgent: question the defaults, use the settings, set real boundaries, and talk honestly about what shows up on screens and how it feels. Policy changes and court rulings may come later; the choice to use these platforms with eyes open, not on blind trust, starts now.
