Internal Meta records pertaining to child safety have been made public in the course of a lawsuit brought by the New Mexico Department of Justice against Meta and its CEO, Mark Zuckerberg. The documents show that Meta knew about the large volume of inappropriate and sexually explicit content being shared between adults and minors, and that it deliberately marketed its messaging platforms to children.
The records, made public on Wednesday as part of an amended complaint, show several instances in which Meta employees raised internal concerns about the company's private messaging systems being used to exploit children and teenagers. Despite being aware of the risks that Instagram and Messenger DMs posed to underage users, Meta failed to prioritize safeguards, or outright blocked child safety features, because they weren't profitable.
New Mexico Attorney General Raúl Torrez said in a statement to TechCrunch that Zuckerberg and Meta enabled child predators to sexually exploit children. He recently raised concerns over Meta enabling end-to-end encryption for Messenger, which began rolling out last month. In a separate filing, Torrez argued that Meta had failed to address child exploitation on its platform, and that encryption without proper safeguards would put children at even greater risk.
“For years, Meta employees tried to sound the alarm about how decisions made by Meta executives subjected children to dangerous solicitations and child exploitation,” Torrez continued. “Meta executives, including Mr. Zuckerberg, consistently made decisions that put growth ahead of children’s safety. While the company continues to downplay the illegal and harmful activity children are exposed to on its platforms, Meta’s internal data and presentations show the problem is severe and pervasive.”
Originally filed in December, the lawsuit alleges that Meta platforms like Instagram and Facebook have become “a marketplace for predators in search of children upon whom to prey,” and that Meta failed to remove many instances of child sexual abuse material (CSAM) after they were reported on Instagram and Facebook. After creating decoy accounts purporting to belong to 14-year-olds or younger, the New Mexico DOJ said Meta’s algorithms surfaced CSAM, as well as accounts facilitating the buying and selling of it. According to a press release about the lawsuit, “certain child exploitative content is over ten times more prevalent on Facebook and Instagram than it is on Pornhub and OnlyFans.”
“We want teens to have safe, age-appropriate experiences online, and we have over 30 tools to support them and their parents,” a Meta spokeswoman said in response to the complaint. “We’ve spent a decade working on these issues and hiring people who have dedicated their careers to keeping young people safe and supported online. The complaint mischaracterizes our work using selective quotes and cherry-picked documents.”
According to the released records, Meta deliberately tried to recruit children and teenagers to Messenger while limiting its safety features in the process. For instance, a 2016 presentation raised concerns about the company’s declining teen user base, noting that teens were spending more time on YouTube and Snapchat than on Facebook, and laid out a strategy to “win over” new adolescent users. In an internal email from 2017, a Facebook executive objected to scanning Messenger for “harmful content,” citing a potential “competitive disadvantage vs other apps who might offer more privacy.”
The documents claim that Meta’s failure to safeguard underage users from sexual exploitation was “all the more egregious” because it knew that children were using its services. The company’s “End Game” was to “become the primary kid messaging app in the U.S. by 2022,” according to a 2020 presentation, which also noted Messenger’s popularity with children aged six to ten.
Meta’s internal acknowledgment of its child safety problems is particularly damning. An internal presentation from 2021, for instance, estimated that 100,000 children per day were sexually harassed on Meta’s messaging platforms and received sexually explicit content, such as photos of adult genitalia. In 2020, after an Apple executive complained that their 12-year-old had been solicited on Instagram, Meta employees worried that the platform might be pulled from the App Store.
An internal document said, “This is the kind of thing that pisses Apple off.” Workers also inquired about Meta’s timeline for putting an end to “adults messaging minors on IG Direct.”
Another internal document from 2020 revealed that Instagram lacked the safeguards in place on Facebook, such as blocking “unconnected” adults from messaging minors; the same protections were “not prioritized” for Instagram. A Meta employee criticized the company for weighing the “big growth bet” of letting adult relatives reach out to minors on Instagram Direct, calling it a “less than compelling” justification for not implementing safety features. The employee also noted that grooming occurred twice as often on Instagram as on Facebook.
In a March 2021 presentation on child safety, Meta discussed grooming and said that Facebook and Messenger had “more mature” “measurement, detection, and safeguards” for grooming than Instagram did. The presentation stated that Meta was “underinvested in minor sexualization on IG,” notably in the form of sexualized comments left on minors’ posts, and called the issue a “terrible experience for creators and bystanders.”
“Child exploitation is a horrific crime and online predators are determined criminals,” a Meta spokesperson told TechCrunch. “We use sophisticated technology, hire child safety experts, report content to the National Center for Missing and Exploited Children, and share information and tools with other companies and law enforcement, including state attorneys general, to help root out predators. In one month alone, we disabled more than half a million accounts for violating our child safety policies.”
Meta has long faced criticism for failing to adequately moderate CSAM. Major U.S. social media companies are legally required to report any instances of CSAM to the CyberTipline run by the National Center for Missing & Exploited Children (NCMEC). NCMEC’s most recently released data, from 2022, shows that Facebook submitted around 21 million CSAM reports, accounting for roughly 66% of all reports sent to the CyberTipline that year. When reports from Instagram (5 million) and WhatsApp (1 million) are included, Meta platforms account for about 85% of all reports submitted to NCMEC.
Meta’s enormous user base of over 3 billion daily active users may partly explain this outsized share, and a Meta representative said the figures are the result of proactive detection. Nevertheless, in response to numerous investigations, authorities around the world have argued that Meta isn’t doing enough to reduce these millions of reports. In June, Meta told the Wall Street Journal that it had taken down 27 pedophile networks over the previous two years, yet investigators were still able to find a large number of interconnected accounts that buy, sell, and distribute CSAM. In the five months following the Journal’s report, Meta’s recommendation algorithms continued to surface CSAM; while Meta removed certain hashtags, other pedophilic hashtags appeared in their place.
Meanwhile, 42 U.S. state attorneys general are also suing Meta over its platforms’ effects on children’s mental health.
“We see that Meta knows that its social media platforms are used by millions of kids under 13, and they unlawfully collect their personal info,” California Attorney General Rob Bonta told TechCrunch in November. “It shows that common practice where Meta says one thing in its public-facing comments to Congress and other regulators, while internally it says something else.”