The plaintiffs’ brief alleges that Meta was aware that its platforms were endangering young users, including by exacerbating adolescents’ mental health issues. According to the plaintiffs, Meta frequently detected content related to eating disorders, child sexual abuse, and suicide but refused to remove it. For example, one 2021 internal company survey found that more than 8 percent of respondents aged 13 to 15 had seen someone harm themself or threaten to harm themself on Instagram during the past week. The brief also makes clear that Meta fully understood the addictive nature of its products, with plaintiffs citing a message by one user-experience researcher at the company that Instagram “is a drug” and, “We’re basically pushers.”

Perhaps most relevant to state child endangerment laws, the plaintiffs have alleged that Meta knew that millions of adults were using its platforms to inappropriately contact minors. According to their filing, an internal company audit found that Instagram had recommended 1.4 million potentially inappropriate adults to teenagers in a single day in 2022. The brief also details how Instagram’s policy was to not take action against sexual solicitation until a user had been caught engaging in the “trafficking of humans for sex” a whopping 17 times. As Instagram’s former head of safety and well-being, Vaishnavi Jayakumar, reportedly testified, “You could incur 16 violations for prostitution and sexual solicitation, and upon the seventeenth violation, your account would be suspended.”

  • FriendOfDeSoto@startrek.website
    17 hours ago

    If these things were clean cut, they would have been dragged to court already many times over. For messing with teenage girls for a laugh 10 years ago. For tacitly approving genocide in Myanmar. For cheating on their video views during the highly successful pivot to video. A good lawyer will get them out of this one too with but a slap on the wrist. They exist in a gray zone where they can fuck up as much as they want to without having to fear great consequences. Vote for politicians who want to regulate these companies more.

    • marx@piefed.social (OP)
      16 hours ago

      Americans, as a general population, don’t give a shit about Myanmar, may not know it even exists. They don’t really care or know about video view controversies and the like.

      One thing they do care A LOT about, is their kids. And the evidence is strong that Mark Zuckerberg and Meta executives knew children, on a mass scale, were being endangered by their products and deliberately, purposely allowed it to continue. They need to be prosecuted. If nobody even tries, then we’ve already lost.

      • FriendOfDeSoto@startrek.website
        12 hours ago

        > Americans, as a general population, don’t give a shit about Myanmar, may not know it even exists.

        I would say that’s irrelevant for the crimes committed. And it’s not just Americans who would struggle to find Myanmar on a map, or who would really care what’s going on there unless it involves rooting out phishing farms that use abducted foreigners.

        I commend your view on the matter: that when it comes to their children, people will do something. That may turn out to be true. However, that’s not going to be enough to get anyone at Meta convicted under the current laws. They operate under a cover of diffuse authority and supervision internally and Section 230 externally. Abhorrent drug-pusher comments are not admissions of guilt. They have good lawyers. We need new laws, more regulation, and fines that make Wall Street worried.

          • marx@piefed.social (OP)
          4 hours ago

          > I would say that’s irrelevant for the crimes committed.

          Irrelevant to the crimes themselves, but very relevant to the political pressure that can be applied to force action.

          We all know the law doesn’t just get applied because it should be. Especially not against the rich. It gets applied, or at least has a chance to be, when enough people are paying attention and demanding justice.

          Also, Section 230 doesn’t apply to criminal prosecution (it may not even apply to the ongoing civil case), and there is strong evidence from the civil case that it was the executives themselves who explicitly chose not to implement safeguards that Meta employees were calling for.

          > We need new laws, more regulation, and fines that make Wall Street worried.

          Absolutely. We need all of that plus way stronger antitrust. And we need the current law applied to bad actors, regardless of their riches.

        • architect@thelemmy.club
          14 hours ago

          America isn’t unique. Look at the British pedophiles, the Japanese pedophiles, and what they do to little boys in half the world, like the Middle East. America? LOL. Meanwhile they cut the clits off little girls in Africa.

          No one cares about kids. Save your America shame.

      • architect@thelemmy.club
        14 hours ago

        I promise you people don’t give a shit about kids or their kids. I was once one and not a single fucking adult gave a shit about the sexual abuse. They get mad at the kids instead. I’ve been there. Actions speak louder than words. Kids are getting shot. Adults do not care about kids as a whole.

      • fishos@lemmy.world
        14 hours ago

        Honestly, I care a dick load more about Myanmar and enabling genocide than “but think of the children!”. That’s one of the laziest and most misused calls to action, and at this point I honestly dgaf when I hear it. It’s just propaganda at this point.

        Don’t be so quick to stereotype us. You’re insulting those of us who do pay attention.

  • ORbituary@lemmy.dbzer0.com
    16 hours ago

    Articles like this are exhausting. Yes. The answer is yes. Will it happen? Drum roll… No. It won’t happen. Need evidence? Look at the United States government.

  • lmmarsano@lemmynsfw.com
    16 hours ago

    > Instagram had recommended 1.4 million potentially inappropriate adults to teenagers in a single day in 2022

    What does that even mean?

    That all still seems like catastrophizing over videos, images, text on a screen that can’t compel action or credible harm. I expect that lawsuit to go nowhere.

    • fonix232@fedia.io
      9 hours ago

      If you seriously think that “videos, images, text on a screen can’t compel action” then you’ve just revoked every single right you had to be part of this discussion.

    • AmbitiousProcess (they/them)@piefed.social
      12 hours ago

      Videos, images, and text can absolutely compel action or credible harm.

      For example, Facebook was aware that Instagram was giving teen girls depression and body image issues, and subsequently made sure their algorithm would continue to show teen girls content of other girls/women who were more fit/attractive than them.

      > the teens who reported the most negative feelings about themselves saw more provocative content more broadly, content Meta classifies as “mature themes,” “Risky behavior,” “Harm & Cruelty” and “Suffering.” Cumulatively, such content accounted for 27% of what those teens saw on the platform, compared with 13.6% among their peers who hadn’t reported negative feelings.

      https://www.congress.gov/117/meeting/house/114054/documents/HHRG-117-IF02-20210922-SD003.pdf

      https://www.reuters.com/business/instagram-shows-more-eating-disorder-adjacent-content-vulnerable-teens-internal-2025-10-20/

      Many girls have committed suicide or engaged in self-harm, at least partly driven by body image issues stemming from Instagram’s algorithmic choices, even if that content is “just videos and images.”

      They also continued to recommend to children under 13 dangerous content that they claimed was blocked by their filters, including sexual and violent content. This type of content is known to have a lasting effect on kids’ wellbeing.

      > The researchers found that Instagram was still recommending sexual content, violent content, and self-harm and body-image content to teens, even though those types of posts were supposed to be blocked by Meta’s sensitive-content filters.

      https://time.com/7324544/instagram-teen-accounts-flawed/

      In the instance you’re specifically highlighting, that was when Meta would recommend teen girls’ accounts to men exhibiting behaviors that could very easily lead to predation. For example, if a man specifically liked sexual content and content of teen girls, it would recommend him content from underage girls attempting to make up for their newly created body image issues by posting sexualized photos.

      They then waited 2 years before implementing a private-by-default policy, under which teen girls’ accounts would no longer be recommended to strangers unless the girls explicitly made their accounts public. Most didn’t. Meta waited that long because internal research showed the change would decrease engagement.

      > By 2020, the growth team had determined that a private-by-default setting would result in a loss of 1.5 million monthly active teens a year on Instagram, which became the underlying reason for not protecting minors.

      https://techoversight.org/2025/11/22/meta-unsealed-docs/

      If I filled your social media feed with endless posts specifically algorithmically chosen to make you spend more time on the app while simultaneously feeling worse about yourself, then exploited every weakness the algorithm could identify about you, I don’t think you’d look at that and say it’s “catastrophizing over videos, images, text on a screen that can’t compel action or credible harm” when you develop depression, or worse.