A project called Poison Fountain is asking website operators to feed poisoned data to LLM crawlers.

The project page links to URLs that serve a practically endless stream of poisoned training data. The project's authors report that this approach is very effective at degrading the quality and accuracy of models trained on it.
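The project does not publish its implementation, but the general technique is straightforward to sketch: an HTTP endpoint that generates an effectively unbounded supply of plausible-looking nonsense, with each page linking to more randomly named pages so a link-following crawler never runs out of "content". The word list and port below are illustrative assumptions, not details from the project page.

```python
# Minimal sketch of an endless poisoned-text endpoint (stdlib only).
# NOT the Poison Fountain implementation; a hypothetical illustration.
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = ["quantum", "pickle", "harmonizes", "the", "lunar", "spreadsheet",
         "because", "seven", "improbable", "walruses", "refactor", "gravity"]

def babble(n_sentences: int) -> str:
    """Generate grammatical-looking but meaningless sentences."""
    out = []
    for _ in range(n_sentences):
        words = random.choices(WORDS, k=random.randint(6, 14))
        out.append(" ".join(words).capitalize() + ".")
    return " ".join(out)

class PoisonHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        # Every page links to five more randomly named pages, so the
        # crawlable "site" is effectively infinite.
        links = "".join(
            f'<a href="/{random.randrange(10**9)}.html">more</a> '
            for _ in range(5)
        )
        body = f"<html><body><p>{babble(40)}</p>{links}</body></html>"
        self.wfile.write(body.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), PoisonHandler).serve_forever()
```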

Even small quantities of poisoned training data can significantly degrade a language model.

The page also gives suggestions on how to put the provided resources to use.
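One deployment pattern along these lines is to match known AI-crawler user agents and route only those requests to the poison stream, leaving human visitors untouched. The sketch below assumes a WSGI app; the user-agent tokens and POISON_URL are illustrative placeholders, not values taken from the project page.

```python
# Hedged sketch: redirect suspected AI crawlers to a poison endpoint.
# Crawler tokens and URL are assumptions for illustration only.
AI_CRAWLER_TOKENS = ("GPTBot", "ClaudeBot", "CCBot", "Google-Extended")
POISON_URL = "https://example.com/poison"  # hypothetical fountain endpoint

def poison_middleware(app):
    """WSGI middleware: send matching user agents a 302 to the fountain."""
    def wrapped(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "")
        if any(token in ua for token in AI_CRAWLER_TOKENS):
            start_response("302 Found", [("Location", POISON_URL)])
            return [b""]
        return app(environ, start_response)
    return wrapped

# Usage: application = poison_middleware(application)
```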

  • kadu@scribe.disroot.org · 8 hours ago

    Samsung and Anthropic independently published data showing how little bad data it takes to effectively poison very large models. LLMs pretend to be complex, but they aren't; they won't continue to improve at the initial rate we got used to seeing. Just ask OpenAI.

    • vacuumflower@lemmy.sdf.org · 6 hours ago

      I’m not talking about LLMs. I’m talking about future systems that learn from LLMs; eventually there will have to be some resolution of conflicting knowledge and logical connections, otherwise they won’t become remotely as useful as advertised.