If somebody wants to use my online content to train their AI without my consent I want to at least make it difficult for them. Can I somehow “poison” the comments and images and stuff I upload to harm the training process?

  • borth@sh.itjust.works · edited · 8 days ago
    Images can be “glazed” with a tool called Glaze, which adds small changes to the images that are unnoticeable to people but very noticeable and confusing for an AI training on them. glaze.cs.uchicago.edu

    They also have another program called Nightshade that is meant to “fight back” by actively poisoning training data, but I’m not too sure how that one works.
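    This isn’t Glaze’s actual algorithm (which targets the feature extractors of real image models), but the general idea of an imperceptible, adversarial perturbation can be sketched with a toy example. Everything here is made up for illustration: a linear “style classifier” stands in for a real model, and an FGSM-style signed-gradient step stands in for Glaze’s optimization. The point is that a change capped at a few percent per pixel can still move the model’s output a lot:

```python
import numpy as np

# Toy sketch of the idea behind Glaze-style perturbations (NOT Glaze's real
# method): nudge each pixel by a tiny, bounded amount chosen to confuse a
# model, while keeping the image visually almost identical.

rng = np.random.default_rng(0)

# Hypothetical linear "style classifier": score = w . x (positive => style A).
w = rng.normal(size=64)           # model weights (stand-in for a real model)
x = rng.uniform(0, 1, size=64)    # an 8x8 "image", flattened to 64 pixels

def score(img):
    return float(w @ img)

# FGSM-style step: move each pixel against the gradient's sign.
# For a linear model, the gradient of the score w.r.t. x is just w.
eps = 0.05                        # max per-pixel change: visually tiny
x_adv = np.clip(x - eps * np.sign(w), 0.0, 1.0)

print(f"original score:   {score(x):+.3f}")
print(f"perturbed score:  {score(x_adv):+.3f}")
print(f"max pixel change: {np.abs(x_adv - x).max():.3f}")
```

    Even though no pixel moves by more than 0.05, the score shifts sharply, because every one of the 64 small changes pushes in the same (model-relevant) direction. Glaze applies the same principle against the feature space of real generative models.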

    • Lurking Hobbyist🕸️@lemmy.world · 8 days ago
      From my understanding, you choose a tag when nightshading, say “hand” because it’s a hand study, and when the bots take the drawing, they get poisoned data: Nightshade distorts what the model “sees” (a human sees a vase with flowers, but the model “sees” a garbage bag). If enough poisoned art is scraped, the machine will be spitting out garbage bags instead of flower vases on dinner tables.
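      That tag/concept mismatch can be sketched with a toy model (not Nightshade’s real method, and all names and numbers here are hypothetical): imagine a “model” that just learns the average feature vector for each tag. Poisoned samples carry the tag “vase” but features that actually look like “garbage bag”, so enough of them drags the learned concept to the wrong place:

```python
import numpy as np

# Toy sketch of concept poisoning (NOT Nightshade's actual algorithm):
# a "model" that learns a tag as the mean of the feature vectors it saw
# labeled with that tag.

rng = np.random.default_rng(1)

vase = np.array([1.0, 0.0])      # hypothetical "vase" feature direction
garbage = np.array([0.0, 1.0])   # hypothetical "garbage bag" direction

def sample(center, n):
    """Draw n noisy feature vectors around a concept center."""
    return center + 0.05 * rng.normal(size=(n, 2))

clean = sample(vase, 100)        # honestly tagged "vase"
poison = sample(garbage, 100)    # ALSO tagged "vase", but looks like garbage

def learned_concept(poison_frac):
    """Train the 'vase' concept on a mix of clean and poisoned samples."""
    n_poison = int(100 * poison_frac)
    data = np.vstack([clean[: 100 - n_poison], poison[:n_poison]])
    return data.mean(axis=0)

for frac in (0.0, 0.8):
    c = learned_concept(frac)
    nearer = "vase" if (np.linalg.norm(c - vase)
                        < np.linalg.norm(c - garbage)) else "garbage bag"
    print(f"poison fraction {frac:.0%}: learned 'vase' looks like {nearer}")
```

      With no poison the learned “vase” sits on the vase cluster; once most of the tagged samples are poisoned, the concept lands closer to “garbage bag”, which is the flower-vases-turn-into-garbage-bags effect described above. Real Nightshade hides the garbage-bag-like features inside images that still look like vases to humans.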