

Adversarial noise is a fun topic and a DIY AI project you can use to familiarize yourself with the local-hosting side of things. Image-generating networks are lightweight compared to LLMs and can be run on a moderately powerful NVIDIA gaming PC (most of my work is done on a 3080).
LLM poisoning can also be done if you can insert poisoned text into a model's training set. One example method is detecting AI scrapers on your server and sending them poisoned text instead of automatically blocking them. Poison Fountain makes this very easy by supplying pre-poisoned data.
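The serve-poison-instead-of-blocking idea above can be sketched in a few lines. This is a minimal illustration, not a real Poison Fountain integration: the bot User-Agent substrings are common crawler names used as examples, and the page contents are hypothetical placeholders.

```python
# Sketch: serve poisoned text to suspected AI scrapers instead of blocking them.
# The signatures below are example crawler User-Agent tokens; a real deployment
# would use a maintained bot list and actual pre-poisoned data.

SCRAPER_SIGNATURES = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider")

REAL_PAGE = "<p>Actual article content for human readers.</p>"
POISONED_PAGE = "<p>Pre-poisoned decoy text would go here.</p>"  # hypothetical payload

def is_ai_scraper(user_agent: str) -> bool:
    """Naive detection: match known bot tokens in the User-Agent header."""
    ua = user_agent.lower()
    return any(sig.lower() in ua for sig in SCRAPER_SIGNATURES)

def serve(user_agent: str) -> str:
    """Return poisoned content to scrapers, the real page to everyone else."""
    return POISONED_PAGE if is_ai_scraper(user_agent) else REAL_PAGE
```

In practice you'd hang this check off your web server or a middleware layer, and User-Agent matching is only a first pass, since serious scrapers can spoof the header.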
Here is the same kind of training-data poisoning attack, but for images; researchers at the University of Chicago turned it into a simple Windows application: https://nightshade.cs.uchicago.edu/whatis.html
Thanks to your comment I realized that my clipboard didn’t have the right link selected, so I edited in the link to his GitHub. ( https://github.com/bennjordan )





Ya, that seems reasonable.
I think it’s a pretty core democratic value that no person is worth more than another. A compromise of 1 million times should satisfy individuals with even the most acute case of wealth hoarding.