

Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.
Spent many years on Reddit before joining the Threadiverse as well.


Which, as I said, seems strange. Why don’t those businesses just download the torrents?


Seems strange. Anna’s Archive makes their collection available for bulk download as torrent files; they shouldn’t need to “cut a deal” for access to that. Just download the torrent and you’ve got the whole collection available locally.


If you look at the numbers in the article, the majority “broke even,” but significantly more companies experienced gains from AI than experienced losses. The headline is crafted to bait clicks.


Only 12 percent reported both lower costs and higher revenue, while 56 percent saw neither benefit. Twenty-six percent saw reduced costs, but nearly as many experienced cost increases.
So 38% saw benefits from AI, whereas “nearly” 26% saw cost increases from it. One could just as easily write the headline “More companies experience increased benefits from AI than experience increased costs” based on this data but that headline wouldn’t get so many clicks.
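For what it’s worth, the arithmetic behind that 38% figure, laid out explicitly (assuming, as the comment above does, that the 12% and 26% groups are non-overlapping categories in the survey):

```python
# Sanity check on the quoted survey figures.
both_benefits = 12   # percent reporting lower costs AND higher revenue
reduced_costs = 26   # percent reporting reduced costs only
neither = 56         # percent reporting neither benefit

saw_some_benefit = both_benefits + reduced_costs
print(saw_some_benefit)  # 38
```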


I’d consider that a “lose” condition.
It’s possible for everyone to lose a war.


Reddit isn’t a court of law; mods don’t really have to follow any particular rules other than “be profitable for Reddit Inc.”


YouTube isn’t the way you think it should be, though.


By some standards WWIII is already in progress. And no, America isn’t winning. Its power and influence are contracting rapidly.


“Why did they take that feature away? I was busy abusing it!”
I have a sneaking suspicion that the vast majority of the people raging about AIs scraping their data are not raging about it being done inefficiently.
You’re thinking of “model decay”, I take it? That’s not really a thing in practice.
Raw materials to inform the LLMs constructing the synthetic data, most likely. If you want it to be up to date on the news, you need to give it that news.
The point is not that the scraping doesn’t happen; it’s that the data is already heavily processed and filtered before it reaches the LLM training step. There’s a ton of “poison” in that data naturally already. Early LLMs like GPT-3 just swallowed the poison and muddled on, but researchers have since learned how much better LLMs can be when trained on cleaner data, so they already take steps to clean it up.
I have no idea what “established means” would be. In the particular case of the Fediverse it seems impossible: anyone can set up their own instance specifically intended for harvesting comments and use that. The Fediverse is designed specifically to publish its data openly for others to use.
Are you proposing flooding the Fediverse with fake bot comments in order to prevent the Fediverse from being flooded with fake bot comments? Or are you thinking more along the lines of that guy who keeps using “Þ” in place of “th”? Making the Fediverse too annoying to use for bot and human alike would be a fairly Pyrrhic victory, I would think.
A basic Google search for “synthetic data llm training” will give you lots of hits describing how the process goes these days.
Take this as “defeatist” if you wish; as I said, it doesn’t really matter. In the early days of LLMs, when ChatGPT first came out, the strategy for training these things was to dump as much raw data on them as possible and hope that sheer quantity let the LLM figure something out. Since then it’s been learned that quality beats quantity, so training data is far more carefully curated these days. Not because there’s “poison” in it, but simply because curation results in better LLMs. Filtering out poison happens as a side effect.
It’s like trying to contaminate a city’s water supply by peeing in the river upstream of the water treatment plant drawing from it. The water treatment plant is already dealing with all sorts of contaminants anyway.
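To make the “water treatment plant” analogy concrete, here’s a toy sketch of the sort of deduplication and heuristic quality filtering that training pipelines apply to scraped text before it ever reaches an LLM. All the names, thresholds, and heuristics here are made up for illustration; real pipelines are vastly more elaborate.

```python
import hashlib

def quality_score(doc: str) -> float:
    """Crude heuristic: penalize very short docs and low word diversity."""
    words = doc.split()
    if len(words) < 5:
        return 0.0
    return len(set(words)) / len(words)  # repeated-word spam scores low

def clean_corpus(docs, min_score=0.5):
    seen = set()
    kept = []
    for doc in docs:
        digest = hashlib.sha256(doc.strip().lower().encode()).hexdigest()
        if digest in seen:        # drop exact duplicates
            continue
        seen.add(digest)
        if quality_score(doc) < min_score:  # drop low-quality "poison"
            continue
        kept.append(doc)
    return kept

corpus = [
    "The treatment plant filters contaminants before distribution.",
    "The treatment plant filters contaminants before distribution.",  # duplicate
    "spam spam spam spam spam spam",                                  # low diversity
    "ok",                                                             # too short
]
print(clean_corpus(corpus))  # only the first sentence survives
```

Deliberately poisoned text mostly just falls into the same buckets this kind of filtering already catches.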
I think it’s worthwhile to show people that views outside of their like-minded bubble exist. One of the nice things about the Fediverse over Reddit is that the upvote and downvote tallies are both shown, so we can see that opinions are not a monolith.
Also, the point of engaging in Internet debate is never to convince the person you’re actually talking to; that almost never happens. The point is to present convincing arguments for the less-committed casual readers who are lurking rather than participating directly.
Doesn’t work, but if it makes people feel better I suppose they can waste their resources doing this.
Modern LLMs aren’t trained on just whatever raw data can be scraped off the web any more. They’re trained on synthetic data that’s prepared by other LLMs and carefully crafted and curated. Folks here are still assuming GPT-3 is state of the art.


Carney didn’t “clap for the attack on Venezuela.” He called for international law to be followed, which should be an obvious rebuke to anyone who isn’t at a Trump level of understanding of how diplomacy is done.


People have been doing this to “protest” AI for years already. AI trainers already do extensive filtering and processing of their training data before they train on it; the days of simply turning an AI loose on Common Crawl and hoping to get something out of that are long past. Most AIs these days train on synthetic data, which isn’t even taken directly from the web.
So go ahead and do this, I suppose, if it makes you feel better. It’s not likely to have any impact on AIs though.
Ah, low numbers of seeds. Must’ve just not wanted to wait.