



Comcast—in the top ten of the shittiest companies of all time that no one wants to have to deal with—is surprised that their “new” deal of, “be slightly less villainous, and expect all our problems to go away” isn’t working.


In hell, they just use Crow Pilot for this sort of thing.


Me, at life’s exit interview…
“Sooo… in regards to my, er, contributions to the good of the world… Does open source software count? What about all those times I made witty comments that made a few people smile? 😬”


If there’s so much gold you can pave the streets with it, it’s not very valuable.
Having said that, if we’re all living in a simulation, then having our “streets” (cables) paved with gold sounds fantastic 👍
Only thing better would be fiber optic cable, but that might not be possible since you can’t carry power over fiber 🤷
Aside: You can generate power from light traveling through fiber optic cable, but it’s not the same thing as carrying power efficiently over copper (fun fact: gold is actually a slightly worse conductor than copper; it just doesn’t corrode).


Speaking of assumptions about the afterlife: people who believe in reincarnation typically believe that after you die, you get reincarnated. The assumption there is that it happens right away. What if it happens, like, a thousand years after you die, or maybe after an entire universe goes by first?


Well, there’s five beats, so…
Ts-che-chu-chu-chk

Great! Just give us all these locations with at least 24 hours’ notice so we can make sure to comply 👍


The Void already has claims to all of us. The Void actually enjoys and needs the screaming, so it’ll be patient and wait until your warranty runs out: when your particular version stops getting patches and reaches EOL.
When that happens, it’ll welcome you, and you’ll get sent to /dev/random instead of the recycle bin or the trash can.
Note: You’ll have to wait for enough entropy in order to get to your next destination. How long that takes depends on how many people are screaming into the void at that time 🤷
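Fun fact for the nerds: the entropy wait is real. On older Linux kernels (pre-5.6), reads from /dev/random block until the kernel’s entropy pool refills, while /dev/urandom never blocks. A tiny sketch you can try on a Linux box (the “soul” variable is just me keeping the theme going):

    import os

    # On pre-5.6 Linux kernels, /dev/random blocks when the entropy
    # pool runs dry; modern kernels treat it much like /dev/urandom.
    fd = os.open("/dev/random", os.O_RDONLY)
    try:
        soul = os.read(fd, 16)  # may block until enough entropy exists
        print(f"Next destination: {soul.hex()}")
    finally:
        os.close(fd)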


This is super interesting. I think academia is going to need to clearly divide “learning” into two categories:
If you’re being tested on how well you memorized something, using AI to answer questions is cheating.
If you’re being tested on how well you understand something, using AI during an exam isn’t going to help you much unless it’s something that could be understood very quickly. In which case, why are you bothering to test for that knowledge?
If a student has an hour to answer ten questions about a complex topic, and they can somehow understand it well enough by asking AI about it, then either it wasn’t worth teaching or that student is wasting their time in school; they clearly learn better on their own.


There’s a whole community about the body being just a shell and the brain being an egg that needs to crack: EGG IRL


I used to live down the street from a great big data center. It wasn’t a big deal. It’s basically just a building full of servers with extra AC units.
Inside? Loud AF (think jet engine; wear hearing protection).
Outside: the hum of lots of industrial air conditioning units. Only marginally louder than a big office building.
A data center this big is going to have a lot more AC units than normal, but they’ll be spread all around the building. It’s not like living next to an airport or busy train tracks (that’s like 100x worse).


No, it could be true. AI—especially with .NET—tends to generate exceptionally verbose code. Especially if you use “AI best practices” such as telling the AI to ensure 100% code coverage. Then there’s the “let’s not use any 3rd-party libraries, because we are Microsoft” angle.
C#/.NET is already one of the most absurdly verbose languages (the only other widely-used language that’s worse is Java). Copilot could easily push it over the top 🤣
All it would take is for Microsoft to have AI rewrite some of the core libraries.


Every modern monitor has some memory in it: the timing controller and image-processing chips need DRAM to function. Not much, but it’s standard DDR3/DDR4 or LPDDR RAM.


No shit. There are easier ways to open the fridge.


unless you consider every single piece of software or code ever to be just “a way of giving instructions to computers”
Yes. Yes I do. That’s exactly what code is: instructions. That’s literally how computers work. That’s what people like me (software developers) do when we write software: We’re writing down instructions.
When you click or move your mouse, you’re giving the computer instructions (well, the driver is). When you press a key, that results in instructions being executed (dozens to thousands of them, actually).
When I click “submit” on this comment, I’m giving a whole bunch of computers some instructions.
Insert the meme: “You mean computers are just running instructions?” “Always have been.”
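Don’t take my word for it; Python ships with a disassembler that will show you the instructions your code turns into. A quick sketch (the function is just a made-up example):

    import dis

    def submit_comment(text):
        return text.strip().lower()

    # Prints the bytecode instructions the interpreter actually runs
    # (exact opcode names vary between Python versions).
    dis.dis(submit_comment)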


In Kadrey v. Meta, a group of authors sued Meta for copyright infringement, but the case was thrown out by the judge because they couldn’t actually produce any evidence of infringement beyond, “Look! This passage is similar.” They asked for more time so they could keep trying thousands (millions?) of different prompts until they finally got one that matched closely enough that they might have some real evidence.
In Getty Images v. Stability AI (UK), the court threw out the case for the same reason: It was determined that even though it was possible to generate an image similar to something owned by Getty, that didn’t meet the legal definition of infringement.
Basically, the courts ruled in both cases, “AI models are not just lossy/lousy compression.”
IMHO: What we really need a ruling on is, “who is responsible?” When an AI model does output something that violates someone’s copyright, is it the owner/creator of the model that’s at fault, or the person who instructed it to do so? Even then, does generating something for an individual even count as “distribution” under the law? I don’t think it does, because to me that’s just like using a copier to copy a book. Anyone can (legally) do that for any book they own, but if they start selling/distributing that copy, then they’re violating copyright.
Even then, there are differences between distributing an AI model that people can run on their own PCs (like Stable Diffusion) vs. using an AI service to do the same thing. The mere fact that a model can be used for infringement should be meaningless, because anything (e.g. a computer, Photoshop, etc.) can be used for infringement. The actual act of infringement needs to be something someone does by distributing the work.
You know what? Copyright law is way too fucking complicated, LOL!


Hmmm… That’s an interesting argument, but it has nothing to do with my comparison to YouTube/Netflix (or any other kind of video) streaming.
If we were to compare a heavy user of ChatGPT to a teenager who spends a lot of time streaming videos, the ChatGPT side of the equation wouldn’t even amount to 1% of the power/water used by streaming. In fact, if you add up the power/water usage of all the popular AI services, it still doesn’t amount to much compared to video streaming.


Sell? Only “big AI” is selling it. Generative AI has infinite uses beyond ChatGPT, Claude, Gemini, etc.
Most generative AI research/improvement is academic in nature, and it’s being done by a bunch of poor college students trying to earn graduate degrees. Their discoveries are then used by big AI to improve their services.
You seem to be arguing from the standpoint that “AI” == “big AI”, but that’s not the case. Research and improvements will continue regardless of whether or not ChatGPT, Claude, etc. continue to exist. Especially image AI, where free, open source models are superior to the commercial products.


but we can reasonably assume that Stable Diffusion can render the image on the right partly because it has stored visual elements from the image on the left.
No, you cannot reasonably assume that. It absolutely did not store the visual elements. What it did was store some floating-point values associated with the keywords the source image had been pre-classified with. During training, it increases or decreases those floating-point values by a small amount each time it encounters other images that use those same keywords.
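To make that concrete, here’s a toy sketch of the idea (nothing like Stable Diffusion’s real architecture, which has billions of weights; the names and numbers are made up purely for illustration):

    # Toy model: a handful of shared weights per keyword.
    # Training nudges shared numbers around; it never stores pixels.
    weights = {"portrait": [0.0] * 4, "sunset": [0.0] * 4}
    LEARNING_RATE = 0.01

    def train_step(keywords, image_signal):
        for kw in keywords:
            for i, target in enumerate(image_signal):
                # Each image pulls the shared weights a tiny bit its way;
                # millions of other images pull those same weights elsewhere.
                weights[kw][i] += LEARNING_RATE * (target - weights[kw][i])

    # One image among millions:
    train_step(["portrait"], [0.9, 0.1, 0.4, 0.7])
    print(weights["portrait"])

The flip side: if a keyword only ever shows up alongside near-identical images, those shared weights converge on that one image, which is exactly the lack-of-diversity problem described below.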
What the examples demonstrate is a lack of diversity in the training set for those very specific keywords. There’s a reason why they chose Stable Diffusion 1.4 and not Stable Diffusion 2.0 (or later versions)… because the model was drastically improved after that. These sorts of problems (with not-diverse-enough training data) are considered flaws by the very AI researchers creating the models. It’s exactly the type of thing they don’t want to happen!
The article seems to imply that this is a common problem that happens constantly and that the companies creating these AI models just don’t give a fuck. This is false. Flaws like this leave your model open to attack (and let competitors figure out your weights; not that it matters with Stable Diffusion, since that version is open source), not just copyright lawsuits!
Here’s the part I don’t get: Clearly nobody is distributing copyrighted images by asking AI to do its best to recreate them. When you do this, you end up with severely shitty hack images that nobody wants to look at. Basically, if no one is actually using these images except to say, “aha! My academic research uncovered this tiny flaw in your model that represents an obscure area of AI research!” why TF should anyone care?
They shouldn’t! The only reason why articles like this get any attention at all is because it’s rage bait for AI haters. People who severely hate generative AI will grasp at anything to justify their position. Why? I don’t get it. If you don’t like it, just say you don’t like it! Why do you need to point to absolutely, ridiculously obscure shit like finding a flaw in Stable Diffusion 1.4 (from years ago, before 99% of the world had even heard of generative image AI)?
Generative AI is just the latest way of giving instructions to computers. That’s it! That’s all it is.
Nobody gave a shit about this kind of thing when Star Trek was pretending to do generative AI in the Holodeck. Now that we’ve got the pre-alpha version of that very thing, a lot of extremely vocal haters are freaking TF out.
Do you want the cool shit from Star Trek’s imaginary future or not? This is literally what computer scientists have been dreaming of for decades. It’s here! Have some fun with it!
Generative AI uses less power/water than streaming YouTube or Netflix (yes, it’s true). So if you’re about to say it’s bad for the environment, I expect you’re just as vocal about streaming video, yeah?


Yeah, it’s a common thought: an afterlife where people gather before going on to the next.
Usually, people assume that the quality of your options for the next life will be based on whatever criteria they themselves thought was most important in life. Someone who went out of their way to be nice will believe it’s based on how nice you were, whereas someone who spent their life accumulating money/power will assume it’s based on that.
For all we know, though, your “afterlife score” could be based on how many different sorts of food you tried, how many buttons you pressed, how far you traveled from where you were born, etc.
I actually have a novel idea about this concept: Dude dies and gets the red-carpet treatment in the afterlife. He’s very happy about it, but he doesn’t understand… He never got married and spent most of his life doing data entry and courtroom stenography.
Turns out, he got the high score in “button pressing.” He’s at the top of the leaderboard and this qualifies him for all sorts of “premium” reincarnation options. Not only that, but the gods intend to put his talents to use right away on “pressing issues.”