

16GB of VRAM 🙄
Yawn
Anthropic didn’t lose their lawsuit. They settled. Also, that was about their admission that they pirated zillions of books.
From a legal perspective, none of that has anything to do with AI.
Company pirates books -> gets sued for pirating books. Company settles with the plaintiffs.
It had no legal impact on training AI with copyrighted works or what happens if the output is somehow considered to be violating someone’s copyright.
What Anthropic did with this settlement is attack their Western competitor: OpenAI, specifically. Because Google already settled with the Authors Guild for their book scanning project over a decade ago.
Now OpenAI is likely going to have to pay the Authors Guild too. Even though they haven’t come out and openly admitted that they pirated books.
Meta is also being sued for the same reason but they appear to be ready to fight in court about it. That case is only just getting started though so we’ll see.
The real, long-term impact of this settlement is that it just became a lot more expensive to train an AI in the US (well, the West). Competition in China will never have to pay these fees and will continue to offer their products to the West at a fraction of the cost.
Any company that sees themselves “in the business of selling printers and ink” thinks they’re strictly “in the business of selling ink.” Because selling ink is incredibly profitable.
This is exactly the type of situation that could be fixed with government regulation: Make it illegal for printer companies to sell ink/toner!
The day such regulation came into effect, all printers would double (not triple!) in price and we’d have like three standard cartridge sizes that you could source anywhere. They’d all be refillable and the world would be a better place.
You’ve obviously never tried to get any given .NET project working on Linux. There’s .NET and then there’s .NET Core, which is a mere subset of .NET.
Only .NET Core runs on Linux, and nobody uses it. The list of .NET stuff that will actually run on .NET Core (alone) is a barren wasteland.
If it’s written in C#, that’s a huge turn-off though, because that means it’s likely to only run on Windows.
I mean, in theory it could run on Linux, but that’s a very rare situation. Almost everything ever written in C# uses Windows-specific APIs, and basically no one installs the C# runtime on Linux anymore. It’s both enormous and a pain in the ass to get working properly for any given C# project.
As an information security professional and someone who works on tiny, embedded systems, knowing that a project is written in Rust is a huge enticement. I wish more projects written in Rust advertised this fact!
Benefits of Rust projects—from my perspective:
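To give just one example of what I mean (a tiny sketch of my own, not from any particular project): the Rust compiler flatly refuses to build the kind of use-after-free bug that keeps security people up at night.

```rust
fn main() {
    let data = vec![1, 2, 3];

    // Borrow the first element of the vector...
    let first = &data[0];

    // ...and the borrow checker now refuses to let `data` be freed (or
    // mutated) while `first` is still in use. Uncommenting the next line
    // is a compile-time error (E0505), not a use-after-free in production:
    // drop(data);

    println!("first element: {first}");
} // `data` is dropped here, after the last use of `first` -- no dangling reference possible
```

The same guarantee holds with no runtime and no OS (no_std), which is a big part of why it fits tiny embedded targets so well.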
Also, stuff that gets mis-labeled as AI can be just as dangerous. Especially when you consider that the AI detection might use such labels to train itself. So someone whose face is weirdly symmetrical might get marked as AI and then have a hard time applying for jobs, purchasing things, getting credit, etc.
I want to know what counts as AI. Using AI to remove the background of an image, or just to remove someone standing in the background, is technically generative AI, but that’s something you can do in any photo editor anyway with a bit of work.
Meh. Nothing in this article is strong evidence of anything. They’re only looking at a tiny sample of data and wildly speculating about which entry-level jobs are being supplanted by AI.
As a software engineer who uses AI, I fail to see how AI can replace any given entry-level software engineering position. There’s no way! Any company that does that is just asking for trouble.
What’s more likely is that AI is making senior software engineers more productive, so they don’t need to hire more developers to assist them with more trivial/time-consuming tasks.
This is a very temporary thing, though. As anyone in software can tell you: Software only gets more complex over time. Eventually these companies will have to start hiring new people again. This process usually takes about six months to a year.
If AI is causing a drop in entry-level hiring, my speculation (which isn’t as wild as the article’s, since I’m actually there on the ground using this stuff) is that it’s just a temporary blip while companies work out how to take advantage of the slightly enhanced productivity.
It’s inevitable: They’ll start new projects to build new stuff because now—suddenly—they have the budget. Then they’ll hire people to make up the difference.
This is how companies have worked since the invention of bullshit jobs. The need for bullshit grows with productivity.
AI adds too many details. When a person draws an anime/cartoon character they will usually put in minimal details, or they’ll simply paste the character onto an existing background (that could’ve been drawn by a different artist).
AI doesn’t have human limitations so it’ll often add a ton of unnecessary details to a given scene. This is why the most convincing AI-generated anime pictures are of one or two characters in a very simple setting (e.g. a plain street/sidewalk) or even a white or gradient background.
Humans can tell when art was put together by different artists, such as when the background is in a completely different style. AI doesn’t differentiate like that and will render the entire image in the style given by the prompt. So it’ll all look like it was “drawn” in the same exact style… even though anime/cartoons IRL aren’t that uniform.
Incorrect. No court has ruled in favor of any plaintiff bringing a copyright infringement claim against an AI LLM. Here’s a breakdown of the current court cases and their rulings:
https://www.skadden.com/insights/publications/2025/07/fair-use-and-ai-training
In both cases, the courts have ruled that training an LLM with copyrighted works is highly transformative and thus fair use.
The plaintiffs in one case couldn’t even come up with a single iota of evidence of copyright infringement (from the output of the LLM). This—IMHO—is the single most important takeaway from the case, because the only thing that really matters is the point where the LLM generates output. That is, the point of distribution.
Until an LLM is actually outputting something, copyright doesn’t even come into play. Therefore, the act of training an LLM is just like I said: A “Not Applicable” situation.
Training an AI is orthogonal to copyright since the process of training doesn’t involve distribution.
You can train an AI with whatever TF you want without anyone’s consent. That’s perfectly legal fair use. It’s no different than if you copy a song from your PC to your phone.
Copyright really only comes into play when someone uses an AI to distribute a derivative of someone’s copyrighted work. Even then, it’s really only the end user who is capable of doing such a thing, by uploading the output of the AI somewhere.
To be fair, they probably don’t want anything to do with a BootLoop.
Zawinski’s law: Every program attempts to expand until it can read mail. Those programs which cannot expand are replaced by ones which can.
This is just the modern equivalent: Intra-site messaging.
I guess it’s just too late for all those children that viewed porn. The piles of their dead bodies must be enormous!
Linux users: “See what we mean?”
Windows users: “La la la! I can’t hear you! Losing my data is clearly better than having to learn something new!”
Honestly—other than the multiple clipboards thing—it sounds like they just want KDE.
(And Pipewire)
Gaming is about to start requiring more VRAM too because of local AI. The two will become inseparable.