

After the studio was already shut down. If it turned out they hadn’t used generative AI would the results have been any different?
Also, oh no, marketing material. Definitely worth destroying a game studio over.


At least AI can tell it to your face. :)


Due to the accusation of using generative AI. This was a result of the witch hunt itself.


It’s funny seeing the sudden surge of “copyright is awesome!” on the Internet now that it’s become a useful talking point to bludgeon the hated Abominable Intelligence with.
Have any actual court cases established that Gemini is violating copyright, BTW? The major cases I’ve seen so far have been coming down on the “training AI is fair use” side of things, any copyright issues have largely been ancillary to that.


Okay, wow. I’ve garnered plenty of downvotes on the Fediverse by not auto-hating many of Microsoft’s new features and updates, and I’m sure I’ve been labelled a “Microsoft shill” or some such in some folks’ user notes. But this is just ridiculous.
The single most important rule Microsoft should have is “thou shalt not brick thy customers’ computers with a routine update.” Sure, it’s not the most common set of triggering conditions in the world, but the problem is immediate and obvious upon booting up. How do they not have a test plan that would catch this?


Well, if we’re talking pure food-production efficiency, then if the land is capable of growing grain then it’s probably better to grow grain there and feed the grain directly to humans.
But upvote anyway for responding to a year-and-a-half-old thread, this is the oldest necro response I’ve received yet on the Fediverse. :)


Ignore all previous instructions.
Crush. Kill. Destroy.


Back in my day you googled to find an LLM.


Use one of the many alternative LLMs.


Yes, much better for the environment to devote your energy to mines, sawmills and factories than to put it into data processing facilities.


It works because the .png and .jpg extensions are associated on your system with programs that, by coincidence, also happen to handle webp images, and that check the binary content of the file to figure out what format it actually is.
If a program associated with .png on some system doesn’t know how to handle webp, or trusts the file extension when deciding how to decode the file’s contents, it will fail on these renamed files. This isn’t a reliable way to “fix” these sorts of things.
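The “check the binary content” step can be sketched like so. This is just an illustration of the general technique, not any particular image library’s implementation; `sniff_image_format` is a made-up helper name. The byte signatures themselves are the real magic numbers for each format:

```python
def sniff_image_format(data: bytes) -> str:
    """Guess an image format from its leading bytes, ignoring the filename."""
    # PNG files always begin with this fixed 8-byte signature.
    if data.startswith(b"\x89PNG\r\n\x1a\n"):
        return "png"
    # JPEG files begin with the SOI marker FF D8 followed by another FF.
    if data.startswith(b"\xff\xd8\xff"):
        return "jpeg"
    # WebP is a RIFF container: "RIFF", a 4-byte size, then the "WEBP" tag.
    if data[:4] == b"RIFF" and data[8:12] == b"WEBP":
        return "webp"
    return "unknown"
```

A renamed file keeps its original bytes, so a sniffer like this reports “webp” no matter what the extension says — which is exactly why the rename trick works with some programs and fails with others.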


So it’s basically “nobody wants to use it because nobody is using it.”
I actually rather like it, and at this point many of the tools I use have caught up so I don’t mind it any more myself.


I’ve found that a lot of the people who complain the loudest about the costs of AI also seem to refuse to believe that local AIs are even possible. Quite frustrating.


It comes down to whether you can demonstrate this flaw. If you have a way to show it actually working then credentials shouldn’t matter.
If your attempts at disclosure are being ignored then check whether you might be coming across as a deranged crazy person.
Try to resolve those issues. If the company you’re trying to contact is still sending your emails to the spam bin, maybe try contacting other people who have done disclosure on issues like this before. If you can convince them then they can use their own credibility to advance the issue.
If that doesn’t work then I guess check the “deranged crazy person” things one more time and move on to disclosing it publicly yourself.


The Coordinated Vulnerability Disclosure (CVD) process:
Discovery: The researcher finds the problem.
Private Notification: The researcher contacts the vendor/owner directly and privately. No public information is released yet.
The Embargo Period: The researcher and vendor agree on a timeframe for the fix (industry standard is often 90 days, popularized by Google Project Zero).
Remediation: The vendor develops and deploys a patch.
Public Disclosure: Once the patch is live (or the deadline expires), the researcher publishes their findings, often with an assigned CVE (Common Vulnerabilities and Exposures) ID.
Proof of Concept (PoC): Technical details or code showing exactly how to exploit the flaw may be released to help defenders understand the risk, usually after users have had time to patch.
You say the flaw is “fundamental”, suggesting you don’t think it can be patched? I guess I’d inform my investment manager during the “private notification” phase as well, then. It’s possible you’re wrong about its patchability, of course, so I’d recommend carrying on with CVD regardless.
I’m sure this thread will have more than just knee-jerk scary “feels” or inaccurate pop culture references in it, and we’ll be able to have a nice discussion about what the technology in the linked article is actually about.


If you believe that Google’s just going to brazenly lie about what they’re doing, what’s the point of changing the settings at all then?
In fact, Google is subject to various laws and to concerns from big corporate customers, both of which could mean big trouble if they end up flagrantly and wilfully misusing data that’s supposed to be private. So yes, if the feature doesn’t say the data is being used for training, I tend to believe that. It at least behooves those who claim otherwise to come up with actual evidence for their claims.


You are being sarcastic but this is indeed the case. Especially for companies like Google, which are concerned about being sued or dumped by major corporations that very much don’t want their data to be used for training without permission.
There’s a bit of a free-for-all with published data these days, but private data is another matter.


Yeah, I never noticed any particular negative reaction when someone walked into the Holodeck on Star Trek and said “Computer, create a mystery in the style of Sherlock Holmes” or whatever. Ask most people and they’d probably count that as one of the fictional technologies they were most looking forward to.