So the development of inorganic intelligence, considered by many an inflection point in human civilisation, is to be handed to business graduates, who are historically proven to be capable of any level of atrocity in the name of corporate greed. America, fuck yeah.
Greed, fuck yeah. Don’t fool yourself. The USA lost the exclusivity deal on unchecked corpo greed a long time ago. This is a global issue now.
Always has been.
Yeah, the American tag was just a throwaway line. Greed, unchecked, insane, and self-harming, has always been with us. We let it sit with us around our campfires like wolves, but unlike the wolves, we never tamed it.
Removed by mod
Oh boy, how surprising.
The bait and switch classic.
I’m clutching my pearls as I type this.
No problem, after they release all the data collected under the excuse of public good and progress.
Sam:
“ClosedAI” rebrand when?
🤣
NopeAI
Open Your Wallet AI
I thought they were a for-profit company all this time.
Pretty much non-profit in name only. Some shady hybrid model.
OpenAI sure seems like a case study in how to grift everyone by masquerading as a non profit whilst actually enriching yourself and your shareholders, causing a whole new class of societal problems in the process.
Meh. I don’t think anyone that matters was really fooled.
Stop depending on these proprietary LLMs. Go to [email protected].
There are open-source LLMs you can run on your own computer if you have a powerful GPU. Models like OLMo and Falcon are made by true non-profits and universities, and they reach roughly GPT-3.5-level capability.
There are also open-weight models that you can run locally and fine-tune to your liking (although these don’t come with open-source training data or code). The best of these (Alibaba’s Qwen, Meta’s Llama, Mistral, DeepSeek, etc.) match and sometimes exceed GPT-4o capabilities.
The issue with that approach, as you’ve noted, is that it excludes people with less powerful computers. A few models, such as TinyLlama, can run on an underpowered machine, but most users want a model that can handle a plethora of tasks efficiently, like ChatGPT can, I daresay. For people with such hardware limitations, I believe the only option is relying on models that can be accessed online.
For that, I would recommend Mistral’s Mixtral models (https://chat.mistral.ai/) and the surfeit of models available on Poe AI’s platform (https://poe.com/). In particular, I use Poe to interact with the surprising diversity of Llama models they have available on the site.
There are open-source LLMs you can run on your own computer if you have a powerful GPU.
What defines powerful? What if you don’t have the necessary hardware?
You can check Hugging Face’s website for specific requirements. I will warn you that a lot of home machines don’t meet the minimum requirements for many of the models available there. TinyLlama can run on most underpowered machines, but its capabilities are very limited, and it would fall short as an everyday AI chatbot. You can check my other comment too for other options.
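For a ballpark answer to “what defines powerful”, you can estimate memory needs from parameter count and precision alone. This is a minimal sketch using the common parameters-times-bytes rule of thumb; it ignores KV-cache and activation overhead, so real requirements run somewhat higher, and the model sizes listed are just illustrative examples:

```python
# Rough VRAM estimate for hosting an LLM's weights locally.
# Rule of thumb only: actual usage adds overhead for the KV cache
# and activations, so treat these figures as lower bounds.

def vram_gib(n_params_billion: float, bits_per_weight: int) -> float:
    """Approximate GiB of memory needed just to hold the weights."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

# Compare full-precision (16-bit) against 4-bit quantization.
for name, params in [("1.1B (TinyLlama-class)", 1.1),
                     ("7B", 7.0),
                     ("70B", 70.0)]:
    print(f"{name}: fp16 ~ {vram_gib(params, 16):.1f} GiB, "
          f"4-bit ~ {vram_gib(params, 4):.1f} GiB")
```

On that math, a 7B model at 16-bit wants roughly 13 GiB of VRAM, which is why 4-bit quantization (around 3.3 GiB for the same model) is what makes consumer GPUs viable at all.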
And there are also free, online hosted instances of those same LLMs in a (relatively speaking) privacy-protecting format from DuckDuckGo, for anyone who doesn’t have a powerful GPU :)
I’m not so sure about the privacy of any of this.
Interesting. So they mix the requests between all DDG users before sending them to “underlying model providers”. The providers like OAI and Anthropic will likely log the requests, but mixing is still a big step forward. My question is what do they do with the open-weight models? Do they also use some external inference provider that may log the requests? Or does DDG control the inference process?
All requests are proxied through DuckDuckGo, and all personalized user metadata (e.g. IPs, any sort of user/session ID, etc.) is removed.
They have direct agreements with the providers not to train on or store user data (the training part is specifically relevant to OpenAI and Anthropic), with a requirement that all information be deleted within 30 days once it is no longer needed to provide responses.
For the Llama and Mixtral models, they host them on together.ai (an LLM-focused cloud platform), which is bound by the same data-privacy requirements as OpenAI and Anthropic.
Recent chats that you save for later are stored locally (instead of on their servers), and once you exceed 30 saved conversations, the oldest is automatically purged from your device.
Obviously there are weaker technical privacy guarantees than with a local model, but for when local isn’t practical or possible, I’ve found it’s a good option.
That’s very open of them
Open to All Income.
They’ve been acting like that from the start 🤷🏻‍♂️
There was never another outcome.
Capitalism breeds one thing, and it certainly isn’t innovation, and it most definitely isn’t not-for-profit innovation.
Capitalism is extremely good at breeding superficial, go-to-market innovation. It’s less good at funding the pure research that leads to major discoveries. But once it gets closer to engineering than to science, it’s highly effective. Even Marx commented on that.
So, ads in chat now?
‘subtle’ product recommendations
I’m Open AI and this is my favorite shop in the Citadel.
No kidding. 🙀
Shocking nobody
Well, apart from the people like me who thought they had always been one because they acted exactly like one.
They should also change their name to ClosedAI while they’re at it.
Ruh roh!