When I was young and starting out with computers, programming, BBSes, and later the early internet, technology was something that expanded my mind, helped me research, learn new skills, meet people, and have interesting conversations. It was decentralized and put power into the hands of the little guy, who could start his own business venture with his PC or expand his skillset.
Where we are now with AI, the opposite seems to be happening. We are asking AI to do things for us rather than learning how to do things ourselves. We are losing our research skills. Many people are talking to AIs about their problems instead of to other people. And AI will take away our jobs and centralize all power in a handful of billionaire sociopaths with robot armies to carry out whatever nefarious deeds they want to do.
I hope we somehow make it through this part of history with some semblance of freedom and autonomy intact, but I’m having a hard time seeing how.


…And what about non-LLM models like diffusion models, VL-JEPA, SSMs, VLAs, and SNNs? Just because you are ignorant of what’s happening in the industry and repeating a narrative that worked two years ago doesn’t make it true.
And even with LLMs, if they aren’t “thinking” but produce results as good as or better than real human “thinking” in major domains, does it even matter? The fact is that there will be many types of models, working in very different ways, and together they will beat humans at tasks that are uniquely human.
Go learn about ARC-AGI and see the progress being made there. Yes, it will take a few more iterations of the benchmark to really challenge humans at the most human tasks, but at the rate they are going, that’s only a few years away.
Or just stay ignorant and keep repeating your little mantra so that you feel okay. It won’t change what actually happens.
Yeah, those also can’t think, and that will not change soon.
The real problem, though, is not whether an LLM can think; it’s that people will interact with it as if it can, and will let it do the decision-making even when it’s not far from throwing dice.
We don’t even know what “thinking” really is, so that is just semantics. If it performs as well as or better than humans at certain tasks, it really doesn’t matter whether it’s “thinking” or not.
I don’t think people primarily want to use it for decision-making anyway. For me it just turbocharges research: it compiles stuff quickly from many sources, writes code for small modules quite well, generates images for presentations, handles more complex data munging from spreadsheets, and even saved me a bunch of time by near-perfectly converting a 50-page handwritten ledger to Excel…
None of that requires decision-making, but it saves a bunch of time. Honestly, I’ve never asked it to make a decision, so I have no idea how it would perform… I suspect it would describe the pros and cons rather than actually try to decide something.