

We are?


It is obvious, but we have so many liars lying to everyone and each other about AI that they get away with it. Skilled bullshitters shitting up clouds of smoke and using every manipulation tactic in the world.
Some hard evidence makes it easier to prevent their damage.


Don’t ask forums for advanced advice about your business; hire a consultant, jesus christ.


I keep seeing the “it’s good for prototyping” argument, both posted here and in real life.
For non-coders it holds up, if you ignore the security risk of someone running code they have no idea what it does.
But coming from developers, it smells of bullshit. The things they show are always the result of a week of vibing that produced something I could hack up in a weekend. And they could too, if they invested a few days in learning e.g. HTML5, basic CSS, and the fetch docs. That learning cost is a one-time cost: later prototypes they can just bang out. And then they also have the understanding needed to turn the prototype into a proper product if it pans out.
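For illustration, a minimal sketch of the kind of fetch-and-render code such a prototype needs (the endpoint and field names here are hypothetical, just to show the shape of it):

    // Hypothetical sketch: fetch JSON from /api/items and list the names on the page.
    async function loadItems(): Promise<void> {
      const res = await fetch("/api/items"); // plain GET, per the Fetch API docs
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      const items: { name: string }[] = await res.json();
      const list = document.querySelector("ul");
      if (!list) return; // expects a <ul> somewhere in the HTML
      for (const item of items) {
        const li = document.createElement("li");
        li.textContent = item.name;
        list.appendChild(li);
      }
    }
    loadItems().catch(console.error);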


Instead of waiting a few more years for Linux to reach the level of ease-of-use needed to overtake Windows, MS is being sporty by moving the goal closer.


I can totally believe that nobody else felt like paying out of their nose to have a Guinness employee fly over and look at a small computer and go “yep, it’s small”.


Don’t be so negative! It’s also found a huge market in scams. Both for stealing celebrity likenesses, and making pictures and video of nonexistent products.


They try to do security the same way, by adding “pwease dont use dangerous shell commands” to the system prompt.
Security researchers have dubbed it “Prompt Begging”.
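For contrast, a rough sketch (all names made up) of the difference between begging in the prompt and actually enforcing the rule in the tool layer, where the model can’t talk its way past it:

    // "Prompt begging": a plea the model is free to ignore.
    const systemPrompt = "You are a helpful agent. Pwease don't use dangerous shell commands.";

    // Enforcement: the tool wrapper refuses, regardless of what the model asks for.
    const ALLOWED_BINARIES = new Set(["ls", "cat", "git"]); // hypothetical allowlist

    function runShellTool(command: string): string {
      const binary = command.trim().split(/\s+/)[0];
      if (!ALLOWED_BINARIES.has(binary)) {
        return `refused: "${binary}" is not on the allowlist`;
      }
      // ...hand the command to the real executor here...
      return `would run: ${command}`;
    }

    console.log(systemPrompt);             // the begging
    console.log(runShellTool("rm -rf /")); // refused: "rm" is not on the allowlist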


Agree, the term is misleading.
Talking about hallucinations lets us treat undesired output as a completely different thing from desired output, which implies it can be handled somehow.
The problem is that the LLM can only ever output bullshit. Often the bullshit is decent and we call it output, and sometimes the bullshit is wrong and we call it hallucination.
But it’s the exact same thing from the LLM. You can’t make it detect hallucinations or promise not to produce them.


Wow so next time I have a burning need for agentic experiences in my life I know a product exists to serve my need.


The awards are fun when I compare my vote list with my friends’ vote lists, but the actual awards are just annoyingly pointless.
The winner of each category is immediately predictable: it will only ever be the game with the most players, and the category doesn’t matter.
If GTA6 comes out in 2026 and posts “we are hoping to win the Emotional Indie Platformer Award”, it’s going to win the Emotional Indie Platformer Award.


It’s baffling how people in the US accept and even adopt the language of NDAs being about trade secrets. They aren’t. They’re a weapon to make it harder for people to leave.


TBH he probably knows he is lying, but is making confusing claims in order to push some other agenda.
Probably firing core people to save money while maintaining plausible-ish deniability that it won’t do irreparable damage.
Or just getting himself approval to amass subordinates for a little kingdom by displaying an ambitious “plan”.


The expensive autocomplete can’t do this.
All the AI marketing wants us to believe that spoon technology is this close to space flight. We just need to engrave the spoons better. And gold-plate them thicker.
The dude who wrote that doesn’t understand how LLMs work, how Rust works, how C works, and clearly knows jack shit about programming in general.
Rewriting from one paradigm to another isn’t something you can delegate to a million monkeys shitting into typewriters. The core, time-consuming part of the work requires skilled architectural coding.


Yeah what you say makes sense to me. Having it make a “wrong start” in something new is useful, as it gives you a lot of the typical structure, introduces the terminology, maybe something sorta moving that you can see working before messing with it, etc.


This was a very directed experiment in purely LLM-written maintainable code.
Writing experiments and proofs of concept, even without skill, is a different calculation and can make more sense.
Having it write a “starting point” and then taking over is also a different thing that can make more sense. That requires a coder with skill; you can’t skip it.


I’ve been coding for a while. I made an honest, eager attempt at building a real, functioning thing with all code written by AI: a breakout clone using SDL2, with music.
The game should look good, play well, have cool effects, and be balanced. It should have an attract screen, scoring, a win state, and a lose state.
I also required the code to be maintainable, meaning I should be able to look at every single line and understand it well enough to defend its existence.
I did make it work. And honestly, Claude did better than expected. The game ran well and was fun.
But: the process was shit.
I spent 2 days and several hundred dollars babysitting the AI to get something I could have done in 1 day, including learning SDL2.
Everything that turned out well turned out well because I brought years of skill to the table and could see when Claude was coding itself into a corner, then tell it to break code up into modules, collate globals, remove duplication, pull out abstractions, etc. I had to detect all of that and instruct it on how to fix it. Until I did, it kept adding and re-adding bugs, because it had made so much shittily structured code that it was confusing itself.
TL;DR: an LLM can write maintainable code if given the full, constant attention of a skilled coder, at 40% of that coder’s speed.


No way to know for sure based on this. If you used any app that “works with” WhatsApp in any way, you could be affected.


Fun experiment: Ask Google if there are more stars in the solar system than grains of sand in a glass of water. See the AI confidently say “yes” and then refresh the query and see it confidently say “no”.