I mean, it’s kind of obvious when you think about it: they’re giving their LLMs simulators, access to tests, etc. ChatGPT, for example, can run code in a Python environment and detect errors. But it obviously can’t know what the intention was, so it’s inevitably going to stop at its first “working” result.
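To illustrate what I mean (purely a hypothetical sketch, not how any vendor actually implements this): a retry loop like the one below accepts the first candidate that exits cleanly, so “working” just means “didn’t crash,” not “did what the user intended.”

```python
import subprocess
import sys

def naive_fix_loop(candidate_sources):
    # Hypothetical sketch of an "agentic" retry loop: execute each
    # LLM-generated candidate and accept the first one that exits cleanly.
    for src in candidate_sources:
        result = subprocess.run(
            [sys.executable, "-c", src],
            capture_output=True,
            text=True,
        )
        if result.returncode == 0:
            # Stops at the first run with no error, even if the output
            # is wrong: the loop has no notion of the user's intent.
            return src, result.stdout
    return None, None

# The second candidate "works" (no exception) but computes the wrong
# thing; the loop happily accepts it anyway.
candidates = [
    "print(1 / 0)",      # crashes, so it gets rejected
    "print(2 + 2 * 0)",  # runs fine, wrong answer, gets accepted
]
print(naive_fix_loop(candidates))
```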
Of course, I’m sure further issues will come from incestuous code, since AIs train on all publicly listed GitHub code. Vibe coders are starting lots of “projects” that they upload to GitHub, so a new AI can pick up all the mistakes of its predecessors on top of making its own new ones.