• TheFogan@programming.dev

    I mean it’s kind of obvious… they are giving their LLMs simulators, access to tests, etc. ChatGPT, for instance, can run code in a Python environment and detect errors. But obviously it can’t know what the intention is, so it’s inevitably going to stop when it gets its first “working” result.

    Of course I’m sure further issues will come from incestuous code, i.e. AIs train on all publicly listed GitHub code.

    Vibe coders are working on a lot of “projects” that they upload to GitHub. Now new AI can pick up all the mistakes of its predecessors on top of making its own new ones.

  • Kissaki@programming.dev

    A task that might have taken five hours assisted by AI, and perhaps ten hours without it, is now more commonly taking seven or eight hours, or even longer.

    What kind of work do they do?

    in my role as CEO of Carrington Labs, a provider of predictive-analytics risk models for lenders. My team has a sandbox where we create, deploy, and run AI-generated code without a human in the loop. We use them to extract useful features for model construction, a natural-selection approach to feature development.

    I wonder what I’m supposed to imagine this is actually doing, and how. How do they interface with the without-a-human loop?

    Either way, they do seem to have a (small, narrow) systematic test case, and enough variance in the product to be useful at least anecdotally / as a sample case.
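
    My guess at what that “natural selection” sandbox could mean, as a minimal sketch (every name and detail below is invented for illustration, not from Carrington Labs): generate candidate feature transforms, score each against the target on data, and keep the fittest, with no human reviewing individual candidates.

```python
# Hypothetical sketch of a "natural selection" feature loop:
# score candidate feature transforms against the target and
# select the best-performing one. All names are invented.
import random

random.seed(0)

# Toy dataset: y depends (noisily) on x squared.
xs = [random.uniform(-1, 1) for _ in range(200)]
ys = [3 * x * x + random.gauss(0, 0.1) for x in xs]

def fitness(feature):
    # Absolute Pearson correlation between feature(x) and y.
    fs = [feature(x) for x in xs]
    mf = sum(fs) / len(fs)
    my = sum(ys) / len(ys)
    cov = sum((f - mf) * (y - my) for f, y in zip(fs, ys))
    vf = sum((f - mf) ** 2 for f in fs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return abs(cov / (vf * vy)) if vf > 0 else 0.0

# Candidate transforms, standing in for AI-generated feature code.
candidates = {
    "identity": lambda x: x,
    "square": lambda x: x * x,
    "cube": lambda x: x ** 3,
    "negate": lambda x: -x,
}

# Selection step: rank by fitness, keep the winner for the model.
ranked = sorted(candidates, key=lambda n: fitness(candidates[n]), reverse=True)
best = ranked[0]
print(best, round(fitness(candidates[best]), 3))
```

    In a real version the candidates would presumably be generated code run in isolation, with survivors fed back as seeds for the next round; that at least explains how it could run without a human in the loop.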