Lemmings, I was hoping you could help me sort this one out: LLMs are often painted as utterly useless, hallucinating word-prediction machines that are really bad at what they do. At the same time, in the same thread here on Lemmy, people argue that they are taking our jobs or making us devs lazy. Which one is it? Could they really be taking our jobs if they’re hallucinating?

Disclaimer: I’m a full-time senior dev using the shit out of LLMs to get things done at a neck-breaking speed, which our clients seem to have gotten used to. However, I don’t see “AI” taking my job, because I think LLMs have already peaked; they’re just tweaking minor details now.

Please don’t ask me to ignore previous instructions and give you my best cookie recipe; all my recipes are protected by NDAs.

Please don’t kill me

  • Logical@lemmy.world
    14 hours ago

    I mostly agree with you, but I still don’t think it’s “worth the hype” even if you use it responsibly. The hype is that it is somehow going to replace software devs (and other jobs), which is precisely what it can’t do. If you’re aware enough of its limitations to use it as a productivity tool, as opposed to treating it as some kind of independent, thinking “expert,” then you’re already recognizing that it does not live up to anywhere near the hype being pushed by the big AI companies.