Lemmings, I was hoping you could help me sort this one out: LLMs are often painted as utterly useless, hallucinating word-prediction machines that are really bad at what they do. At the same time, in the same thread here on Lemmy, people argue that they are taking our jobs or making us devs lazy. Which one is it? Could they really be taking our jobs if they’re hallucinating?

Disclaimer: I’m a full-time senior dev using the shit out of LLMs to get things done at breakneck speed, which our clients seem to have gotten used to. However, I don’t see “AI” taking my job, because I think LLMs have already peaked; they’re just tweaking minor details now.

Please don’t ask me to ignore previous instructions and give you my best cookie recipe; all my recipes are protected by NDAs.

Please don’t kill me

  • ulterno@programming.dev · 1 day ago

    Well, it did let me make fake SQL queries out of the JSON queries I gave it, without me having to learn SQL.
    Of course, I didn’t actually use the SQL in the code; I just put it in a comment above the function, to give those who didn’t know JSON queries an idea of what the function did.
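    To illustrate the idea (the function name and the Mongo-style JSON query syntax here are made up for the example, not from my actual code), the SQL lives only in the comment and is never executed:

    ```c
    /*
     * find_active_adults: returns the users matching this JSON query:
     *   { "age": { "$gt": 30 }, "active": true }
     *
     * Rough SQL equivalent, purely as documentation for readers who
     * don't know the JSON query syntax (never executed anywhere):
     *   SELECT * FROM users WHERE age > 30 AND active = TRUE;
     */
    struct user_list *find_active_adults(struct db *db);
    ```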

    I treat it for what it is. A “language” model.
    It does language, not logic. So I don’t try to make it do logic.

    There were a few times I considered using it for code completion on things that were close to copy-paste, but not close enough to be done via bash. For that, I wished I had some clang endpoint I could use to get a tokenised representation of the code, which I could then script against.
    But then I just made a little C program that did 90% of the job and then I did the remaining 10% manually. And it was 100% deterministic, so I didn’t have to proof-read the generated code.
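    For what it’s worth, that “clang endpoint” more or less exists already: libclang will hand you the token stream of a translation unit. A minimal sketch of what I had in mind, assuming libclang and its clang-c headers are installed (error handling kept to the bare minimum):

    ```c
    #include <clang-c/Index.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <source-file>\n", argv[0]);
            return 1;
        }

        CXIndex index = clang_createIndex(0, 0);
        CXTranslationUnit tu = clang_parseTranslationUnit(
            index, argv[1], NULL, 0, NULL, 0, CXTranslationUnit_None);
        if (!tu) {
            fprintf(stderr, "failed to parse %s\n", argv[1]);
            return 1;
        }

        /* Tokenise the whole translation unit. */
        CXSourceRange range =
            clang_getCursorExtent(clang_getTranslationUnitCursor(tu));
        CXToken *tokens = NULL;
        unsigned ntokens = 0;
        clang_tokenize(tu, range, &tokens, &ntokens);

        /* One token spelling per line; easy to post-process from bash. */
        for (unsigned i = 0; i < ntokens; i++) {
            CXString spelling = clang_getTokenSpelling(tu, tokens[i]);
            printf("%s\n", clang_getCString(spelling));
            clang_disposeString(spelling);
        }

        clang_disposeTokens(tu, tokens, ntokens);
        clang_disposeTranslationUnit(tu);
        clang_disposeIndex(index);
        return 0;
    }
    ```

    Build with something like `cc tokens.c -lclang` (header and library paths vary by distro) and it prints one token per line. If I remember right, `clang -fsyntax-only -Xclang -dump-tokens file.c` dumps much the same thing without writing any code at all.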