• Technus@lemmy.zip · 2 days ago (+123, −6)

    I’ve maintained for a while that LLMs don’t make you a more productive programmer, they just let you write bad code faster.

    90% of the job isn’t writing code anyway. Once I know what code I wanna write, banging it out is just pure catharsis.

    Glad to see there’s other programmers out there who actually take pride in their work.

    • Dr. Wesker@lemmy.sdf.org · edited · 2 days ago (+39, −8)

      It’s been my experience that the quality of code is greatly influenced by the quality of your project instructions file, and your prompt. And of course what model you’re using.

      I am not necessarily a proponent of AI, I just found myself being reassigned to a team that manages AI for developer use. Part of my responsibilities has been to research how to successfully and productively use the tech.

      • Technus@lemmy.zip · 2 days ago (+29)

        But at a certain point, it seems like you spend more time babysitting and spoon-feeding the LLM than you do writing productive code.

        There’s a lot of busywork that I could see it being good for, like if you’re asked to generate 100 test cases for an API with a bunch of tiny variations, but that kind of work is inherently low value. And in most cases you’re probably better off using a tool designed for the job, like a fuzzer.
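        For the sake of illustration, the kind of tiny-variation busywork described above can often be generated mechanically with the standard library, no LLM required. This is a minimal sketch; the `parse_amount` function is a hypothetical API under test, not anything from the thread:

```python
import itertools
import unittest

def parse_amount(text: str) -> int:
    """Hypothetical API under test: parse '19.99 USD'-style strings to cents."""
    value, currency = text.strip().split()
    if currency not in {"USD", "EUR"}:
        raise ValueError(f"unsupported currency: {currency}")
    return int(float(value) * 100)

class TestParseAmount(unittest.TestCase):
    def test_variations(self):
        # Enumerate the tiny variations instead of hand-writing (or
        # LLM-generating) dozens of near-identical test cases.
        values = ["0", "1", "19.99"]
        currencies = ["USD", "EUR"]
        paddings = ["", "  "]
        for value, cur, pad in itertools.product(values, currencies, paddings):
            with self.subTest(value=value, currency=cur, pad=repr(pad)):
                expected = int(float(value) * 100)
                self.assertEqual(parse_amount(f"{pad}{value} {cur}{pad}"), expected)
```

        Twelve cases from three short lists; a property-based tool like Hypothesis, or a fuzzer, scales the same idea much further.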

        • Dr. Wesker@lemmy.sdf.org · edited · 2 days ago (+17, −4)

          But at a certain point, it seems like you spend more time babysitting and spoon-feeding the LLM than you do writing productive code.

          I’ve found it pretty effective not to babysit, but instead to have the model iterate on its instructions file. If it did something wrong or unexpected, I explain what I wanted it to do and ask it to update its project instructions to avoid the pitfall in future. It’s more akin to calm, positive reinforcement.

          Obviously YMMV. I’m in charge of a large codebase of Python cron automations that interact with a handful of services and APIs. I’ve rolled a ~600-line instructions file that has allowed me to pretty successfully use Claude to stand up, from scratch, full object-oriented clients, complete with dependency injection, schema and contract data models, unit tests, etc.
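          For readers unfamiliar with the pattern being described, here is a rough sketch of what a client with injected dependencies and schema/contract data models can look like in Python. The `WeatherClient`, `Transport` protocol, and `FakeTransport` are hypothetical illustrations, not the commenter’s actual code:

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass(frozen=True)
class Forecast:
    """Schema/contract data model for the API response."""
    city: str
    temp_c: float

class Transport(Protocol):
    """Injected dependency: anything that can fetch JSON for a path."""
    def get_json(self, path: str) -> dict: ...

class WeatherClient:
    def __init__(self, transport: Transport) -> None:
        # Dependency injection: no hard-wired HTTP session inside the client.
        self._transport = transport

    def forecast(self, city: str) -> Forecast:
        raw = self._transport.get_json(f"/forecast/{city}")
        return Forecast(city=raw["city"], temp_c=float(raw["temp_c"]))

class FakeTransport:
    """Unit tests inject a fake instead of a real HTTP session."""
    def get_json(self, path: str) -> dict:
        return {"city": path.rsplit("/", 1)[-1], "temp_c": 21.5}

client = WeatherClient(FakeTransport())
assert client.forecast("Oslo") == Forecast(city="Oslo", temp_c=21.5)
```

          The point of the injection is the last few lines: the unit test never touches the network, because the transport is swapped for a fake.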

          I do end up having to make stylistic tweaks, and sometimes reinforce things like DRY, but I actually enjoy that part.

          EDIT: Whenever I begin to feel like I’m babysitting, it’s usually due to context pollution and the best course is to start a fresh agent session.

    • Cyberflunk@lemmy.world · 23 hours ago (+3, −14)

      Your experience isn’t other people’s experience. Just because you can’t get results doesn’t mean the technology is invalid, just your use of it.

      “Skill issue,” as the youngers say.

      • Feyd@programming.dev · 19 hours ago (+10, −1)

        It’s interesting that all the devs I already respected don’t use it, or use it very sparingly, and many of the devs I least respected sing its praises incessantly. Seems to me like a “skill issue” is what leads to thinking this garbage is useful.

        • FizzyOrange@programming.dev · 14 hours ago (+2, −4)

          Everyone is talking past each other because there are so many different ways of using AI and so many things you can use it for. It works ok for some, it fails miserably for others.

          Lots of people only see one half of that and conclude “it’s shit” or “it’s amazing” based on an incomplete picture.

          The devs you respect probably aren’t working on crud apps and landing pages and little hacky Python scripts. They’re probably writing compilers and game engines or whatever. So of course it isn’t as useful for them.

          That doesn’t mean it doesn’t work for people mocking up a website or whatever.

      • AnarchistArtificer@slrpnk.net · 22 hours ago (+10, −2)

        I’d rather hone my skills at writing better, more intelligible code than spend that same time learning how to make LLMs output slightly less shit code.

        Whenever we don’t actively use and train our skills, they will inevitably atrophy. Something I think about quite often on this topic is Plato’s argument against writing. His view is that writing things down is “a recipe not for memory, but for reminder”, leading to a reduction in one’s capacity for recall and thinking. I don’t disagree with this, but where I differ is that I find it a worthwhile tradeoff when accounting for all the ways that writing increases my mental capacities.

        For me, weighing that tradeoff is the most important gauge of whether a given tool is worthwhile. And personally, using an LLM for coding is not worth it when I consider what I gain vs. lose by prioritising it over growing my existing skills and knowledge.