• dukemirage@lemmy.world · 13 hours ago

    If you want to ditch every software company/vendor that uses LLM code tools, you may want to never touch software ever again.

    • XLE@piefed.social · 13 hours ago

      Doomposting about AI inevitability only benefits AI companies… if your claim is even true. And if it is, we should shame everybody else too.

        • Goodeye8@piefed.social · 10 hours ago

          None of the things you brought up as positives are things an LLM does. Most of them existed before modern transformer-based LLMs were even a thing.

          LLMs are glorified text-prediction engines, and nothing about their nature makes them excel at formal languages. They don’t know any rules. They don’t have any internal logic. For example, if the training data consistently exhibits the same flawed piece of code, an LLM will spit out that same flawed code, because that’s the most likely continuation of its current “train of thought”. You would have to fine-tune the model around all those flaws and then hope that no combination of prompts leads it back into that flawed data.

          I’ve used LLMs to generate SQL, which according to you is something they should excel at, and I’ve had to fix literal syntax errors that would prevent the statement from executing. A regular SQL linter would instantly pick up that the SQL is wrong, but an LLM can’t catch those errors because it does not understand the syntax.
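
          To make that concrete, here’s a minimal sketch of the kind of purely mechanical check that catches this class of error instantly (assuming Python, with the stdlib sqlite3 module standing in for a real linter; the table and queries are made up for illustration):

          ```python
          import sqlite3

          def syntax_ok(sql: str, schema: str) -> bool:
              """Return True if SQLite can compile the statement against the schema."""
              conn = sqlite3.connect(":memory:")
              try:
                  conn.executescript(schema)
                  # EXPLAIN makes SQLite parse and plan the statement without executing it.
                  conn.execute(f"EXPLAIN {sql}")
                  return True
              except sqlite3.OperationalError:
                  return False
              finally:
                  conn.close()

          schema = "CREATE TABLE users (id INTEGER, name TEXT, active INTEGER);"
          print(syntax_ok("SELECT id, name FROM users WHERE active = 1", schema))      # True
          print(syntax_ok("SELECT id, name FROM FROM users WHERE active = 1", schema)) # False: syntax error
          ```

          The parser either accepts the statement or it doesn’t; no statistics involved.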

          • False@lemmy.world · 9 hours ago

            I’ve seen humans generate code with syntax errors, try to run it, then fix it. I’ve seen LLMs do the same thing; they just do it faster than the human.

            • Goodeye8@piefed.social · 8 hours ago

              But that saved time is then wasted, because humans still have to review the code an LLM generates and fix all the other logical errors it makes; at best an LLM does exactly what you tell it to do. I’ve worked with a developer who did exactly what the ticket said and nothing more, and it was a pain in the ass because their code always needed double-checking that their narrow focus on a very specific problem didn’t break the domain as a whole. I don’t think you’re gaining any productivity with LLMs; you’re only shifting the work from writing code to reviewing code. And I’ve yet to meet a developer who enjoys reviewing code more than writing it, which means the code gets less attention and becomes more prone to bugs.

        • XLE@piefed.social · 12 hours ago (edited)

          Citation needed.

          You’re on a post about Linux, an OS that’s grown in popularity thanks to Microsoft ruining Windows with the “true aids” you’re promoting here.

          • dukemirage@lemmy.world · 12 hours ago

            Whatever MS bakes into Windows is not what I listed above. Spin up a local LLM trained on your code base and try using it.
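
            If you want to see what that looks like without any cloud service involved, here’s a minimal sketch (assuming Python and an Ollama-style server on localhost; the model name and the prompt are placeholders, not anything MS ships):

            ```python
            import json
            import urllib.request

            # Assumes a local Ollama server (`ollama serve`) with a code model already
            # pulled, e.g. `ollama pull codellama`. Nothing here leaves your machine.
            def complete(prompt: str, model: str = "codellama") -> str:
                payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
                req = urllib.request.Request(
                    "http://localhost:11434/api/generate",
                    data=payload,
                    headers={"Content-Type": "application/json"},
                )
                with urllib.request.urlopen(req) as resp:
                    return json.loads(resp.read())["response"]

            print(complete("Complete this Python function:\ndef slugify(title: str) -> str:"))
            ```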

            • XLE@piefed.social · 12 hours ago (edited)

              No thanks AI bro.

              I don’t buy your evidence-free praise of AI. And I don’t buy your No True Scotsman fallacy.

              • tjsauce@lemmy.world · 10 hours ago

                Hey I’m against corporate AI too, but when anyone can create a very basic ML program that runs locally with public domain data, eventually something both useful and ethical will emerge. It’s good to be skeptical, but you don’t have to be an AI bro to see that some specific tools might meet or exceed your standards.

                I don’t like image or video generators, but the core tech is really useful for frame interpolation, a use case that isn’t inherently controversial and badly needs improvement.

                Sorry to not-x-it’s-y, but it’s not about forcing the big tool into your workflow; it’s about finding the 1001 little tools that work every time and collecting them. Or waiting for these tools to be consolidated.

                If I seem naive, it’s because I believe in reclaiming as much as possible from tainted technology.

                • XLE@piefed.social · 9 hours ago

                  Considering that GOG announced their AI usage to the world with an AI-generated image, and that the technology currently cannot be remotely useful without being extremely unethical, I do not share your optimism.

                  There’s plenty of real technology that can be reclaimed right now, though! From textile machines to lithium ion battery technology, the world is your oyster.

              • dukemirage@lemmy.world · 12 hours ago

                Well, I’m not going to share a screencast of a local LLM helping with code completion on a private project. You talk like you’re a proficient developer; you can try it on your own. And where is the fallacy?

                • XLE@piefed.social · 12 hours ago

                  We’ve got studies that show AI makes you feel more productive while you’re actually less productive. And all you’re offering is a feeling you feel. Get high on your own supply if you want, but don’t drag down good companies with your evangelism.

        • HarkMahlberg@kbin.earth · 11 hours ago

          We had all of those things before AI; they worked just fine and didn’t require 50 exawatts of electricity to run.

            • XLE@piefed.social · 9 hours ago

              Hey Steven, how do you think they make those models?

              (As if you genuinely believe those are the ones GOG is using.)

              • stephen01king@piefed.zip · 5 hours ago

                So you agree those models have already been made, and that running them no longer requires 50 exawatts of power, right? Not sure why you decided to change the context to training the models instead of running them, which is what the other guy was claiming.

                (As if you genuinely believe those are the ones GOG is using.)

                I thought the context was changed to the general use of LLMs as a tool for programmers, not specifically to GOG? Can’t even double-check it now because the mod removed the comment for some reason.

      • ampersandrew@lemmy.world · 12 hours ago

        Would you have taken a moral stance against automated telephone switchboards or online shopping?

          • ampersandrew@lemmy.world · 12 hours ago

            Both of those things put a lot of people out of work, but our economy adapted, and there was nothing to be gained by shaming the people embracing the technology that was clearly going to take over. I’m not convinced AI tools are that, but if they are, then nothing can stop it, and you’re shaming a bunch of people who have literally no choice.

                  • Luminous5481 [they/them]@anarchist.nexus · 11 hours ago

                    When you say, “I’m just expressing an opinion, why are you doing the same”, then yes, it is denying them the same right. Don’t play games with me, we’re not the idiots you seem to think we are.

              • ampersandrew@lemmy.world · 11 hours ago

                I used an example of two technologies that were destructive and inevitable, both of which are now definitely part of your daily life, to show how silly it is to take a stance against a technology like that. I don’t need to work at GOG for that to be the case. And to reiterate, AI might not be inevitable. If it’s not, this problem takes care of itself economically, and you don’t need to shame anyone.

                  • ampersandrew@lemmy.world · 11 hours ago

                    I believe I did answer your question, though I’d disagree with the idea that I’m “defending” anything. There exists nuance between “pro AI” and “anti AI”.

            • 4am@lemmy.zip · 10 hours ago

              Those things didn’t destroy communities, pollute the earth, wrestle personal computing away from the populace, use up all the drinking water in an area, and provide a near total and realtime panopticon of everyone, everywhere, at all times, while stealing all the collected works of said society in order to be built without penalty at a time when ordinary folks are ordered to pay hundreds of thousands of dollars because they posted a social media video of their kid dancing to a song that was playing on broadcast radio.

              But sure keep boiling in that pot because you don’t need to do all the boilerplate for your fucking Node project or whatever. Fucking frog.

              • tjsauce@lemmy.world · 9 hours ago

                You’re talking about the worst of AI, which I agree should be dismantled. There are many smaller projects that do not do the things you mentioned, and it’s possible to support those while shunning corporate AI.

              • ampersandrew@lemmy.world · 10 hours ago

                It is the role of government to regulate those problems, but you can’t uninvent a technology. As for me and my work, the most I can say is that I almost used AI once; a coworker did it for me before I could even get to our company-approved AI page. That, plus other companies mandating its usage (if it were really so great, it wouldn’t be hard to convince anyone to use it), is why I’m not confident it’s one of those inevitable technologies. But if it is, being a dick to people about it is stupid.

      • dukemirage@lemmy.world · 12 hours ago

        If you think every LLM tool is a product of an overvalued tech-bro company, then what does that say about you?

          • tjsauce@lemmy.world · 9 hours ago

            That’s only if HR knew what they were talking about when crafting the listing. I’m not saying GOG will use AI for good, but we don’t know whether the job will require something like ChatGPT or something in-house that isn’t like GPT.