• 2pt_perversion@lemmy.world
    18 days ago

    There is this seeming need to discredit AI from some people that goes overboard. Some friends and family who have never really used LLMs outside of Google search feel compelled to tell me how bad it is.

    But generative AIs are really good at tasks I wouldn’t have imagined a computer doing just a few years ago. Even if they plateaued right where they are now, it would lead to major shakeups in humanity’s current workflow. It’s not just hype.

    The part that is overhyped is companies trying to jump the gun and wholesale replace workers with unproven AI substitutes. And of course the companies who try to shove AI where it doesn’t really fit, like AI-enabled fridges and toasters.

    • sudneo@lemm.ee
      18 days ago

      Even if they plateaued in place where they are right now it would lead to major shakeups in humanity’s current workflow

      Like which one? ChatGPT has been out for two years now, and we already have quite a lot of (good?) models. Which shakeup do you think is happening or going to happen?

      • locuester@lemmy.zip
        18 days ago

        Computer programming has radically changed. It’s a huge help having LLM autocomplete and chat built in, in IDEs like Cursor and Windsurf.

        I’ve been a developer for 35 years. This is shaking it up as much as the internet did.

        • sudneo@lemm.ee
          18 days ago

          I hardly see it as changed, to be honest. I work in the field too, and I can imagine LLMs being good at producing decent boilerplate straight out of documentation, but nothing more complex than that.

          I often use LLMs to work on my personal projects and - for example - Claude or ChatGPT 4o often spit out programs that don’t compile, use nonexistent functions, are bloated, etc. Possibly for languages with more training data (like Python) they do better, but I can’t see it as a “radical change”; it’s more like a well-configured snippet plugin and autocomplete feature.

          LLMs can’t count, and can’t analyze novel problems (by definition) to provide innovative solutions…why would they radically change programming?

          • areyouevenreal@lemm.ee
            18 days ago

            ChatGPT 4o isn’t even the most advanced model, yet I have seen it do things you say it can’t. Maybe work on your prompting.

            • sudneo@lemm.ee
              18 days ago

              That is my experience: it’s generally quite decent for small and simple stuff (as I said, a distillation of the documentation). I use it for Rust, where I am sure the training material was much smaller than for other languages. It’s not a matter of prompting though; it’s not my prompt that makes it hallucinate functions that don’t exist in libraries or write code that doesn’t compile. It’s a feature of the technology itself.

              GPTs are statistical text generators after all, they don’t “understand” the problem.
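              To make the “statistical text generator” point concrete, here’s a toy bigram model: it picks each next word purely from co-occurrence counts, with no notion of truth or meaning. (The corpus and code are invented for illustration; an LLM is a vastly scaled-up, far more sophisticated version of the same underlying idea.)

              ```python
              import random
              from collections import defaultdict

              # Toy "statistical text generator": a bigram model over a tiny corpus.
              # Each next word is chosen purely from co-occurrence counts.
              corpus = "the cat sat on the mat the cat ate the fish".split()

              model = defaultdict(list)
              for prev, nxt in zip(corpus, corpus[1:]):
                  model[prev].append(nxt)

              def generate(start, length=6, seed=0):
                  random.seed(seed)
                  out = [start]
                  for _ in range(length - 1):
                      followers = model.get(out[-1])
                      if not followers:  # dead end: word never had a successor
                          break
                      out.append(random.choice(followers))
                  return " ".join(out)

              print(generate("the"))
              ```

              The output is locally plausible but globally meaningless, which is exactly the failure mode scaled-up models show as hallucination.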

              • agamemnonymous@sh.itjust.works
                18 days ago

                It’s also a pretty young technology; human toddlers hallucinate and make things up. Adults too. Even experts are known to fall prey to bias and misconception.

                I don’t think we know nearly enough about the actual architecture of human intelligence to start asserting an understanding of “understanding”. I think it’s a bit foolish to claim with certainty that LLMs in a MoE framework with self-review fundamentally can’t get there. Unless you can show me, materially, how human “understanding” functions, we’re just speculating on an immature technology.

                • sudneo@lemm.ee
                  18 days ago

                  As much as I agree with you, humans can learn a bunch of stuff without first learning the content of the whole internet and without the computing power of a datacenter or consuming the energy of Belgium. Humans learn to count at an early age too, for example.

                  I would say that the burden of proof is therefore reversed. Unless you demonstrate that this technology doesn’t have the natural and inherent limits that statistical text (or pixel) generators have, we can assume that our minds work differently.

                  Also, you say it’s an immature technology, but this technology is not fundamentally (i.e. in terms of principle) different from Weizenbaum’s ELIZA in the '60s. We might have refined the models and thrown a ton of data and computing power at them, but we are still talking about programs that use similar principles.

                  So yeah, we don’t understand human intelligence, but we can point to certain features that GPTs absolutely lack, like a concept of truth, which comes naturally to humans.

        • areyouevenreal@lemm.ee
          18 days ago

          Exactly this. Things have already changed, and are changing as more and more people learn how and where to use these technologies. I have even seen teachers use this stuff who have a limited grasp of technology in general.

    • Modern_medicine_isnt@lemmy.world
      18 days ago

      See now, I would prefer AI in my toaster. It should be able to learn to adjust the cook time to what I want no matter what type of bread I put in it. Though is that really AI? It could be. Same with my fridge: learn what gets used and what doesn’t, then give my wife the numbers on that damn clear box of salad she buys at Costco every time, which takes up a ton of space and always goes bad before she eats even 5% of it. These would be practical benefits to the crap that is day-to-day life, and far more impactful than search results I can’t trust.
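      For what it’s worth, the “learning toaster” doesn’t even need a neural net; a per-bread-type running adjustment would do. A minimal sketch (all names and the feedback scale here are made up for illustration):

      ```python
      # Sketch of an adaptive toaster: keep one learned cook time per bread
      # type and nudge it toward what the user wanted after each use.
      class LearningToaster:
          def __init__(self, default_seconds=120.0, learning_rate=0.3):
              self.times = {}              # bread type -> learned cook time
              self.default = default_seconds
              self.lr = learning_rate

          def cook_time(self, bread):
              return self.times.get(bread, self.default)

          def feedback(self, bread, too_light):
              """too_light in -1..1: positive = undercooked, negative = burnt."""
              current = self.cook_time(bread)
              self.times[bread] = current * (1 + self.lr * too_light)

      toaster = LearningToaster()
      toaster.feedback("sourdough", 0.5)   # came out too light -> cook longer
      toaster.feedback("sourdough", 0.2)
      print(toaster.cook_time("sourdough"))
      ```

      Whether you call that AI or just a thermostat with memory is exactly the labeling question raised below.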

      • ssfckdt@lemmy.blahaj.zone
        18 days ago

        There’s a good point here: about 80% of what we’re calling AI right now isn’t even AI, or even an LLM. It’s just… an algorithm, code, plain old math. I’m pretty sure someone is going to refer to a calculator as AI soon. “Wow, it knows math! Just like a person! Amazing technology!”

        (That’s putting aside the very question of whether LLMs should even qualify as AIs at all.)

        • Modern_medicine_isnt@lemmy.world
          18 days ago

          In my professional experience, AI seems to be just a faster way to generate an algorithm that is really hard to debug. Though I am DevOps/SRE, so I am not as deep in it as the devs.

          • ssfckdt@lemmy.blahaj.zone
            11 days ago

            I’m reminded of the time researchers used an evolutionary algorithm to devise a circuit that would emit a tone on certain audio inputs and not on others. They examined the resulting circuit and found an extra vestigial bit, but when they cut it off, the chip stopped working, so they re-enabled it. Then they wanted to show off their research at a panel, and at the panel it completely failed. Dismayed, they brought it back to their lab to figure out why it had stopped working, and it suddenly started working fine.

            After a LOT of troubleshooting they eventually discovered that the circuit was generating the tone by using the extra vestigial bit as an antenna that picked up emissions from a CRT in the lab and downconverted them to the desired tone frequency. Turn off the antenna, no signal. Take the chip away from that CRT, no signal.

            That’s what I expect LLMs will make: complex, arcane spaghetti stuff that works, but if you look at it funny it won’t work anymore, and nobody knows how it works at all.
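            The search described in the story looks roughly like this toy genetic algorithm (everything here is invented for illustration; the real experiment evolved FPGA configurations, not bitstrings). The point is that the winning genome carries no design rationale at all:

            ```python
            import random

            # Minimal genetic algorithm: evolve bitstrings toward a fitness
            # target by keeping the best half and mutating copies of them.
            random.seed(42)
            TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

            def fitness(genome):
                return sum(g == t for g, t in zip(genome, TARGET))

            def mutate(genome, rate=0.1):
                return [1 - g if random.random() < rate else g for g in genome]

            population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
            for _ in range(50):
                population.sort(key=fitness, reverse=True)
                survivors = population[:10]          # elitism: keep the best half
                population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

            best = max(population, key=fitness)
            print(best, fitness(best))
            ```

            You get a solution, but no explanation of why it is the solution - which is how load-bearing “vestigial” parts like that antenna happen.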

  • nroth@lemmy.world
    18 days ago

    “Built to do my art and writing so I can do my laundry and dishes” – embodied agents are where the real value is. The chatbots are just fancy tech demos that folks started selling because people were buying.

    • nroth@lemmy.world
      18 days ago

      Though the image generators are actually good. The visual arts will never be the same after this.

      • LifeInMultipleChoice@lemmy.world
        18 days ago

        Compare it to the microwave. Is it good at something? Yes. But if you shoot your fucking turkey in it at Thanksgiving and expect good results, you’re ignorant of how it works. Most people are expecting language models to do shit they aren’t meant to. Most of it isn’t new technology either, but old tech that people slapped a label on. I wasn’t playing Soulcalibur on the Dreamcast against AI opponents… yet now they are called AI opponents with no requirement to be different. GoldenEye on N64 was man vs. AI. Madden 1995… AI. “Where did this AI boom come from?!”

        Marketing and mislabeling. Online classes, call it AI. Photo editors, call it AI.

      • TheBrideWoreCrimson@sopuli.xyz
        17 days ago

        I’ve been thinking about this a lot recently. No, we’re not there yet, and may never be. Compare what Jesar, one of my favorite artists, could do - and that was in the oh-so-long-ago 2000s - with what an AI can do. It’s simply not up to the task. I do use AI a lot to create what is basically utility art, but it depends on pre-defined textual or visual inputs, whereas only an artist can have divine inspiration. AI is more of a sterile tool, like interactive clipart, if you will.

    • bradd@lemmy.world
      18 days ago

      Eh, my best coworker is an LLM. Full of shit, like the rest of them, but always available and willing to help out.

  • computerscientistII@lemm.ee
    18 days ago

    ChatGPT has saved me a lot of time. Need to sign up some of my pupils for a competition by uploading their data in a CSV file to some platform? Just copy and paste their data into ChatGPT and prompt it to create the file. The boss (headmaster) wants some reasoning why I need paid time for certain projects? Let ChatGPT do the reasoning. Need some exercises for one of my classes that doesn’t really come to grips with while-loops? Let ChatGPT create those exercises (some smartasses will of course then have ChatGPT solve them). The list goes on…
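    The CSV task is exactly the kind of boilerplate these models are reliable at; the code they tend to produce for it looks roughly like this (column names and data are invented for illustration - the real platform’s columns will differ):

    ```python
    import csv
    import io

    # Turn pasted pupil records into CSV text ready for upload.
    pupils = [
        {"name": "Ada Example", "class": "10b", "email": "ada@school.example"},
        {"name": "Max Example", "class": "10b", "email": "max@school.example"},
    ]

    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=["name", "class", "email"])
    writer.writeheader()
    writer.writerows(pupils)
    print(buffer.getvalue())
    ```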

  • UraniumBlazer@lemm.ee
    18 days ago

    I have no idea how people can consider this to be a hype bubble, especially after the o3 release. It smashed the ARC-AGI benchmark on the performance front. It ranks as the 175th-best competitive coder in the world on Codeforces’ leaderboard.

    o3 proved that it is possible to have at least an expert AGI, if not a virtuoso AGI (according to DeepMind’s definition of AGI levels). Sure, it’s not economical yet. But it will get there very soon (just like how the earlier GPTs were a lot dumber and took a lot more energy than the newer, smaller-parameter models).

    Please remember - fight to seize the means of production. Do not fight the means of production themselves.

    • dustyData@lemmy.world
      18 days ago

      Unless we invent cold fusion within the next 5 years, they will never be economical. They are among the most energy-inefficient things ever invented by humanity, and all prediction models state that it will cost more energy, not less, to keep making them better. They will never be energy-efficient nor economical in their current state, and most companies are out of ideas on how to shake things up. Even the people who created generative models agree that they have just been brute-forcing it by making the models larger, with more energy consumption. When you try to make them smaller or more energy-efficient, they fall off a performance cliff and only produce garbage. I’m sure there are researchers doing cool stuff, but it is neither economical nor efficient.