• we are all@crazypeople.online · 1 day ago

    Well, we often conflate predictions about AGI with ASI and a singularity event, which has been predicted for decades based on several trends over the years: advancing hardware, software, and throughput, and of course neuroscience.

    ASI is more a prediction about capability: that even imitating intelligence convincingly enough will, after a few iterations, give rise to tangible, genuinely higher intelligence, which then iterates and improves on its own. Once those improvements are beyond human capability, we have our singularity.

    Back to just AGI: it seems achievable by mimicking the processing power of a human mind. That isn’t currently possible, but we are steadily working toward it and have had some measure of success. We may decide that certain aspects of artificial intelligence have been reached before that point, but IMO it feels like we’re only a few years away.

    • gbzm@piefed.social · 1 day ago

      Alright. I’d already seen that stuff, and I’ve never encountered really convincing arguments for these predictions - just pretty sci-fi-esque speculation.
      I’m not at all convinced we have anything even remotely resembling “mimicking the processing power of a human mind” - neither through material simulation of a complete brain, with the multisensory interactions with an environment needed to let it grow into a functioning mind, nor through the party tricks we call AI these days, which boil down to Chinese Rooms built from thousands of GPUs’ worth of piecewise linear regressions, and which are unable to reason or even generalize beyond their training distributions, according to the source.
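      Here’s a minimal numpy sketch of what I mean by “piecewise linear regressions” - toy weights of my own, not any real model, just to show that between its kinks a ReLU net is exactly a linear fit, many pieces glued together:

      ```python
      import numpy as np

      # Toy ReLU net with made-up weights (illustrative only, not a real model).
      rng = np.random.default_rng(0)
      W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=8)  # hidden layer
      W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)  # output layer

      def relu_net(x):
          # max(0, .) makes each unit piecewise linear in x, and sums of
          # piecewise linear functions are still piecewise linear.
          h = np.maximum(0.0, W1 @ x + b1)
          return W2 @ h + b2

      # Sample the net on a grid and count the distinct slopes it produces.
      xs = np.linspace(-3, 3, 601)
      ys = np.array([relu_net(np.array([x]))[0] for x in xs])
      slopes = np.diff(ys) / np.diff(xs)
      print("distinct slopes:", np.unique(slopes.round(4)).size)
      # Prints a small number: a handful of linear pieces, not magic.
      ```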
      I guess embedding cultivated neurons on microchips could make new things possible, but even then I wouldn’t be surprised if making a human-level intelligence turned out to require building an actual whole-ass human, or at least most of one. Given where we are with that, I’d surmise a timescale of decades to centuries, if ever - which could well be longer than the time climate change leaves us with the level of industry needed to even attempt it.

        • Perspectivist@feddit.uk · 1 day ago

        Can you think of a reason why we wouldn’t ever get there? We know it’s possible - our brains can do it. Our brains are made of matter, and so are computers.

        The timescale isn’t the important part - it’s the apparent inevitability of it.

          • gbzm@piefed.social · 1 day ago

          I’ve given reasons. We can imagine Dyson spheres, and we know they’re possible - that doesn’t mean we can actually build them, or ever will be able to.

          The fact that our brains can do things we don’t even understand doesn’t necessarily mean rocks can do them. If it somehow requires the complexity of biology, then depending on how much of that complexity it requires, it could just end up meaning creating a fully fledged human - which we can already do, and which hasn’t caused a singularity, because creating a human costs resources even when we do it the natural way.

            • Perspectivist@feddit.uk · 22 hours ago

            I don’t see any reason to assume substrate dependence either, since we already have narrowly intelligent, non-biological systems that are superhuman within their specific domains. I’m not saying it’s inconceivable that there’s something uniquely mysterious about the biological brain that’s essential for true general intelligence - it just seems highly unlikely to me.

              • gbzm@piefed.social · 18 hours ago

              The thing is, I’m not assuming substrate dependence, and I’m not saying there’s something uniquely mysterious about the biological brain. I’m saying that what we know about “intelligence” right now is that it’s an emergent property observed in brains that have interacted with a physical, natural environment through complex sensory feedback loops, materialized by the rest of the human body. That is substrate-independent, but the only thing rocks can do for sure is simulate this whole system - and good simulations of complicated systems are not an easy feat at all. It’s not at all certain we’ll ever be able to do it without it requiring more resources than it’s worth.

              The things we’ve done that most closely resemble human intelligence in computers are drastic oversimplifications of how biological brains work, sprinkled with mathematical translations of actual cognitive processes. And right now they appear very limited, even though a lot of resources - physical and economic - have been poured into them. We don’t understand brains well enough to refine this oversimplification, and we don’t know much about the formation of the cognitive processes relevant to “intelligence” either. Yet you assert it’s a certainty that we will, that we’ll encode it in computers, and that the result will have a bunch of properties of current software: easily copyable and editable (which the human-like intelligences we know are not at all), not requiring more power than the Sun outputs (true of humans too, but they’re completely different physical systems), and so on.

              The same arguments you’re making could have been made in 1969, after the Moon landing, to claim that the human race would definitely colonize the whole solar system. “We know it’s possible, so it will happen at some point” is not how technology works: the problem also has to be profitable enough for enough industry to be thrown at it, and the result has to live up to profitability expectations. Right now no AI firm is even remotely profitable, and the resources in the Kuiper belt or the real estate on Mars aren’t enough of an argument that our rockets will ever reach them; our economies might well simply lose interest before then.

                • Perspectivist@feddit.uk · 18 hours ago

                I’m not claiming that AGI will necessarily be practical or profitable by human standards - just that, given enough time and uninterrupted progress, it’s hard to see how it wouldn’t happen.

                The core of my argument isn’t about funding or feasibility in the short term, it’s about inevitability in the long term. Once you accept that intelligence is a physical process and that we’re capable of improving the systems that simulate it, the only thing that can stop us from reaching AGI eventually is extinction or total collapse.

                So, sure - maybe it’s not 10 years away. Maybe not 100. But if humanity keeps inventing, iterating, and surviving, I don’t see a natural stopping point before we get there.

                  • gbzm@piefed.social · 15 hours ago

                  I get it - the core of your argument is “given enough time, it will happen,” which isn’t saying much: given infinite time, anything will happen. Even extinction and total collapse aren’t real stopping points then, since with infinite time a thinking computer will just emerge fully formed from quantum fluctuations.

                  But you’re voicing it as though it’s a certain direction of human technological progress, which is frankly untrue. You’ve concocted one scenario for technological progress by extrapolating from its current state, and you present it as a certainty - but anyone can do the same for equally credible scenarios without AGI. For instance: if the only way to avoid total collapse is to stabilize energy consumption and demographic growth, and we somehow manage it, and if making rocks think costs 10^20 W and the entire world’s labour, then it will not ever happen, in any meaningful sense of the word “ever”.
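                  (For scale, with rough but standard figures: a human brain runs on about 20 W, and all the sunlight intercepted by Earth totals about 1.7 × 10^17 W - so a hypothetical 10^20 W thinking machine would need roughly 10^20 / (1.7 × 10^17) ≈ 600 Earths’ worth of sunlight.)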

                  PS - to elaborate a bit on that “meaningful sense of the word ever”: I don’t want to nitpick, but some timescales do make asteroid impacts irrelevant. The Sun will engulf the Earth in about 5 billion years, and then there’s the heat death of the universe. In computing, timescales of millions of years pop up here and there for problems that feel like they should be easy.
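                  (A concrete example of that last point, with numbers of my own: brute-forcing a 128-bit key at an optimistic 10^18 guesses per second takes 2^128 / 10^18 ≈ 3.4 × 10^20 seconds - on the order of 10^13 years, roughly a thousand times the age of the universe.)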

                    • Perspectivist@feddit.uk · 14 hours ago

                    In my view, we’re heavily incentivized to develop AGI because of the enormous potential benefits - economic, scientific, and military. That’s exactly what worries me. We’re sprinting toward it without having solved the serious safety and control problems that would come with it.

                    I can accept that the LLM approach might be a dead end, or that building AGI could be far harder than we think. But to me, that doesn’t change the core issue. AGI represents a genuine civilization-level existential risk. Even if the odds of it going badly are small, the stakes are too high for that to be comforting.

                    Given enough time, I think we’ll get there - whether that’s in 2 years or 200. The timescale isn’t the problem; inevitability is. And frankly, I don’t think we’ll ever be ready for it. Some doors just shouldn’t be opened, no matter how curious or capable we become.

            • m532@lemmygrad.ml · 1 day ago

            What does replicating humans have to do with the singularity?

            I’d argue the Industrial Revolution was the singularity. And if it wasn’t that, it would be computers.