• Perspectivist@feddit.uk · 1 day ago

    An asteroid impact not being imminent doesn’t really make me feel any better when the asteroid is still hurtling toward us. My concern about AGI has never been about the timescale - it’s the fact that we know it’s coming, and almost no one seems to take the repercussions seriously.

    • justOnePersistentKbinPlease@fedia.io · 1 day ago

      LLMs are a dead end to AGI. They do not reason or understand in any way - they only mimic reasoning and understanding.

      It's essentially the same technology as the first chatbots 20 years ago - today's LLMs just have models approaching a trillion parameters instead of a few thousand.

        • justOnePersistentKbinPlease@fedia.io · 24 hours ago

          They are the closest things to AI that we have. The so-called LRMs fake their reasoning.

          They do not think or reason. We are at the very best decades away from anything resembling an AI.

          The best LLMs could amount to is a Mass Effect (1)-style VI (a "virtual intelligence"), and even that is still more than a decade away.

          • Perspectivist@feddit.uk · 19 hours ago

            The chess opponent on Atari is AI - we’ve had AI systems for decades.
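
            For a sense of what that classic kind of game AI looks like - a minimal sketch of exhaustive game-tree search in the spirit of early game programs, hypothetical and not the actual Atari chess code - minimax picks a move by evaluating every reachable future position:

            ```python
            def minimax(state, maximizing, get_moves, apply_move, score):
                """Exhaustive game-tree search: returns (value, best move) for the side to move."""
                moves = get_moves(state)
                if not moves:
                    return score(state, maximizing), None  # terminal position
                best_move = None
                best_value = float("-inf") if maximizing else float("inf")
                for move in moves:
                    value, _ = minimax(apply_move(state, move), not maximizing,
                                       get_moves, apply_move, score)
                    if (maximizing and value > best_value) or (not maximizing and value < best_value):
                        best_value, best_move = value, move
                return best_value, best_move

            # Toy demo game: take 1 or 2 counters from a pile; whoever takes the last one wins.
            value, move = minimax(
                5, True,
                get_moves=lambda n: [1, 2] if n > 0 else [],
                apply_move=lambda n, m: n - m,
                score=lambda n, maximizing: -1 if maximizing else 1,  # the side to move has lost
            )
            print(value, move)  # 1 2 -> first player wins by taking 2
            ```

            Real chess programs bound the depth and add a heuristic evaluation, but the principle - search, not understanding - is the same.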

            An asteroid impact being decades away doesn’t make it any less concerning. My worries about AGI aren’t about the timescale, but about its inevitability.

            • Sconrad122@lemmy.world · 8 hours ago

              Decades is plenty of time for society to experience a collapse or major setback that prevents AGI from being discovered within the lifetime of any human currently alive. Whether that comes from war, famine, or natural phenomena induced by man-made climate change, we have plenty of opportunities as a species to take the offramp and never "discover" AGI. This comment is brought to you by optimistic existentialism.

      • m532@lemmygrad.ml · 1 day ago

        No, the first chatbots didn’t have neural networks inside. They didn’t have intelligence.

        • booty [he/him]@hexbear.net · 24 hours ago

          LLMs aren't intelligence. We've had similar technology in more primitive forms for a long time, like Markov chains. LLMs are hyper-specialized at passing a Turing test but aren't good at much of anything else.
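
          To make that comparison concrete, here's a minimal word-level Markov chain (the toy corpus is made up for illustration): it "generates" text purely by replaying observed next-word frequencies - no understanding anywhere, just the same basic move LLMs make at vastly larger scale:

          ```python
          import random
          from collections import defaultdict

          def build_chain(text):
              """Map each word to the list of words observed to follow it."""
              chain = defaultdict(list)
              words = text.split()
              for current, nxt in zip(words, words[1:]):
                  chain[current].append(nxt)
              return chain

          def generate(chain, start, length=10):
              """Walk the chain, sampling each next word from observed successors."""
              word, output = start, [start]
              for _ in range(length):
                  successors = chain.get(word)
                  if not successors:
                      break
                  word = random.choice(successors)
                  output.append(word)
              return " ".join(output)

          corpus = "the cat sat on the mat and the cat slept on the mat"
          print(generate(build_chain(corpus), "the"))  # e.g. "the cat slept on the mat and the mat"
          ```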

    • gbzm@piefed.social · 1 day ago

      At the risk of sounding like I’ve been living under a rock, how do we know it’s coming, exactly?

      • Aceticon@lemmy.dbzer0.com · 9 hours ago

        Intelligence is possible, as proven by its existence in the Biological world.

        So it makes sense that as Technology evolves we become able to emulate the Biological World in that too, just as we have in so many other things, from flight to artificial hearts.

        However, there is no guarantee that Mankind will not go extinct before that point is reached, nor is there any guarantee that our Technological progression won't come to an end (though at the moment we're near a peak in terms of speed of Technological progression). So it is indeed true that we don't know it's coming: we as a species might not be around long enough to make it happen, or we might hit a ceiling in our Technological development before our technology is capable of creating AGI.

        Beyond the "maybe one day" view, I personally think that believing AGI is close is total pie-in-the-sky fantasy: the supposed path to it, LLMs, turned out to be a dead end decorated with a lot of bullshit to make it seem otherwise. What the underlying technology does really well - pattern recognition and reproduction - has turned out not to be enough by itself to add up to intelligence, and we don't actually have any specific technological direction in the pipeline (that I know of) that can crack that problem.

      • Perspectivist@feddit.uk · 1 day ago

        We’ll keep incrementally improving our technology, and unless we - or some outside force - destroy us first, we’ll get there eventually.

        We already know that general intelligence is possible, because humans are generally intelligent. There’s no reason to assume that what our brains do couldn’t be replicated artificially.

        At some point, unless something stops us, we’ll create an artificially intelligent system that’s as intelligent as we are. From that moment on, we’re no longer needed to improve it further - it will make a better version of itself, which will make an even better version, and so on. Eventually, we’ll find ourselves in the presence of something vastly more intelligent than us - and the idea of “outsmarting” it becomes completely incoherent. That’s an insanely dangerous place for humanity to end up in.

        We're raising a tiger cub. It's still small and cute today, but it's only a matter of time until it gets big and strong.

        • gbzm@piefed.social · 1 day ago

          What if human levels of intelligence require building something so close in its mechanisms to a human brain that it's indistinguishable from a brain, or from a complete physical and chemical simulation of one? What if the input-output "training" required to make it work in any comprehensible way is so close in fullness and complexity to the human sensory system interacting with the world that it ends up indistinguishable from a human body - or a complete physical simulation of a body, whole environment included?

          There's no reason to assume our brains or their mechanisms can't be replicated artificially, but there's also no reason to assume this can be made practical, or that just because we can build such a thing, it can self-replicate at no material cost or refine its own design. Humans have human-level intelligence, and they've never successfully created anything as intelligent as themselves.

          I'm not saying it won't happen, mind you - I'm just saying it's not a certainty. Plenty of things are impossible, or sufficiently impractical that humans - or any species - may never build them.

          • thevoidzero@lemmy.world · 1 day ago

            This is what I think might be the more reasonable approach. Even with very strong reasoning capabilities, we might have to train an AGI the way we raise children. That will take time: it would learn by interacting with its environment, not just by reading a mass of internet data that comes from varied sources and doesn't point in any coherent direction about how someone should live or act.

            That path might produce AGIs that are closer to humans in how their behavior varies, compared to rapid training on the same data - because diversity of thought and discussion is what leads to better outcomes in many situations.

          • m532@lemmygrad.ml · 1 day ago

            This is like that backward reasoning of "only planets that are 100% exactly like Earth can create life, because the only life we know of is on Earth."

      • we are all@crazypeople.online · 1 day ago

        Well, we often equate predictions about AGI with ASI and a singularity event, which has been predicted for decades based on several trends in computing: advancing hardware, software, throughput, and of course neuroscience.

        ASI is more a prediction about capabilities: even imitated intelligence, with enough presence, could give rise to tangible, real higher intelligence after a few iterations, then continue making improvements on its own. Once those improvements are beyond human capability, we have our singularity.

        Back to AGI: it seems achievable by mimicking the processing power of a human mind, which isn't currently possible, but we are steadily working toward it and have achieved some measure of success. We may decide that certain aspects of artificial intelligence have been reached before that point, but IMO it feels like we're only a few years away.

        • gbzm@piefed.social · 1 day ago

          Alright. I'd already seen that stuff, and I've never seen really convincing arguments for these predictions beyond pretty sci-fi-esque speculation.
          I'm not at all convinced we have anything even remotely resembling "mimicking the processing power of a human mind" - neither a material simulation of a complete brain, with the multisensory interactions with an environment needed to let it grow into a functioning mind, nor the party tricks we tend to call AI these days (which boil down to Chinese Rooms built from thousands of GPUs' worth of piecewise linear regressions, and which are unable to reason or even generalize beyond their training distributions, according to the source).
          I guess embedding cultured neurons on microchips could maybe make new things possible, but even then I wouldn't be surprised if making a human-level intelligence ended up requiring building an actual whole-ass human, or at least most of one. Seeing where we are with that stuff, I'd sooner surmise a timescale of decades to centuries, if at all - which could well be longer than climate change leaves us with the levels of industry required to even attempt it.
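
          To unpack the "piecewise linear regressions" quip - a toy sketch under that framing, not a model anyone ships - a one-hidden-layer network of ReLU units is literally a piecewise linear function of its input: each unit is either on or off, so between the "kinks" the output is exactly affine:

          ```python
          import numpy as np

          rng = np.random.default_rng(0)
          W1, b1 = rng.normal(size=(4, 1)), rng.normal(size=4)  # hidden layer: 4 ReLU units
          W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)  # linear output layer

          def relu_net(x):
              """One-hidden-layer ReLU network: piecewise linear in x."""
              h = np.maximum(0.0, W1 @ np.atleast_1d(x) + b1)  # each unit passes or zeroes its input
              return (W2 @ h + b2).item()

          # Finite-difference slopes: identical within each linear region, so
          # 4 hidden units yield at most 5 distinct slope values across all x.
          xs = np.linspace(-3, 3, 13)
          slopes = [(relu_net(x + 1e-4) - relu_net(x)) / 1e-4 for x in xs]
          print(sorted({round(s, 3) for s in slopes}))
          ```

          More layers and units just multiply the number of linear pieces - that's the sense of the quip above.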

          • Perspectivist@feddit.uk · 1 day ago

            Can you think of a reason why we wouldn’t ever get there? We know it’s possible - our brains can do it. Our brains are made of matter, and so are computers.

            The timescale isn’t the important part - it’s the apparent inevitability of it.

            • gbzm@piefed.social · 1 day ago

              I've given reasons. We can imagine Dyson Spheres, and we know they're possible. That doesn't mean we can actually build them or ever will be able to.

              The fact that our brains can do things we don't even understand doesn't necessarily mean rocks can do them too. If it somehow requires the complexity of biology, then depending on how much of that complexity is needed, it could just end up meaning creating a fully fledged human - which we can already do, and it hasn't caused a singularity, because creating a human costs resources even when we do it the natural way.

              • Perspectivist@feddit.uk · 19 hours ago

                I don’t see any reason to assume substrate dependence either, since we already have narrowly intelligent, non-biological systems that are superhuman within their specific domains. I’m not saying it’s inconceivable that there’s something uniquely mysterious about the biological brain that’s essential for true general intelligence - it just seems highly unlikely to me.

                • gbzm@piefed.social · 15 hours ago

                  The thing is, I'm not assuming substrate dependence. I'm not saying there's something uniquely mysterious about the biological brain. I'm saying that what we know about "intelligence" right now is that it's an emergent property observed in brains that have been interacting with a physical, natural environment through complex sensory feedback loops, materialized by the rest of the human body. That is substrate-independent - but the only thing rocks can do for sure is simulate this whole system, good simulations of complicated systems are not an easy feat at all, and it's not at all certain that we'll ever be able to do it without it requiring too many resources to be worth the hassle.

                  The things we've built that most closely resemble human intelligence in computers are very drastic oversimplifications of how biological brains work, sprinkled with mathematical translations of actual cognitive processes. And right now they appear very limited, even though a lot of resources - physical and economic - have been poured into them. We don't understand brains well enough to refine this simplification much, and we don't know much about the formation of the cognitive processes relevant to "intelligence" either. Yet you assert it's a certainty that we will, that we'll encode it in computers, and that the result will have a bunch of the properties of current software: easily copyable and editable (which the human-like intelligences we know are not at all), not requiring more power than the Sun outputs (which humans don't, but they're completely different physical systems), etc.

                  The same arguments you're making could have been made in 1969, after the Moon landing, to claim the human race would definitely colonize the whole solar system. "We know it's possible, so it will happen at some point" is not how technology works: it also needs to be profitable enough for enough industry to be thrown at the problem, and the result has to live up to profitability expectations. Right now no AI firm is even remotely profitable, and the resources in the Kuiper belt or the real estate on Mars aren't enough of an argument that our rockets can reach them - there's no telling that they ever will; our economies might well simply lose interest before then.

                  • Perspectivist@feddit.uk · 15 hours ago

                    I’m not claiming that AGI will necessarily be practical or profitable by human standards - just that, given enough time and uninterrupted progress, it’s hard to see how it wouldn’t happen.

                    The core of my argument isn’t about funding or feasibility in the short term, it’s about inevitability in the long term. Once you accept that intelligence is a physical process and that we’re capable of improving the systems that simulate it, the only thing that can stop us from reaching AGI eventually is extinction or total collapse.

                    So, sure - maybe it’s not 10 years away. Maybe not 100. But if humanity keeps inventing, iterating, and surviving, I don’t see a natural stopping point before we get there.

              • m532@lemmygrad.ml · 1 day ago

                What does replicating humans have to do with the singularity?

                I’d argue the industrial revolution was the singularity. And if it wasn’t that, it would be computers.

    • FortifiedAttack [any]@hexbear.net · 1 day ago

      Except there is no such asteroid and techbros have driven themselves into a frenzy over a phantom.

      The real threat to humanity is runaway climate change, which techbros conveniently don’t give a single fuck about, since they use gigawatts of power to train bigger and bigger models with further and further diminishing returns.

    • Lugh@futurology.today · 1 day ago

      Yes, and there is also the possibility that it could be upon us quite suddenly. It may just take one fundamental breakthrough to make the leap from what we have currently to AGI, and once that breakthrough is achieved, AGI could arrive quite quickly. It may not be a linear process of improvement, where we reach the summit in many years.

    • NuraShiny [any]@hexbear.net · 1 day ago

      Do we know it's coming? By what evidence? I don't see it.

      As far as I can tell, we're more likely to discover how to genetically uplift other life to intelligence than we are to make computers actually think.

          • Perspectivist@feddit.uk · 1 day ago

            We’ll keep incrementally improving our technology, and unless we - or some outside force - destroy us first, we’ll get there eventually.

            We already know that general intelligence is possible, because humans are generally intelligent. There’s no reason to assume that what our brains do couldn’t be replicated artificially.

            At some point, unless something stops us, we’ll create an artificially intelligent system that’s as intelligent as we are. From that moment on, we’re no longer needed to improve it further - it will make a better version of itself, which will make an even better version, and so on. Eventually, we’ll find ourselves in the presence of something vastly more intelligent than us - and the idea of “outsmarting” it becomes completely incoherent. That’s an insanely dangerous place for humanity to end up in.

            We're raising a tiger cub. It's still small and cute today, but it's only a matter of time until it gets big and strong.

            • NuraShiny [any]@hexbear.net · 1 day ago

              There are limits to technology. Why would we assume infinite growth of technology when nothing else we have is infinite? It's not like the wheel is getting rounder over time - we've made it out of better materials, but it still has limits to its utility. All our computers are computing 1s and 0s, and adding more of those per second doesn't seem to do anything to make them smarter.

              I'd worry about ecological collapse a lot more than this, that's for sure. That's something the current shitty non-smart AIs can achieve if they keep building data centers and drinking our water.

              • Perspectivist@feddit.uk · 19 hours ago

                I don’t see any reason to assume humans are anywhere near the far end of the intelligence spectrum. We already have narrow-intelligence systems that are superhuman in specific domains. I don’t think comparing intelligence to something like a wheel is fair - there are clear geometric limits to how round a wheel can be, but I’ve yet to hear any comparable explanation for why similar limits should exist for intelligence. It doesn’t need to be infinitely intelligent either - just significantly more so than we are.

                Also, as I said earlier - unless some other catastrophe destroys us before we get there. That doesn’t conflict with what I said, nor does it give me any peace of mind. It’s simply my personal view that AGI or ASI is the number one existential risk we face.

                • NuraShiny [any]@hexbear.net · 2 hours ago

                  Okay, granted. But if we're on the stupid side of the equation, why would we be able to make something smarter than us? One doesn't follow from the other.

                  I also disagree that we've made anything that is actually intelligent. A computer can do math billions of times faster than a human can, but doing math isn't smarts. Without human intervention and human input, the computer would just idle and do nothing. That is not intelligence. At no point has code shown the ability to self-improve and grow, and the current brand of shit-AI is no different. They call what they do to it "training," but it's really just telling it how to weight the reams of data it's ingesting - and without humans it wouldn't even do that.

                  Ravens and octopuses can solve quite complex puzzles. Are they intelligent? What's even the cutoff for intelligence? We don't have a good definition of intelligence that encompasses everything. People cite IQ, which is obviously bunk. People try to split it into several types of intelligence - social, logical, and so on. If we don't even have an objective definition of intelligence, I'm not worried about us creating it from whole cloth.

              • m532@lemmygrad.ml · 1 day ago

                Technology is knowledge, and we are millions of years away from reaching the end of possible knowledge.

                Also, humans already exist, so we know it's possible.