Clinicallydepressedpoochie@lemmy.world to Showerthoughts@lemmy.world · edited · 1 month ago

If AI was going to advance exponentially I'd have expected it to take off by now.
justOnePersistentKbinPlease@fedia.io · 1 month ago

And the single biggest bottleneck is that none of the current AIs “think”. They. Are. Statistical. Engines.
themurphy@lemmy.ml · 1 month ago

And it’s pretty great at it. AI’s greatest use case is not LLMs; people treat it like that because it’s the only thing we can relate to. AI is so much better at many other tasks.
moonking@lemy.lol · 1 month ago

Humans don’t actually think either; we’re just electricity jumping across nearby neural connections that formed through repeated association. Add to that the absence of free will, and you start to see how “think” is an immeasurable metric.
YesButActuallyMaybe@lemmy.ca · 1 month ago

Markov chains with extra steps
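For readers unfamiliar with the quip: a bare Markov chain really is just next-word statistics, picking each word based only on what tended to follow the current one. A minimal sketch (toy corpus and function names invented for illustration):

```python
import random
from collections import defaultdict

# Build a bigram table: each word maps to the list of words that
# followed it in the training text (a tiny toy corpus here).
corpus = "the cat sat on the mat and the cat saw the dog".split()
table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)

def generate(start, n, seed=0):
    """Pick each next word at random from the words that followed
    the current word in the corpus -- pure statistics, no 'thinking'."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        followers = table.get(out[-1])
        if not followers:  # dead end: word never had a successor
            break
        out.append(rng.choice(followers))
    return " ".join(out)

print(generate("the", 5))
```

An LLM replaces the lookup table with a neural network conditioned on a long context window, which is where the “extra steps” come in, but the output is still a sample from a learned next-token distribution.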
Caveman@lemmy.world · 30 days ago

How closely do you need to model a thought before it becomes the real thing?
justOnePersistentKbinPlease@fedia.io · 30 days ago

Need it to not degrade exponentially when AI-generated content is fed back in. Need creativity to be more than random-chance deviations from the statistically average result of a mostly stolen dataset taken from actual humans.
Xaphanos@lemmy.world · 1 month ago

You’re not going to get an argument from me.
daniskarma@lemmy.dbzer0.com · 29 days ago

Maybe we are statistical engines too. When I hear people talk, they are also just repeating the most common sentences they’ve heard elsewhere anyway.
Same