• rem26_art@fedia.io · 21 hours ago · +35 / −1

    “let me spread some fear and interest in my product so I can keep getting people to invest in my company or else we’re out of cash”

  • minorkeys@lemmy.world · 17 hours ago · +8 / −1

    When your innovation is mass manipulation, I don’t think it should be allowed. Maybe that’s just me.

  • WanderingThoughts@europe.pub · 18 hours ago · +10

    Meanwhile in the EU, AI companies are screeching about regulations that restrict AI and its risks, because they hinder innovation. Pick a lane, dude.

  • AmbitiousProcess (they/them)@piefed.social · 21 hours ago · +19

    “Humanity needs to see how awesome and powerful and simultaneously extremely dangerous AI is,” says the guy whose business is based on building AI models and developing “safety” tools for them.

  • OpenStars@piefed.social · 22 hours ago · +23

    Humanity is about to be handed almost unimaginable power,

    Btw, the minimum safe distance to stand from a nuclear detonation is six feet. Also, it’s safe to microwave aluminum, and to eat razor blades in your pizza.

    Hrm, somehow what this guy is saying does not match my experiences with AI… oh well, I’m sure it is I who must be wrong. After all, he says:

    AI is doing 90% of the computer programming to build Anthropic’s products, including its own AI

    so how could that possibly go wrong?

    • arcterus@piefed.blahaj.zone · 21 hours ago · +11

      AI is doing 90% of the computer programming to build Anthropic’s products, including its own AI

      This explains a lot tbh.

  • iltoroargento@lemmy.sdf.org · 22 hours ago · +18

    For a second, I was optimistic and thought they were going to highlight the economic, environmental, and misinformation concerns with the ensloppification engine. Then, I read the tagline…

    • photonic_sorcerer@lemmy.dbzer0.com · 12 hours ago · +2 · edited

      “It is somewhat awkward to say this as the CEO of an AI company, but I think the next tier of risk is actually AI companies themselves,” Amodei warns after the passage about authoritarian governments. “AI companies control large datacenters, train frontier models, have the greatest expertise on how to use those models, and in some cases have daily contact with and the possibility of influence over tens or hundreds of millions of users. … [T]hey could, for example, use their AI products to brainwash their massive consumer user base, and the public should be alert to the risk this represents. I think the governance of AI companies deserves a lot of scrutiny.”

    • pulsewidth@lemmy.world · 13 hours ago · +2

      Claude is AMAZING. The future is now! It’s actually very hard to trip it up with complex trick questions like “Spell ‘blueberry’” nowadays. Check out what it said when I asked it just now what tomorrow’s date is (I even gave this doctorate degree-level intellect a clue that it’s in 2026).

      [screenshot]

      I asked it to show its reasoning so I could help it to understand my complex query, and got this very insightful reasoning process.

      [screenshot]

      Reasoning…

      We are given: “What date of 2026 is it tomorrow?” Note: The current year is 2023, so the user is asking about a future date relative to an unknown “today” in 2026. However, without knowing the current date (in 2026) we cannot compute “tomorrow”. Thus, we must clarify that the answer depends on the current date. But note: The user might be asking this question on a specific day, so we can assume that the “today” in the question is the day when the question is being asked? However, the question is about 2026, which is in the future. Since we don’t have a specific reference date in 2026, we cannot give a specific answer. Therefore, the thought process should lead to a response that explains the dependency and perhaps provides a way to compute it if the current date were known.

      Let’s break down:

      1. The question: “What date of 2026 is it tomorrow?” - This implies that tomorrow is a day in the year 2026.
      2. However, the current date (the “today” from which we are counting tomorrow) is not provided. Without knowing today’s date in 2026, we cannot determine tomorrow’s date.
      3. Possibilities: a) If the user is asking this question in 2026, then we could use the current date. But the current year is 2023, so we are not in 2026. b) The user might be asking hypothetically, but without a reference, we cannot compute.
      4. Therefore, the answer must be conditional on knowing the current date.
      5. We can provide a general method: - Tomorrow’s date in 2026 is one day after the current date (which must be in 2026). - For example, if today is January 1, 2026, then tomorrow is January 2, 2026.
      6. But note: The question says “date of 2026”, so we are only concerned with dates in the year 2026. If today is December 31, 2026, then tomorrow would be January 1, 2027, which is not in 2026. So the question implies that tomorrow must also be in 2026.
      7. Therefore, the answer is only defined for dates in 2026 from January 1 to December 30 (because on December 31, tomorrow would be in 2027). So we must also note that if the current date is December 31, 2026, then tomorrow is not in 2026.
      8. Approach for the response: - Explain that without knowing the current date in 2026, we cannot give a specific answer. - Provide the general rule: tomorrow’s date is the next day in the calendar after the current date. - Note that the current date must be between January 1, 2026 and December 30, 2026 for tomorrow to also be in 2026. - If the user has a specific current date in mind, they can provide it and we can compute.
      9. However, note that the user might be asking this on a particular day (say, the day they are asking the question) but in 2026? That doesn’t make sense because we are in 2023. So we must clarify that we are in 2023 and 2026 is in the future.
      10. Alternative interpretation: The user might be asking for the date of tomorrow in the year 2026, meaning if today is the same day of the year in 2026 as it is now in 2023? But that is not a standard interpretation and the question does not specify.
      11. Conclusion: The question is ambiguous without a reference point. We must ask for the current date in 2026 or explain that we need that information.

      But note: The user might be asking this question on a specific day (say, June 15, 2023) and want to know what date it would be tomorrow in 2026? That is, the same day of the year? However, the question does not specify that. Let’s re-read: “What date of 2026 is it tomorrow?” This is ambiguous. It could mean: Option A: “If today is some day in 2026, what date is tomorrow (in 2026)?” -> requires knowing today in 2026. Option B: “What is tomorrow’s date (in 2023) and then what would that same day be in 2026?” -> but that would be the same day of the year, but the year changes. However, the question says “date of 2026”, so Option B would not be a date in 2026 if tomorrow in 2023 is in 2023. For example

      I pressed ‘Stop’ because I’m pretty sure the poor guy was in a logic loop, and I didn’t want to heat up the oceans any further with my diabolically-complex line of query.

    • TheTechnician27@lemmy.world · 19 hours ago · +3 / −2 · edited

      I mean… Yeah. Anyone who knows even the first thing about how an LLM works is going to tell you it’s not qualified to answer that. That niche functionality would need to be tacked onto the LLM as, say, a deterministic algorithm it could call and report the results from.

      In a world of plenty of valid arguments against widespread generative AI, you chose one that at best says “people need to be more educated about generative AI before using it” and at worst says “I need to be more educated about generative AI before campaigning against it”.

      • pulsewidth@lemmy.world · 13 hours ago · +2 / −1

        This is the ‘next level natural language technology’?

        Always some AI white knights in the comments, bravely telling people ‘you’re using AI wrong, you’re ignorant, you’re uneducated about generative AI’, as though this isn’t literally the first thing that the market-leader OpenAI suggests you use it for presently.

        [screenshot]

        • TheTechnician27@lemmy.world · 5 hours ago · +1 / −1 · edited

          bravely telling people ‘you’re using AI wrong.

          More like telling that person they lack such a bare minimum understanding of how an LLM works that it’s comical. This is as fucking stupid as somebody complaining that their band saw can’t trim their fingernails.

          literally the first thing that the market-leader OpenAI suggests you use it for presently

          “Quiz me on vocabulary”? Oh, yeah, you know, I remember all those vocabulary quizzes I had in school that asked: “How many times does the letter ‘t’ appear in ‘platitudinous’?” Oh, wait, no, it’s referring to things like meaning, usage, example sentences, etc. – actual vocabulary questions.

          I don’t use LLMs since I don’t find myself ever needing them, and you’ll find I don’t pull punches with them either, but since you’re whining that the placeholder text in the input box is misleading, I used it for a vocabulary question I would otherwise use Wiktionary for:

          Asking ChatGPT what the word "platitudinous" means. It answers correctly.

          Looks good to me, boss. Either you don’t understand what quizzing someone on vocabulary means or you assume the person is in kindergarten and needs to learn how to count the number of letters in a word.

          • pulsewidth@lemmy.world · 4 hours ago · +1 / −1

            Correct spelling is the fundamental component of words, and without words there is no vocabulary. Without understanding words, LLMs have absolutely no understanding of vocabulary. They can certainly spew out things they’ve tokenized and weighted from ingested inputs, though – like when people trick them into believing false definitions by simply repeating them as correct, thereby manipulating (or poisoning) the weighting. ChatGPT and other LLMs regularly fail to interpret common parts of vocabulary – e.g. idioms, word spellings, action-reaction consequences in a sentence. They’re fancy autocomplete, filled with stolen (and occasionally licensed) data.

            Sure seems like the problem isn’t that me or the other guy ‘don’t know how to use LLMs’, but rather that they keep getting sold as something they’re not.

            Congrats though, you just used a 100 billion dollar machine array to more or less output the exact content of a Wikipedia article - you really proved your point that it’s very good when you know what to ask it, and us plebs are just dumb at questions, or something 👍 https://en.wikipedia.org/wiki/Platitude

    • phoh@lemmy.ml · 20 hours ago · +2 / −2

      Claude says:

      There are 2 c’s in “ice cream” — one in “ice” and one in “cream.”