• sip@programming.dev
    4 hours ago

    they don’t check. you gotta think in statistical terms.

    based on the previously input words (tokens actually, but I’ll use words for the sake of simplicity), which are the system prompt + user prompt, the LLM generates a list of the most likely next words, then picks one from the top few. How far down the list it reaches for less likely words depends on the temperature setting. Then the next word, and the next, etc., each time looking back at everything generated so far.
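
    here’s a toy version of that sampling step in python. the words and scores are made up; a real LLM scores tens of thousands of tokens at every step:

    ```python
    # toy version of one sampling step, with made-up words and scores
    import math
    import random

    def sample_next_word(logits, temperature=1.0):
        # lower temperature sharpens the distribution (almost always the
        # top word), higher temperature makes lower words more likely
        scaled = [score / temperature for word, score in logits]
        # softmax: turn scores into probabilities
        biggest = max(scaled)
        weights = [math.exp(s - biggest) for s in scaled]
        words = [word for word, score in logits]
        # pick one word at random, weighted by probability
        return random.choices(words, weights=weights, k=1)[0]

    # made-up scores for the word after "the cat sat on the"
    logits = [("mat", 5.0), ("sofa", 3.5), ("roof", 2.0), ("moon", 0.5)]
    print(sample_next_word(logits, temperature=0.7))
    ```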

    I haven’t checked what the reasoning step in reasoning models actually does, but I assume it just expands the user prompt to fill in stuff the LLM thinks the user was too lazy to type out, then works on the final answer.
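
    if that guess is right, it’d look something like this toy sketch, where llm_generate is a made-up stand-in, not any real API:

    ```python
    # sketch of my guess about the reasoning step: one hidden pass whose
    # output gets appended to the context before the visible answer
    def llm_generate(prompt):
        # stand-in for a real model call
        return f"<tokens continuing: {prompt!r}>"

    def reasoning_answer(system_prompt, user_prompt):
        # pass 1: hidden "thinking" text that fills in details the model
        # guesses the user left out
        thinking = llm_generate(system_prompt + "\n" + user_prompt + "\nthink it through:")
        # pass 2: the hidden text joins the context, and plain next-word
        # prediction writes the answer the user actually sees
        return llm_generate(system_prompt + "\n" + user_prompt + "\n" + thinking + "\nanswer:")

    print(reasoning_answer("you are a helpful assistant.", "why is the sky blue?"))
    ```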

    so basically it’s like tapping on your phone keyboard’s next word prediction, over and over.

    • zeca@lemmy.eco.br
      60 minutes ago

      The chatbots are not just LLMs though. They run scripts in which some steps are queries to an LLM.
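
      Roughly like this toy sketch, where llm_query and search_web are made-up stand-ins, not any real chatbot’s actual pipeline:

      ```python
      # toy "script around an LLM"; every function is a made-up stand-in
      def llm_query(prompt):
          # stand-in for a real model call (e.g. an HTTP request to an API)
          return f"<model output for: {prompt!r}>"

      def search_web(query):
          # stand-in for a non-LLM step, e.g. hitting a search engine
          return f"<search results for: {query!r}>"

      def run_chatbot(user_prompt):
          # step 1: an LLM query to decide whether outside info is needed
          plan = llm_query("does answering this need a web search? " + user_prompt)
          # step 2: ordinary code branches on that and maybe calls a tool
          context = search_web(user_prompt) if "yes" in plan.lower() else ""
          # step 3: a final LLM query with whatever was gathered stitched in
          return llm_query("context: " + context + "\nanswer: " + user_prompt)

      print(run_chatbot("who won the 1998 world cup?"))
      ```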