• hendrik@palaver.p3x.de · 6 days ago

    Thanks for the numbers. Btw, I think an NPU can’t run large language models in the first place. They’re meant for small, specific tasks like blurring the background in video conferences or assisting with speech recognition. They typically only have some tens or hundreds of megabytes of memory available, so an LLM/chatbot won’t fit. The main thing that makes LLM inference faster is memory (RAM) bandwidth and speed.
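
    To see why bandwidth dominates: during decoding, every generated token requires reading essentially all model weights from memory once, so tokens/s is roughly capped at bandwidth divided by model size. Here's a minimal back-of-envelope sketch; the bandwidth figures and the 7B/4-bit model are illustrative assumptions, not measurements:

    ```python
    # Rough rule of thumb for memory-bound LLM decoding:
    # each token requires streaming all weights once, so
    # tokens/s <= memory bandwidth / model size in bytes.
    def tokens_per_second(bandwidth_gb_s: float, params_billion: float,
                          bytes_per_param: float) -> float:
        """Upper bound on decode speed when weight reads dominate."""
        model_bytes = params_billion * 1e9 * bytes_per_param
        return bandwidth_gb_s * 1e9 / model_bytes

    # Assumed example: a 7B model quantized to ~4 bits (0.5 bytes/param)
    # on dual-channel DDR5 at ~90 GB/s vs. a GPU at ~1000 GB/s.
    print(f"CPU RAM:  {tokens_per_second(90, 7, 0.5):.1f} tok/s")   # ~26 tok/s
    print(f"GPU VRAM: {tokens_per_second(1000, 7, 0.5):.1f} tok/s") # ~286 tok/s
    ```

    Same model, same compute workload: the ~10x bandwidth gap translates almost directly into a ~10x difference in generation speed, which is why adding an NPU doesn't help here.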