cm0002@programming.dev to LocalLLaMA@sh.itjust.works · English · 2 months ago
Running Local LLMs with Ollama on openSUSE Tumbleweed (news.opensuse.org)
brucethemoose@lemmy.world · edited · 2 months ago
One more thing: you don't have to get something shiny and new to speed LLMs up. Even if you have a 4-6GB GPU collecting dust somewhere, you can still use it to partially offload MoE models to great effect.
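The comment doesn't name a runtime, but the usual tool for this trick is llama.cpp directly (Ollama doesn't expose per-tensor placement). A minimal sketch, assuming a quantized MoE GGUF such as Qwen3-30B-A3B (the filename is hypothetical): `-ngl 99` offloads all layers to the GPU, then `--override-tensor` (`-ot`) pins the large expert tensors back to system RAM, so only the attention and shared weights need to fit in 4-6GB of VRAM.

```sh
# Sketch: partial MoE offload with llama.cpp (model filename is hypothetical).
# -ngl 99 sends all layers to the GPU; the -ot pattern then keeps the
# per-expert FFN tensors (".ffn_*_exps.") in system RAM, leaving only the
# small attention/shared tensors and KV cache in VRAM.
./llama-server \
  -m ./Qwen3-30B-A3B-Q4_K_M.gguf \
  -ngl 99 \
  -ot ".ffn_.*_exps.=CPU" \
  -c 8192
```

Generation stays reasonably fast because each token activates only a few experts, so the expert matmuls run fine on CPU while the GPU handles attention and the KV cache.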