petsoi@discuss.tchncs.de to Linux@lemmy.ml · 3 days ago — Running Generative AI Models Locally with Ollama and Open WebUI - Fedora Magazine (fedoramagazine.org)
MIXEDUNIVERS@discuss.tchncs.de · 3 days ago — I tried it on Fedora, but I have a Radeon 6700 XT and it only ran on the CPU. I'll wait until official ROCm support reaches my older model.
lelgenio@lemmy.ml · 3 days ago — Ollama runs on the 6700 XT, but you need to add an environment variable for it to work… I just don’t remember what it was, and I’m away from my computer right now.
passepartout@feddit.org · 3 days ago — I have the same setup; you have to add the line `Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"` for that specific GPU to the ollama.service file.
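For readers with the same card: the usual way to add that line without editing the packaged unit file directly is a systemd drop-in. A minimal sketch (the drop-in path and contents are the standard systemd convention, not something specified in the thread; `HSA_OVERRIDE_GFX_VERSION=10.3.0` makes ROCm treat the RX 6700 XT's gfx1031 chip as the supported gfx1030 target):

```shell
# Open a drop-in editor for the Ollama service; systemd writes the result to
# /etc/systemd/system/ollama.service.d/override.conf, which survives package updates.
sudo systemctl edit ollama.service

# In the editor, add:
#   [Service]
#   Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"

# Apply the change and restart the service:
sudo systemctl daemon-reload
sudo systemctl restart ollama.service
```

After restarting, the service logs (`journalctl -u ollama.service`) should show the GPU being picked up via ROCm instead of falling back to the CPU.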