cm0002@piefed.world to LocalLLaMA@sh.itjust.works · English · 16 days ago
ollama 0.11.9 Introducing A Nice CPU/GPU Performance Optimization (www.phoronix.com)
afaix@lemmy.world · 15 days ago
Doesn’t llama.cpp have a -hf flag to download models from huggingface instead of doing it manually?
panda_abyss@lemmy.ca · 15 days ago
It does, but I’ve never tried it; I just use the hf CLI.
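For anyone who would rather script the download than use the CLI, here is a minimal sketch using the huggingface_hub Python package; the repo_id and filename below are placeholder examples, not a recommendation, so swap in whatever GGUF you actually want before pointing llama.cpp at it.

```python
# Minimal sketch: download a GGUF file from Hugging Face, then pass the
# resulting local path to llama.cpp (e.g. llama-cli or llama-server -m <path>).
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="ggml-org/gemma-3-1b-it-GGUF",  # placeholder example repo
    filename="gemma-3-1b-it-Q4_K_M.gguf",   # placeholder example quant file
)
print(model_path)  # use this path with llama.cpp's -m flag
```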