• brucethemoose@lemmy.world
    edited · 7 hours ago

    Kobold.cpp is fantastic. Sometimes there are more optimal ways to squeeze models into VRAM (depends on the model/hardware), but TBH I have no complaints.

    I would recommend croco.cpp, a drop-in fork: https://github.com/Nexesenex/croco.cpp

    It supports the more advanced quantization schemes of ik_llama.cpp. Specifically, you can get really fast performance offloading MoEs, and you can also use much higher-quality quantizations, with even ~3.2bpw being relatively low loss. You’d have to make the quants yourself, but it’s quite doable… just poorly documented, heh.
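    To give a feel for what ~3.2bpw buys you, here’s a rough back-of-the-envelope sketch (my own illustration, not anything from croco.cpp or ik_llama.cpp) of how bits-per-weight translates to file/VRAM size. Real GGUF files add metadata and mix precisions per tensor, so treat the numbers as ballpark:

    ```python
    # Rough sketch: estimate the size of a model quantized at a given
    # bits-per-weight (bpw). Real quants are mixed-precision and carry
    # metadata, so actual files run somewhat larger.
    def est_size_gb(params_billions: float, bpw: float) -> float:
        # params * bits-per-weight / 8 bits-per-byte, reported in GB
        return params_billions * 1e9 * bpw / 8 / 1e9

    # A 70B model at ~3.2 bpw lands around 28 GB, versus ~37 GB at a
    # typical ~4.25 bpw 4-bit quant -- that gap is what lets these quants
    # squeeze into VRAM that 4-bit can't.
    print(round(est_size_gb(70, 3.2), 1))   # -> 28.0
    print(round(est_size_gb(70, 4.25), 1))  # -> 37.2
    ```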

    The other warning I’d have is that some of its default sampling presets are funky, if only because they’re from the old days of Pygmalion 6B and Llama 1/2. Newer models like much, much lower temperature and rep penalty.
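    As a concrete sketch of what “lower” means here (the specific numbers are my own rough suggestions, not official presets), this is the kind of payload you might send to KoboldCpp’s local API for a modern model:

    ```python
    import json

    # Hypothetical sampler settings for a modern (Llama 3-era) model.
    # Old Pygmalion-era presets often shipped temperature around 1.0+ and
    # rep_pen around 1.1-1.2; newer models usually behave better with less.
    payload = {
        "prompt": "Once upon a time",
        "max_length": 200,
        "temperature": 0.7,  # much lower than old-preset values
        "rep_pen": 1.05,     # near-neutral repetition penalty
        "top_p": 0.95,
    }

    # KoboldCpp listens on port 5001 by default; POSTing this JSON to
    # http://localhost:5001/api/v1/generate against a running instance
    # would generate text with these samplers.
    print(json.dumps(payload))
    ```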

    • hendrik@palaver.p3x.de
      7 hours ago

      Thanks for the random suggestion! Installed it already. Sadly, as a drop-in replacement it doesn’t provide any speedup on my old machine; it’s exactly the same number of tokens per second… Guess I have to learn about ik_llama.cpp and pick a different quantization of my favourite model.

      • brucethemoose@lemmy.world
        edited · 3 hours ago

        What model size/family? What GPU? What context length? There are many different backends with different strengths, but I can tell you the optimal way to run it and the quantization you should run with a bit more specificity, heh.