• Dran@lemmy.world

That is correct, but you might be missing why this is useful. MoE models are great for CPU inference, which is considerably cheaper than GPU inference at scale: only a small fraction of the weights is active for each token, so per-token memory traffic stays low even though the whole model has to sit in RAM. The Qwen 30B-A3B MoE and the 8B dense model were widely considered similar in quality. If you have the VRAM, the 8B would be faster; if you don't, the 30B would be faster (as long as you had the ~19-22 GB of RAM it requires).
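
To put rough numbers on that, here's a back-of-envelope sketch. The bytes-per-parameter and bandwidth figures are my own assumptions (a Q4/Q5-ish GGUF quant, an 8-channel DDR4 server), not measurements:

```python
# Back-of-envelope MoE vs dense math for CPU inference.
# Assumed: ~0.6 bytes/param for a Q4/Q5-ish quant, ~200 GB/s
# peak memory bandwidth for an 8-channel DDR4 server.
BYTES_PER_PARAM = 0.6
MEM_BANDWIDTH_GBS = 200

def weights_ram_gb(total_params_b: float) -> float:
    """RAM to hold the quantized weights (KV cache/overhead extra)."""
    return total_params_b * BYTES_PER_PARAM

def tok_per_sec_ceiling(active_params_b: float) -> float:
    """Bandwidth-bound limit: each token streams the active weights once."""
    return MEM_BANDWIDTH_GBS / (active_params_b * BYTES_PER_PARAM)

# Dense 8B: all 8B params are read for every token.
print(f"8B dense: ~{weights_ram_gb(8):.0f} GB, ~{tok_per_sec_ceiling(8):.0f} tok/s ceiling")
# 30B-A3B MoE: 30B resident, only ~3B active per token.
print(f"30B MoE:  ~{weights_ram_gb(30):.0f} GB, ~{tok_per_sec_ceiling(3):.0f} tok/s ceiling")
```

So the MoE needs ~3-4x the RAM (~18 GB of weights plus KV cache, which lines up with the ~19-22 GB above) but touches well under half the bytes per token, which is why it wins on a CPU while the dense 8B wins once everything fits in fast VRAM.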

A very inexpensive used server with lots of memory channels but no GPU can do very cost-efficient inference in this scenario, and loads of people are asking for exactly that.
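
The memory-channel point is the crux: peak DRAM bandwidth scales roughly linearly with channel count. A quick sketch, assuming DDR4-3200 (DDR5 or more channels does better):

```python
# Peak bandwidth per DDR4 channel = transfer rate * 8-byte bus width.
# DDR4-3200: 3200 MT/s * 8 B = 25.6 GB/s per channel (theoretical).
def peak_bandwidth_gbs(channels: int, mt_per_s: int = 3200) -> float:
    return channels * mt_per_s * 8 / 1000

print(peak_bandwidth_gbs(2))   # ~51 GB/s  - typical desktop
print(peak_bandwidth_gbs(8))   # ~205 GB/s - used 8-channel Epyc/Xeon server
```

That ~4x bandwidth gap is most of the difference between a desktop crawling through a 30B MoE and a cheap used server running it at usable speeds.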