That 94% does not include AI GPUs at all. Just ‘pro’ cards like the RTX Pro, and gaming ones.
The vast majority of inference/training is done on top-end SXM GPUs like the H200, MI325X and such, and sometimes on server-only PCIe cards like the L40. But the statistic you’re quoting is for ‘Add-in boards,’ which doesn’t include this server stuff.
There are tinkerers who run ML stuff locally on desktop GPUs, but it’s a pretty small market, and honestly those cards are useless for many experiments because they don’t have enough VRAM.
The point I’m making is that ‘AI GPUs’ don’t skew this statistic much; it’s gamers pushing that 94% market share.
Ah ok, so specific models. Gotcha, thanks