It’s no surprise that NVIDIA is gradually dropping support for older video cards, with the Pascal (GTX 10xx) GPUs most recently getting axed. What’s more surprising is the terrible way t…
Even now, CUDA is the gold standard for data science / ML / AI related research and development. AMD is slowly bringing around their ROCm platform, and Vulkan is gaining steam in that area. I’d love to ditch my Nvidia cards and go exclusively AMD, but Nvidia supporting CUDA on consumer cards was a seriously smart move that AMD needs to catch up with.
Sorry for prying for details, but why exactly do you need NVIDIA?
CUDA is an Nvidia technology, and they’ve gone out of their way to make it difficult for a competitor to come up with a compatible implementation. With cross-vendor alternatives like OpenCL and compute shaders, Nvidia haven’t put resources into achieving performance parity, so if you write the same thing in both CUDA and OpenCL and run both on an Nvidia card, the CUDA-based implementation will go way faster. Most projects prioritise the need to go fast above the need to work on hardware from more than one vendor.

It’s not that CUDA is inherently loads faster, either: fifteen years ago, an OpenCL-based compute application would run faster on an AMD card than a CUDA-based one would on an Nvidia card, even when the Nvidia card was a chunk faster in gaming. That didn’t give AMD a huge advantage in market share, though, because at the time not very much was going on that cared significantly about GPU compute.
Also, Nvidia have put a lot of resources over the last fifteen years into adding CUDA support to other people’s projects, so when things did start springing up that needed GPU compute, a lot of them already worked on Nvidia cards.
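To make the “write it in both CUDA and OpenCL” point concrete, here’s a minimal sketch: a SAXPY kernel in CUDA, with the near-identical OpenCL version shown in a comment. The kernel names and sizes here are just illustrative, but the point stands either way: the kernel source is almost line-for-line the same in both, so any speed gap on Nvidia hardware comes from how much effort goes into each toolchain and driver, not from the code itself.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Minimal SAXPY (y = a*x + y) kernel. The OpenCL C equivalent is nearly
// identical at the source level:
//
//   __kernel void saxpy(int n, float a,
//                       __global const float *x, __global float *y) {
//       int i = get_global_id(0);
//       if (i < n) y[i] = a * x[i] + y[i];
//   }
//
// so porting the kernel is trivial; the performance difference on Nvidia
// hardware lies in the compiler/driver, not here.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;

    // Unified memory keeps the sketch short; a real app might use
    // explicit cudaMalloc/cudaMemcpy instead.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch with 256 threads per block, enough blocks to cover n.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 4.0 (2*1 + 2)
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```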