I mean, the 9070 XT is already pretty good…
For gaming, sure, but not for compute workloads. For a task like Blender Cycles rendering, it’s about on par with an RTX 4060. I’m looking forward to RDNA 5 with more competitive RT cores, among other things.
That’s fair. In Blender specifically, is that a matter of OptiX being way better, though? I think part of the issue is also software. It seems like ROCm is getting a lot closer to CUDA (although there isn’t a lot of adoption), but OptiX is just too good. Could be entirely wrong though; I haven’t had to deal with GPU compute in a bit.
AFAIK, the main reason OptiX is faster is that AMD’s current-gen RT cores are just repurposed texture cores, whereas RDNA 5 supposedly has cores specifically designed for RT / path tracing, like Nvidia has.
Edit:
Yeah, here’s an article about this topic: https://hardwaretimes.com/amd-plays-catchup-to-nvidia-rdna-5-adds-rt-traversal-hw-blends-compute-units-into-tensors/
Well, that’s pretty cool then. Of course, the other challenge will be getting people to adopt AMD’s software, which might be an even bigger challenge. There’s so much investment in CUDA everywhere. Here’s hoping memory prices will be better by then as well. This was the year I was supposed to do a platform upgrade, but the slop producers cranked prices. No SSD NAS for me either.
Yeah, the sloppers are ruining everything. Hopefully memory prices will be down by the time RDNA 5 comes out, so we can finally have access to decent mid-range GPUs that can handle compute workloads and don’t have artificial VRAM constraints to drive sales of more expensive cards.