I have a 1st-gen Threadripper system with a mixed, but mostly workstation, workload. In modern, unoptimised games my GPU (RTX 3000 series) is already slightly CPU-bottlenecked, by about 5-10%. And the card has too little VRAM to properly accelerate my workstation tasks.
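For context, here's roughly how I check for that bottleneck; a minimal sketch assuming the nvidia-ml-py package (imported as pynvml) and psutil are installed. The heuristic is that GPU utilization sitting well below 100% while at least one core is pegged points to a CPU limit:

```python
# Rough CPU-vs-GPU bottleneck check. Assumes `pip install nvidia-ml-py psutil`.
# Run while the game is running; these are spot samples, not a benchmark.
import psutil
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

try:
    for _ in range(30):  # ~30 seconds of samples
        gpu_util = pynvml.nvmlDeviceGetUtilizationRates(gpu).gpu
        per_core = psutil.cpu_percent(interval=1.0, percpu=True)
        busiest = max(per_core)
        # Heuristic: GPU starved while a core is maxed out suggests a
        # CPU (often single-thread) bottleneck.
        tag = "CPU-bound?" if gpu_util < 90 and busiest > 95 else ""
        print(f"GPU {gpu_util:3d}%  busiest core {busiest:5.1f}%  {tag}")
finally:
    pynvml.nvmlShutdown()
```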
I’d like to upgrade to a 1st-gen DDR6 AMD CPU (probably 2027) and buy a matching GPU in the same year. I’m planning to use the system for at least 10 years, and then probably do the same thing again with DDR8.
My priority is an excellent price/performance ratio. I only want to buy something new if I know it will last me a long time.
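To make that concrete, this is the kind of back-of-the-envelope math I'm doing; every number below is a made-up placeholder, not a price or performance prediction:

```python
# Toy cost-per-year comparison; all figures are placeholders.
def cost_per_perf_year(price_eur: float, relative_perf: float, years: float) -> float:
    """EUR per unit of relative performance per year of planned use."""
    return price_eur / (relative_perf * years)

# Scenario A: mature DDR5 platform bought in 2026, kept 10 years.
a = cost_per_perf_year(price_eur=2500, relative_perf=1.0, years=10)
# Scenario B: first-gen DDR6 platform bought in 2027, hypothetically
# 15% faster but at an early-adopter premium, kept 10 years.
b = cost_per_perf_year(price_eur=3200, relative_perf=1.15, years=10)

print(f"A (mature DDR5): {a:.1f} EUR per perf-year")
print(f"B (early DDR6):  {b:.1f} EUR per perf-year")
```

The point being: an early-adopter premium only pays off if it buys real extra performance or extra usable years.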
How good is my plan at accomplishing my goal? I’d like some feedback, please. How would you go about it?
Not sure, but I assume a couple of years at least - it might also be affected by stupid external factors, like insane tariffs and such. It will also take some time for DDR6 performance to go up, on both the motherboard and the memory side. So even at launch you might still be better off with DDR5, and if you go with DDR6 early, you might need to replace the motherboard and memory later to get the better performance. That’s my impression of the hardware situation; I might be wrong, though.
What is your use case for a Threadripper, I’m curious? AFAIK it’s not a good match for gaming, at least. Or is it?
I mainly use my workstation for image editing (raw development and VFX), 3D animation, and video editing. Then there’s occasional ML inference for image or text generation. And lastly, some video games.
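On the VRAM side, the rule of thumb I use for ML inference is parameter count times bytes per parameter, plus some overhead for activations and cache; that's how I know my current card is too small. A quick sketch, with illustrative model sizes rather than any specific model:

```python
# Back-of-the-envelope VRAM need for model inference:
# parameters * bytes per parameter, plus ~20% overhead for
# activations / KV cache. Model sizes below are illustrative.
def vram_gib(params_billions: float, bytes_per_param: float, overhead: float = 1.2) -> float:
    return params_billions * 1e9 * bytes_per_param * overhead / 2**30

for name, params in [("small diffusion model", 2.6), ("7B LLM", 7.0), ("13B LLM", 13.0)]:
    for precision, nbytes in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
        print(f"{name:22s} {precision}: ~{vram_gib(params, nbytes):5.1f} GiB")
```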
About video games: the 1st-gen Threadripper platform gained a bad reputation for gaming thanks to Windows. I used Windows for a long time, and once I switched to GNU/Linux it was like I got a new CPU for free. The reason is that the Windows scheduler handled this CPU’s topology badly: a 1st-gen Threadripper is basically four CPU dies glued together (only two of them active), a NUMA design, and for low-latency applications like games the performance on Windows will be trash, and oh boy, it was. On GNU/Linux it’s fine. But compared to having all cores on one die, it will still be worse for games, yes.
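If anyone wants to force the "keep the game on one die" behaviour explicitly on Linux, the affinity mask can be set before launch. A minimal sketch, assuming NUMA node 0 owns logical CPUs 0-7 (check the real layout with lscpu or numactl --hardware first, the numbering varies by system):

```python
# Pin a game to the cores of one NUMA node so its threads never hop dies.
# Assumes logical CPUs 0-7 belong to die/node 0 -- verify with `lscpu`
# or `numactl --hardware` first.
import os
import subprocess

DIE0_CPUS = set(range(8))  # assumption: NUMA node 0 = logical CPUs 0-7

def run_pinned(cmd: list[str]) -> int:
    """Launch cmd with its affinity restricted to die 0 before exec,
    so every thread the game later spawns inherits the mask."""
    proc = subprocess.Popen(
        cmd,
        preexec_fn=lambda: os.sched_setaffinity(0, DIE0_CPUS),
    )
    return proc.wait()

run_pinned(["/path/to/game"])  # hypothetical path, replace with the real binary
```

Setting the mask in preexec_fn, before the game binary runs, matters: affinity set after launch only reliably covers the main thread, not threads already spawned.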
I was kinda guessing that images, video and such were your use case - yep, those can really benefit from faster memory and plenty of cores. Not sure it affects ML inference much, since that’s mostly computed on the GPU. Thanks for the info on gaming.