Unless they’ve come up with some truly novel architecture that makes training and/or inference MUCH more efficient… why? The generational creep is obviously unsustainable, and building current-style GPUs for ML applications is a recipe for obsolescence for a hardware manufacturer these days.