• gravitas_deficiency@sh.itjust.works · 22 hours ago

    Unless they’ve come up with some truly novel architecture that makes training and/or inference MUCH more efficient… why? The generational creep is obviously unsustainable, and for a hardware manufacturer these days, building current-style GPUs for ML applications is a recipe for obsolescence.