I was looking into the new, probably AI, data center being built in town and noticed it’s being built by a private-equity-backed firm. The data center’s water request was rejected by the city, so it has to operate on a standard corporate-building water supply. They say they’re switching to air cooling only and reducing the compute capacity to keep power usage the same. This caused Amazon, the alleged operator, to back out. So they’re building a giant, reduced-capacity data center with no operator and apparently still think that’s a good idea. My understanding of the private equity bubble is that firms can hide “underperforming” assets because it’s all private. From what I’ve read, possibly $3.2 trillion of it. I feel like this new data center is going on the “underperforming” pile.


You’re pulling shit out of your ass at this point. There are some doom reports from people suggesting that may be a problem, but there are also reports from other companies (Meta, for example) with documentation saying the failure rate is much lower and the mean time to failure is 6+ years.
The other leftovers from the crash also won’t have that problem. It’s not just about GPUs. Data centers and their infrastructure last a lot longer, and the electric generation/transmission networks will also potentially be useful for various alternative applications if the AI use case flops.
MTBF is absolutely not six years if you’re running your H100 nodes at peak load and heat-soaking the shit out of them. ML workloads are especially hard on GPU memory, and sustained heat load on that component type is known to degrade its performance and integrity.
As to Meta’s (or MS’s, or OpenAI’s, or whoever’s) doc on MTBF: I don’t really trust them on that, because they’re big players in the “AI” bubble, so of course they’d want to give the impression that the hardware in their data centers still has a bunch of useful life left. That’s a direct impact on their balance sheets. If they can misrepresent extremely expensive components, which they own a shitload of, as still being worth a lot instead of essentially being salvage/parts-only, I would absolutely expect them to do that. Especially in the regulatory environment in which we now exist.
I mean, we really don’t have the data to prove this either way.
https://www.tomshardware.com/tech-industry/artificial-intelligence/faulty-nvidia-h100-gpus-and-hbm3-memory-caused-half-of-the-failures-during-llama-3-training-one-failure-every-three-hours-for-metas-16384-gpu-training-cluster
Meta’s training of the Llama 3 405B model had a 1.34% GPU failure rate over the 54 days it ran, across 16,384 GPUs. It’s not likely that all of those faults bricked the hardware either; some units could have just lost part of their performance or memory.
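A quick back-of-envelope sketch of what that 1.34% figure implies, assuming the failures were spread evenly over the run and that the percentage counts distinct GPUs (neither of which the article guarantees, and the bathtub-curve point below argues against the first):

```python
# Figures from the linked Tom's Hardware article on Meta's Llama 3 run.
GPUS = 16_384
RUN_DAYS = 54
FAILURE_RATE = 0.0134  # fraction of GPUs that experienced a failure

failures = GPUS * FAILURE_RATE            # ~219 failed GPUs over the run
run_hours = RUN_DAYS * 24                 # 1,296 hours of training
hours_per_failure = run_hours / failures  # cluster-wide: one GPU fault every ~6 h

# Naive linear annualization; ignores any bathtub-curve (infant mortality) effects.
annual_rate = FAILURE_RATE * 365 / RUN_DAYS  # ~9% of the fleet per year

print(f"{failures:.0f} failed GPUs, one every {hours_per_failure:.1f} h")
print(f"~{annual_rate:.1%} naive annualized failure rate")
```

The ~6-hour interval is consistent with the article’s “one failure every three hours,” since GPU and HBM3 faults were only about half of all interruptions.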
The real question is whether that result extrapolates to the long term; hardware like this often follows a bathtub curve for failures. If those units were brand new, many of the failures could have just been the initial infant-mortality wave, with a long period of relative stability that hadn’t even been reached yet.
GPU-based coin mining demonstrated that consumer cards often had a lifespan of over 5 years of constant use before failure, frequently in less-than-ideal operating conditions.