This is a “big part” of my job. In five months, what I’ve accomplished is adding an AI-usage field to Jira, along with a way to indicate how many story points it wound up saving or costing. Let’s see how this plays out.
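For anyone curious, the mechanics are roughly this - a sketch, not our actual setup, and the instance URL, custom field ID, and credentials are all made up:

```python
# Hypothetical sketch: record how many story points AI saved (+) or cost (-)
# on a ticket, via Jira's REST API. Field ID and URL are placeholders.
import requests

JIRA_URL = "https://example.atlassian.net"    # placeholder Jira Cloud instance
AI_DELTA_FIELD = "customfield_12345"          # hypothetical custom number field

def record_ai_delta(issue_key, points_delta, auth):
    """Set the 'story points saved/cost by AI' field on a Jira issue."""
    resp = requests.put(
        f"{JIRA_URL}/rest/api/2/issue/{issue_key}",
        json={"fields": {AI_DELTA_FIELD: points_delta}},
        auth=auth,  # (email, API token)
    )
    resp.raise_for_status()

# e.g. a ticket that ended up costing 3 extra points because of AI rework:
# record_ai_delta("PROJ-42", -3, ("me@example.com", "api-token"))
```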
If AI collapses as many expect it to, this job will still be there without that requirement.
I hope the bubble pops soon, and only smaller, more sustainable models stick around.
agreed
Yeah, self-hosted open-source models seem okay, as long as their training data is all from the public domain.
Hopefully RAM becomes cheap as fuck after the bubble pops and all these data centers have to liquidate their inventory. That would be a nice consolation prize, if everything else is already fucked anyway.
Unfortunately, server RAM (registered DIMMs) and data-centre GPUs generally aren’t compatible with desktops. Also, Nvidia have committed to releasing a new GPU every year, making the existing ones worth much less. So unless you’re planning to build your own data centre with slightly out-of-date gear - which would be folly, since the existing ones will be desperate to recoup their investment and selling capacity cheap - it’s all just destined to become a mountain of e-waste.
I read, sometime in the last month (maybe just last week), that someone has created an AI accelerator card that lowers power usage by 90%. (I know that’s really vague and leaves a lot of questions.) It seems likely that AI-specific hardware and graphics hardware will diverge - I hope so, anyway.
Maybe that surplus will lay the groundwork for a solarpunk blockchain future?
I don’t know if I understand what blockchain is, honestly. But what if a bunch of indie co-ops created a mesh network of smaller, more sustainable server operations?
It might not seem feasible now, but if the AI bubble pops, Nvidia crashes spectacularly, data centers all need to liquidate their stock, and server compute becomes basically viewed as junk, then it might become possible…
I’m just trying to find a silver lining, okay?
Like AI, blockchain is a solution in search of a problem. Both have their uses but are generally part of overcomplicated, expensive solutions which are better done with more traditional techniques.
I wonder if the server GPUs can be used for tasks other than running LLMs.
I would imagine any program running simulations, rendering environments, analyzing metadata, and doing similar tasks would be able to use them.
They would be useful for academic researchers, gamers, hobbyists, and fediverse instances. Basically, whatever capabilities those groups have now, they’d be able to increase their computing power for dirt cheap.
Someone could make a fediverse MMO. That could be cool, especially when indie devs start doing what Zuck never could with VR.
Google Stadia wasn’t exactly a resounding success…
From a previous job in hydraulics: the computational fluid dynamics / finite element analysis we used to do would eat all your compute resources and ask for more. Split your design into tiny cubes, simulate all the flow / mass balance / temperature exchange / material stress calculations for each one, and you gain an understanding of how the part will perform in the real world. Very easily parallelizable - a great fit for GPU computation. However, it’s a ‘hundreds of millions of dollars’ industry, and the AI bubble is currently ‘tens of trillions’ deep.
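A stripped-down sketch of the idea - nothing like a real CFD/FEA solver, with made-up grid size and constants - just to show why the work parallelizes so well:

```python
# Toy heat-diffusion example: chop a part into a 3D grid of cells and update
# every cell from its neighbours each timestep. Each cell's update is
# independent of the others, which is why this maps so well onto GPUs.
import numpy as np

def diffuse(temp, alpha=0.1, steps=100):
    """Explicit finite-difference heat diffusion on a 3D grid of cells."""
    t = temp.copy()
    for _ in range(steps):
        # Discrete Laplacian: how much each cell differs from its 6 neighbours
        lap = (
            np.roll(t, 1, 0) + np.roll(t, -1, 0)
            + np.roll(t, 1, 1) + np.roll(t, -1, 1)
            + np.roll(t, 1, 2) + np.roll(t, -1, 2)
            - 6 * t
        )
        t += alpha * lap  # same cheap update applied to every cell, in parallel
    return t

# 100x100x100 block of cells with one hot spot in the middle
grid = np.zeros((100, 100, 100))
grid[50, 50, 50] = 1000.0
result = diffuse(grid)
```

Real solvers are vastly more sophisticated, but the shape of the work is the same: millions of small, independent per-cell calculations.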
Yes, they can be used for other tasks. But we’ve just no use for the amount that’s been purchased - there’s tens of thousands of times as much as makes any sense.
Agreed. AI has uses, but C-suite execs have no idea what they are, and they’re paying millions to get their staff using it in the hope of finding out. In reality they’re making things worse, with no tangible benefit, because they’re all scared someone else will find this imaginary golden goose first.