Oh yeah for sure, I’ve run Llama 3.2 on my RTX 4080 and it struggles but it’s not obnoxiously slow. I think they are betting more software will ship with integrated LLMs that run locally on users’ PCs instead of relying on cloud compute.
Data centres want the even beefier cards anyhow, but I think Nvidia envisions everyone running local LLMs on their PCs because it will be integrated into software instead of relying on cloud compute. My RTX 4080 can struggle through Llama 3.2.
They aren’t making graphics cards anymore, they’re making AI processors that happen to do graphics using AI.
User: “Can we get Google?”
Microsoft: “But we already have Google at home!”
The Google at home: [reskinned Bing page]
I would take the pink one, then find my least favourite people and make the infinite poop copypasta into reality…
There are a shocking number of Elmo simps.
I just run Docker and my router maps ports to it. Container isolation and a basic firewall are more than enough for me.
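For anyone curious what that kind of setup looks like, here’s a minimal sketch (the container name, image, and LAN subnet are made up, and this assumes `ufw` as the host firewall):

```shell
# Publish a single container port to the host; everything else in the
# container stays unreachable from outside.
docker run -d --name blog -p 8080:80 nginx:alpine

# Basic firewall: only allow that published port, and only from the LAN.
# (Your router's port-forward rule would then point at host port 8080.)
ufw default deny incoming
ufw allow from 192.168.1.0/24 to any port 8080 proto tcp
ufw enable
```

Nothing fancy, but for a hobby service the attack surface really is just that one published port plus whatever the app itself exposes.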
Like are we talking what’s good enough security for hosting an anime waifu tier list blog or good enough security for a billion dollar corporation?
It’s not a “community”, it’s one person making all the posts because I guess they wanted to make hating Linux their entire personality. 🤷
They want control of governments so they can wield total authority over the working class and exploit them even harder than they already do. Amazon will make showing up to work late a felony punishable with jail time. Oh and the jail is actually an Amazon fulfillment centre.