- cross-posted to:
- [email protected]
LLMs
OMG. I’m shocked RMS let THAT happen.
supports local LLMs
He consistently describes LLMs as “bullshit generators”, so the LLM being local doesn’t help that much. It’s better for privacy, but that’s a separate matter.
Stallman has always been flowery with his language, and to be fair, they can be bullshit generators. I doubt he sees no value in them, but I haven’t really heard any of his talks on them.
Look at his blog on stallman.org. It comes up all the time there.
Yeah, what I’m seeing matches my estimation of what his opinion would be. Take this for example:
" I agree that bullshit summaries (as they are now) are a bad thing, partly because they are made by programs which are not intelligent, so they are often confused and misrepresent what the site really says."
“As they are now” would suggest he thinks they have some sort of promise. Mostly what I see is him railing against treating their output as intelligent when summarizing, and pointing out that they lack understanding of their output because, well, they are not intelligent. I fully agree with him here.
Well, there can be FOSS LLMs so why not support them?
Any examples of these, please?
Thank you!
Not that I am aware of atm, but I am very certain that people are working on them.
I may be slipping up on jargon.😅
I think the versions of DeepSeek you can get from Ollama are FOSS. I have that running on my homelab and can access it with Open WebUI. Are you looking for something like that? I could link some stuff.
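Not the commenter’s exact setup, just a minimal sketch: assuming Ollama is running locally on its default port (11434) and you’ve already pulled a DeepSeek model, you can talk to it straight from Python over its HTTP API (the model tag below is illustrative, use whatever you pulled):

```python
import json
import urllib.request

# Ollama serves a local HTTP API on port 11434 by default.
url = "http://localhost:11434/api/generate"

payload = {
    "model": "deepseek-r1:8b",  # illustrative tag; substitute your own
    "prompt": "Summarize what FOSS means in one sentence.",
    "stream": False,            # ask for one JSON object instead of a stream
}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# The non-streaming response is a single JSON object with the
# generated text in the "response" field.
with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())

print(body["response"])
```

Open WebUI is essentially a friendlier front end over that same local API, so nothing leaves your machine either way.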
Thanks! I will do some searching on my own, and your comment is a good starting point. I will probably ask you for links if I’m unable to find anything.
May I ask what kind of hardware you use to run your LLMs? Like, do you have a rack full of GPUs?
I got an old machine off eBay (see pic). I only run models that are 8B parameters or less.
It’s running Ubuntu Server, with Docker on top of that. In Docker I have Ollama, Open WebUI, Jellyfin, and a game server. No issues running any of that.

Edit: if you want something that can run better LLMs, I recommend more RAM and a better GPU (ballpark memory math below).
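For context on the 8B ceiling: here’s a back-of-the-envelope sketch (my ballpark figures, not the commenter’s measurements) of roughly how much memory a model needs at different quantization levels. The 1.2x overhead factor is an assumption to cover the KV cache and runtime:

```python
# Rough rule of thumb: memory ~ parameter count x bytes per parameter
# (set by the quantization), plus overhead for KV cache and runtime.
# These are ballpark numbers, not benchmarks.

def approx_model_gb(params_billion: float,
                    bytes_per_param: float,
                    overhead: float = 1.2) -> float:
    """Estimate the memory footprint in GB for a quantized model."""
    return params_billion * bytes_per_param * overhead

for name, bpp in [("fp16", 2.0), ("8-bit (Q8)", 1.0), ("4-bit (Q4)", 0.5)]:
    print(f"8B model at {name}: ~{approx_model_gb(8, bpp):.1f} GB")

# 8B model at fp16:       ~19.2 GB
# 8B model at 8-bit (Q8):  ~9.6 GB
# 8B model at 4-bit (Q4):  ~4.8 GB
```

Which is why a 4-bit quantized 8B model fits comfortably on an older machine, while anything much bigger starts wanting a beefier GPU and more RAM.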
Nice! Do you use the models for coding? Or image generation, for example?
I use them mostly for helping me write emails or meal prepping, tbh lol. I’ve used DeepSeek to help me with Python before, but if you’re not just dicking around like me you’d definitely want something more powerful.
For image generation, it sounds like a tool called ComfyUI is the way to go. I have it running in Docker but haven’t set anything up inside it yet.
It’s pretty neat, I really set this up to help keep my data out of the hands of the corps and the feds lol.