

some linux users dream of having their grandma run linux so they never have to look at windows or macos ever again
shockingly enough, corporations always act in their own self-interest
it’s additional rules for the subreddit on top of the site wide rules for all of reddit
the only effective way I found is to tuck it in and wear a small dick prosthetic
anyone who doesn’t understand lemmy after using reddit must be a real pea brain
that one makes me so uncomfortable, similar to Made in Abyss, worst horror show ever
yeah it will with that attitude
didn’t the show say elves only live a few thousand years? I was under the impression she’s like 1000-2000 years old and will die in another 1000-2000 or something
what’s more likely is that OpenAI just lost all their talent to other companies/startups
lmao that’s crazy cynical. manifesting a divorce
I don’t know a lick of japanese. I assume you translated the symbols they abused for english letters?
Q4 will give you like 98% of quality vs Q8 and like twice the speed + much longer context lengths.
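A rough back-of-envelope for why Q4 helps so much: weights take about half a byte per parameter at Q4 versus about one byte at Q8 (ignoring KV cache and runtime overhead, and treating the quant bit-widths as approximations):

```shell
# Sketch: weights-only VRAM estimate, ~0.5 bytes/param at Q4, ~1 byte/param at Q8.
# The 32 here is just the example model size in billions of parameters.
params=32   # billions of parameters
echo "Q4: ~$(( params * 5 / 10 )) GB, Q8: ~$(( params )) GB"
```

So a 32B model drops from roughly 32 GB of weights at Q8 to roughly 16 GB at Q4, which is the difference between fitting on one consumer GPU or not.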
If you don’t need the full context length, you can try loading the model at a shorter context length, which lets you fit more layers on the GPU and makes it faster.
And you can usually configure your inference engine to keep the model loaded at all times, so you’re not losing so much time when you first start the model up.
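For Ollama specifically, both of those knobs exist: `OLLAMA_KEEP_ALIVE` controls how long models stay in memory, and `num_ctx` in a Modelfile pins the context length. A minimal sketch (the model name and context size are just examples):

```shell
# Keep models loaded indefinitely instead of unloading after a few minutes of idle:
export OLLAMA_KEEP_ALIVE=-1

# Pin a shorter context length via a Modelfile so more layers fit on the GPU:
cat > Modelfile <<'EOF'
FROM qwen2.5:32b
PARAMETER num_ctx 8192
EOF
ollama create qwen32-short-ctx -f Modelfile
ollama run qwen32-short-ctx
```

This is just the config side; actual speedup depends on how many layers end up on the GPU.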
Ollama attempts to dynamically load the right context length for your request, but in my experience that just results in really inconsistent and long times to first token.
The nice thing about vLLM is that your model is always loaded, so you don’t have to worry about that. But then again, it needs much more VRAM.
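For comparison, starting vLLM’s OpenAI-compatible server looks like this; `--max-model-len` caps the context to keep VRAM use down (the model name is just an example, pick whatever fits your card):

```shell
# Install and serve a model with vLLM; it stays resident until you stop the server.
pip install vllm
vllm serve Qwen/Qwen2.5-32B-Instruct-AWQ --max-model-len 8192
```

The tradeoff stands: no load latency, but vLLM pre-allocates a lot of VRAM for the KV cache up front.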
In my experience anything similar to qwen-2.5:32B comes closest to gpt-4o. I think it should run on your setup. the 14b model is alright too, but definitely inferior. Mistral Small 3 also seems really good. anything smaller is usually really dumb and I doubt it would work for you.
You could probably run some larger 70b models at a snail’s pace too.
Try the Deepseek R1 - qwen 32b distill, something like deepseek-r1:32b-qwen-distill-q4_K_M (name on ollama) or some finetune of it. It’ll be by far the smartest model you can run.
There are various fine tunes that remove some of the censorship (ablated/abliterated) or are optimized for RP, which might do better for your use case. But personally haven’t used them so I can’t promise anything.
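Pulling and trying the distill mentioned above is a one-liner with Ollama (tag copied from the comment; availability can change):

```shell
# Downloads the model on first run, then drops into an interactive chat.
ollama run deepseek-r1:32b-qwen-distill-q4_K_M
```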
LibreWolf or Brave. I use Brave personally, it’s okay
when are people going to learn that centralized social media will always be garbage…
pocket penussy bread
I don’t know, feddit.nl is pretty chill. I always see everything and barely anything objectionable