

no multiplayer paywall
Until Microsoft changes the deal, or you have to scan your retinas to verify you watched an ad before you queue for a round of Halo CE Re-Campaign Remake HD Remaster Master Chief Cortana Limited Edition.


Holy cow!


That fixed it.
I am a fan of this quant cook. He often posts perplexity charts.
https://huggingface.co/ubergarm
All of his quants require ik_llama, which works best with Nvidia CUDA, but they can do a lot with RAM + VRAM or even hard drive + RAM. I don’t know if 8 GB is enough for everything.


You are not alone. It blew my mind how good it is per billion parameters. As an example, I can’t think of another model that will give you working code at 4B or less. I haven’t tried it on agentic tasks, but that would be interesting.


I’m not sure if it’s a me issue, but that’s a static image. I figure you meant to post the part where they throw a brick into it.
Also, if this post was serious, how does a highly quantized model compare to something less quantized but with fewer parameters? I haven’t seen benchmarks other than perplexity, which isn’t a good measure of capability.


Damn. I’m turning in my programmer badge and service weapon


I have had similar work contracts.
I want you to be successful. Don’t hurt your future career prospects


Rover back on XP


Wow! Mesmerizing


Can anyone recommend a good video of this? I want to see something representative, not whatever the YouTube algorithm feels like surfacing.


The awe and grandeur of Ocarina of Time… at the time.
Disco Elysium is the best literature I’ve ever played.
I still feel like I used to live in Skyrim. It was a place where I wanted to be and explore.
TF2/Halo CE multiplayer’s mix of competitive adrenaline and funny shenanigans.
Those are the game experiences that stuck with me.


Accept that quality matters more than velocity. Ship slower, ship working. The cost of fixing production disasters dwarfs the cost of proper development.
This has been a struggle my entire career. Sometimes the company listens; sometimes they don’t. It’s a worthwhile fight, but it is a systemic problem caused by management and short-term profit-seeking over healthy business growth.


Adding something like a filter can perfectly remove all sound in the filtered range, so this is technically possible.
With a lot of sounds this is practically very hard, because natural sounds have all kinds of artifacts like reverb and harmonics which may not be in the range you’re filtering.
Another thing to consider is that filters are often gradual. They are not perfect hard cutoffs to 0 dB, so you may need to change some settings and/or filter a bit more than you expect to get the coverage you want.
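To make that concrete, here’s a minimal sketch (assumptions: SciPy/NumPy installed; the band edges and filter order are just illustrative) of a Butterworth band-stop filter, showing how the attenuation ramps up gradually around the target band instead of dropping straight to 0:
# Band-stop filter rolloff demo (illustrative values, not a real mastering chain)
import numpy as np
from scipy import signal

fs = 44_100                     # sample rate in Hz
low, high = 900, 1_100          # band we want to remove, in Hz

# 4th-order Butterworth band-stop; a higher order gives a steeper rolloff
sos = signal.butter(4, [low, high], btype="bandstop", fs=fs, output="sos")

# Inspect the frequency response: attenuation ramps toward the band,
# it never hits a hard cutoff right at the band edges
freqs, h = signal.sosfreqz(sos, worN=8192, fs=fs)
db = 20 * np.log10(np.maximum(np.abs(h), 1e-12))
for f in (700, 900, 1_000, 1_100, 1_300):
    idx = int(np.argmin(np.abs(freqs - f)))
    print(f"{f} Hz: {db[idx]:.1f} dB")

# Apply the filter to audio: x is a 1-D float array of samples
x = np.random.randn(fs)         # stand-in for one second of real audio
y = signal.sosfilt(sos, x)
Printing the response at a few frequencies makes the gradual rolloff obvious, which is exactly why you often end up widening the band or raising the filter order to get the coverage you want.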


I don’t know if this is still useful for you, but I tried this out, mostly because I wanted to make sure I wasn’t crazy. Here’s my gpt-oss setup running on cheap AMD Instinct VRAM:
./llama-server \
--model {model}.gguf \
--alias "gpt-oss-120b-mxfp4" \
--threads 16 \
-fa on \
--main-gpu 0 \
--ctx-size 64000 \
--n-cpu-moe 0 \
--n-gpu-layers 999 \
--temp 1.0 \
-ub 1536 \
-b 1536 \
--min-p 0.0 \
--top-p 1.0 \
--top-k 0 \
--jinja \
--host 0.0.0.0 \
--port 11343 \
--chat-template-kwargs '{"reasoning_effort": "medium"}'
I trimmed the content because it wasn’t relevant but left roughly the shape of the replies to give a sense of the verbosity.
Test 1: With default system message
user prompt: how do i calculate softmax in python
What is softmax
1 python + numpy
...
quick demo
...
2 SciPy
...
...
...
8 full script
...
running the script
...
results
...
TL;DR
...
followup prompt: how can i GPU-accelerate the function with torch
1 why pytorch is fast
...
...
**[Headers 2,3,4,5,6,7,8,9]**
...
...
TL;DR
...
Recap
...
Table Recap
...
Common pitfalls
...
Going beyond float32
...
10 Summary
...
Overall: 6393 tokens, including reasoning
Test 2: With this system prompt: You are a helpful coding assistant. Provide concise, to-the-point answers. No fluff. Provide straightforward explanations when necessary. Do not add emoji and only provide TL;DRs or summaries when asked.
user prompt: how do i calculate softmax in python
Softmax calculation in Python
...
Key points
...
followup prompt: how can i GPU-accelerate the function with torch
GPU‑accelerated Softmax with PyTorch
...
What the code does
...
Tips for larger workloads
...
Overall: 1103 tokens, including reasoning
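If you want to reproduce the comparison programmatically, here’s a minimal sketch (assumptions: the llama-server instance above is running on port 11343 and you’re hitting its OpenAI-compatible /v1/chat/completions endpoint) that sends the terse system prompt from Test 2:
# Query the local llama-server with the verbosity-reducing system prompt
import requests

resp = requests.post(
    "http://localhost:11343/v1/chat/completions",
    json={
        "model": "gpt-oss-120b-mxfp4",   # the --alias set on the server
        "messages": [
            {"role": "system", "content": (
                "You are a helpful coding assistant. Provide concise, "
                "to-the-point answers. No fluff."
            )},
            {"role": "user", "content": "how do i calculate softmax in python"},
        ],
        "temperature": 1.0,   # mirror the server-side sampling settings
        "top_p": 1.0,
    },
    timeout=300,
)
body = resp.json()
print(body["usage"]["total_tokens"], "total tokens")
print(body["choices"][0]["message"]["content"])
The usage field in the response is where the token counts above came from, so you can diff verbosity across system prompts directly.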


Totally. I think gpt-oss is outright annoying with its verbosity. A system prompt will get around that.


You could use it to force resolution, HDR, VRR, and refresh rate. It also helped you isolate that this isn’t a compositor issue.


Qwen3 or Qwen3 Coder? Qwen3 comes in 235B, 30B, and smaller sizes. Qwen3 Coder comes in a 30B or 480B size.
OpenRouter has multiple quant options and, for coding, I’d try to only use 8-bit int or higher.
Claude also has a ton of sizes and deployment options with different capabilities.
As far as reasoning goes, the newest DeepSeek V3.1 Terminus should be pretty good.
Honestly, all of these models should be able to help you up to a certain level with Docker. I would double-check how you connect to OpenRouter, making sure your hyperparameters are good and that thinking/reasoning is enabled. Maybe try duck.ai and see if the models there match up to whatever you’re getting in OpenRouter. A sanity-check script is sketched below.
Finally, not to be a hater, but LLMs are not intelligent. They cannot actually reason or think. They can probabilistically align with answers you want to see. Sometimes your issue might be too weird or new for them to give you a good answer. Even today, models will give you Docker Compose files with a version number at the top, a feature which has been deprecated for over a year.
Edit: gpt-oss-120b should be cheap and capable enough. Available on duck.ai.
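For the sanity check, here’s a minimal sketch (assumptions: OpenRouter’s OpenAI-compatible /v1/chat/completions endpoint, an API key in OPENROUTER_API_KEY, and a model slug you should verify against OpenRouter’s model list) to confirm your hyperparameters and reasoning settings are actually being sent:
# Sanity-check an OpenRouter request: explicit sampling params + reasoning
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "deepseek/deepseek-v3.1-terminus",  # assumption: verify the exact slug
        "messages": [
            {"role": "user", "content": "Why does docker compose warn about the version key?"},
        ],
        "temperature": 0.6,                # match whatever the model card recommends
        "top_p": 0.95,
        "reasoning": {"effort": "high"},   # enables thinking on models that support it
    },
    timeout=300,
)
print(resp.json()["choices"][0]["message"]["content"])
If the answers you get here match duck.ai but not your usual setup, the problem is in how your client builds the request, not the model.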


I’m sure someone will give a better answer, but this smells like a UEFI/Secure Boot problem. Look in your BIOS and turn those off, or set them to legacy or “Other OS”.
I have an MI50/7900 XTX gaming/AI setup at home which I use for learning and to test out different models. Happy to answer questions.