

We recommend setting temperature=0.8 and top_p=0.9 in the sampling parameters.
Try that. I believe those params are available in Kobold. If that doesn’t work, send me a sample of what you’re doing and I’ll try it out.
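If it helps, here’s a minimal sketch of the request body, assuming a KoboldCpp-style `/api/v1/generate` endpoint; the exact field names may differ in your Kobold version, so check its API docs:

```python
import json

# Hypothetical request body for a KoboldCpp-style /api/v1/generate call.
payload = {
    "prompt": "Once upon a time",
    "temperature": 0.8,  # <1.0 sharpens the token distribution, >1.0 flattens it
    "top_p": 0.9,        # nucleus sampling: keep the smallest token set covering 90% of probability
    "max_length": 200,   # tokens to generate
}
print(json.dumps(payload, indent=2))
```

You’d POST that JSON to the running Kobold server’s generate endpoint.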


How are you running it? Would you be able to post your run arguments?


I’m familiar with that thread. I have a C1 TV which I wanted to use with an Intel iGPU at 4K120 HDR. The Intel HDMI output can only do 4K60 outside of Windows, but the iGPU’s DisplayPort output does work at 4K120 HDR. Out of curiosity, I also confirmed it works on a 7900 XT at 4K120 HDR. I did this on Kubuntu and CachyOS with Plasma on Wayland.
I used this cable: https://www.amazon.com/dp/B0D7VP726N
Not an ideal setup, as HDMI 2.1 has slightly more bandwidth, which translates to better picture quality, but it works better for me.


Can we make an extension for Firefox and call it Sloppy-Stoppy?


The brain is incredibly malleable and, for a lot of people, memory is a vague image or a concept of something which happened. For a smaller subset, visual memory and visual imagination are not possible. Pictures are a more permanent visual representation, which can be additive to an experience. That’s not to say you shouldn’t live in the moment or that you should take pictures in lieu of making memories. You do you. I’m biased because I’m a photographer though.


This is really neat. Thank you. I would love a script or a more newb-friendly guide, not just for me, but for a lot of other users.
Can I make a suggestion? Post your script on GitHub or similar with a proper (open) license so people can make suggestions or versions they find useful.


I’ve been on the internet a long time and this made me say “what the fuck” out loud
Edit: not sure whether I should ask what this all is or if I should compliment you on your “output”


3090 24GB ($800 USD)
3060 12GB x 2 if you have 2 PCIe slots (<$400 USD)
Radeon Instinct MI50 32GB with Vulkan (<$300) if you have more time, space, and will to tinker
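To sanity-check which of those cards fits a given model, here’s a back-of-the-envelope sketch; the flat overhead figure for KV cache and buffers is my assumption and real usage varies with context length:

```python
def approx_vram_gb(params_billion: float, bits_per_weight: float, overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate: quantized weights plus a flat overhead
    for KV cache and runtime buffers (an assumption; varies with context)."""
    weights_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits is about 1 GB
    return weights_gb + overhead_gb

# e.g. a 24B model at ~4-bit quantization needs roughly 13.5 GB,
# so it fits comfortably in a 3090's 24 GB.
print(round(approx_vram_gb(24, 4.0), 1))
```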


I have an MI50/7900 XTX gaming/AI setup at home which I use for learning and to test out different models. Happy to answer questions.


no multiplayer paywall
Until Microsoft changes the deal, or you have to scan your retinas to verify you watched an ad before you queue for a round of Halo CE Re-Campaign Remake HD Remaster Master Chief Cortana Limited Edition


Holy cow!


That fixed it.
I am a fan of this quant cook. He often posts perplexity charts.
https://huggingface.co/ubergarm
All of his quants require ik_llama, which works best with Nvidia CUDA, but they can do a lot with RAM+VRAM or even hard drive + RAM. I don’t know if 8GB is enough for everything.


You are not alone. It blew my mind how good it is per billion parameters. As an example, I can’t think of another model that will give you working code at 4B or less. I haven’t tried it on agentic tasks, but that would be interesting.


I’m not sure if it’s a me issue, but that’s a static image. I figure you meant to post the one where they throw a brick into it.
Also, if this post was serious, how does a highly quantized model compare to a less quantized model with fewer parameters? I haven’t seen benchmarks other than perplexity, which isn’t a good measure of capability.
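For anyone unfamiliar: perplexity is just the exponentiated average per-token negative log-likelihood, which is why it measures next-token prediction confidence rather than downstream capability. A minimal sketch:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(mean negative log-likelihood per token)."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# A model that assigns probability 0.5 to every token has perplexity 2.
print(perplexity([math.log(0.5)] * 4))
```

Two models can have similar perplexity on a text corpus and still differ wildly on coding or reasoning tasks, which is the limitation being pointed out above.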


Damn. I’m turning in my programmer badge and service weapon


I have had similar work contracts.
I want you to be successful. Don’t hurt your future career prospects


Rover back on XP


Wow! Mesmerizing
If it’s anything like Starlink, between 20-70ms. Better than some Comcast connections I’ve had!