Thank you for deciding to engage with our community here! You’re in good company.
Kobold just released a bunch of tools for making quants that you may want to check out.
I haven't made my own quants. I usually just grab whatever imatrix GGUFs bartowski or the other top quant makers on HF release.
I too am in the process of upgrading my homelab and opening up my model engine as a semi-public service. The biggest performance gains I've found come from using CUDA and loading everything into VRAM. So far I've just been working with my old NVIDIA 1070 Ti 8GB card.
Haven't tried the vLLM engine, just Kobold. I hear good things about vLLM, so it will be something to look into sometime. I'm happy and comfortable with my current setup since I've got everything configured just the way I want it, but I'm always open to performance optimization.
If you haven't already, try running vLLM with its CPU niceness set to the highest priority. If vLLM can use flash attention, try that too.
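Roughly what I mean, as an untested sketch (the model name is just a placeholder, and giving a process higher priority needs root/sudo):

```python
# Rough sketch: wrap the vLLM launch in `nice` so it gets more CPU priority.
# "vllm serve" is the newer CLI entrypoint; "your-model-here" is a placeholder.
# Negative nice values (higher priority) need root/sudo or CAP_SYS_NICE.
import subprocess

subprocess.run(["nice", "-n", "-10", "vllm", "serve", "your-model-here"])
```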
I'm just enough of a computer nerd to get the gist of technical things and set everything up on the software/networking side. Bought a domain name, set up a web server, and hardened it. Kobold's webUI didn't come with HTTPS (SSL/TLS cert) handling, so I needed to get a reverse proxy working to encrypt the connection properly.
I'm really passionate about this even though so much of the technical nitty-gritty under the hood of these models goes over my head. I was inspired enough to buy a Tesla P100 16GB and try shoving it into an old gaming desktop, which is my current homelab project. I don't have a lot of money, so this took months of saving for the used server-class GPU and a PSU big enough to run it plus the 1070 Ti 8GB later on.
The PC/server-building hardware side scares me, but I'm working on it. I'm not used to swapping parts out at all. When I tried to build my own PC a decade ago it didn't last long before something blew, so there's a bit of residual trauma there. I'm worried about things not fitting right in the case, destroying something, or the card just not working at all.
Those are unhealthy worries when I'm trying to apply myself to this cutting-edge stuff. I'm really trying to work past that anxiety and just do my best to install the stupid GPU. I figure if I fail, I fail; that's life, and it will be a learning experience either way.
I want to document the upgrade journey on my new self-hosted site. I also want to open my Kobold service to public use by fellow hobbyists. I'm not quite confident sharing my domain on the public web just yet, though; I'm still cooking.
nods and continues to use the original Doom WADs with the red cross design for health pickups, because the green one from the BFG editions looks like shit
Right now THCA mail-order is under fire from goons in the House and Senate, so if you're gonna order in bulk legally, you may want to do it soon; the lawmaking could go either way. I recommend eight horse hemp for cheap mid bulk and wnc-cbd for the top-shelf premium.
Have you by chance checked out the kobold.cpp Lite webUI? It allows some of what you're asking for, like RAG for worldbuilding, adding images for the LLM to describe and work into the story, easy editing of input and output, and lots of customization in settings. I have a public instance of the Kobold webUI set up on my website, and I'm cool with letting fellow hobbyists use my compute to experiment with things. If you're interested in trying it out to see if it's more what you're looking for, feel free to send me a PM and I'll send you the address and an API key/password.
In an ideal world, what exactly would you want an AI-integrated text editor to do? Depending on what you need to happen in your workflow, you can automate the copy-pasting and log the outputs automatically with Python scripts and your engine's API.
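For example, a rough sketch like this (assuming a kobold.cpp/OpenAI-compatible endpoint on localhost:5001; swap in your own URL and model name) sends a chunk of text to the engine and appends every result to a log file:

```python
import json
from datetime import datetime

import requests

# Assumption: your engine exposes an OpenAI-compatible chat endpoint.
# kobold.cpp usually listens on port 5001; change this for your setup.
API_URL = "http://localhost:5001/v1/chat/completions"

def edit_text(snippet: str) -> str:
    payload = {
        "model": "local-model",  # most local engines ignore or loosely match this
        "messages": [
            {"role": "system", "content": "You are a copy editor. Rewrite the text cleanly."},
            {"role": "user", "content": snippet},
        ],
        "max_tokens": 512,
    }
    resp = requests.post(API_URL, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    draft = "paste or pipe your paragraph here"
    result = edit_text(draft)
    # Append every run to a log so nothing gets lost between copy-pastes.
    with open("editor_log.jsonl", "a", encoding="utf-8") as log:
        log.write(json.dumps({"time": datetime.now().isoformat(),
                              "input": draft, "output": result}) + "\n")
    print(result)
```

From there it's not much of a jump to have the script watch the clipboard or a folder of drafts instead of a hard-coded string.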
Editing and auditing stories isn't that much different from auditing codebases. It all boils down to understanding and correctly using language to convey abstraction. I bet tweaking some agentic personalities and goals in VSCode+Roo could get you somewhere.
Yesss lol 🫠😵💫🤭
Aaaa, that looks like some dry-ass ground trim, if it's even bud at all. It looks like a cooking spice; the grains are very long and narrow.
Good to hear you figured it out with the router settings. I'm also new to this but got all of that figured out this week. As other commenters say, I went with a reverse proxy and configured it; I chose Caddy over nginx for ease of install and config. I documented just about every step of the process. I'm a little scared to share my website on public forums just yet, but PM me and I'll send you a link if you want to see my infrastructure page where I share the steps and config files.
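For what it's worth, the Caddy config itself ended up tiny. Mine boils down to roughly this (assuming kobold.cpp on its default port 5001, with your real domain swapped in), and Caddy fetches and renews the TLS cert for you automatically:

```
your-domain.example {
    reverse_proxy localhost:5001
}
```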
Nice post Hendrik thanks for sharing your knowledge and helping people out :)
I once got kobold.cpp working with their supported TTS model + WavTokenizer setup. Here's the wiki page on it.
It may not be as natural as a commercial voice model, but it may be enough to whet your appetite if other solutions feel overwhelmingly complicated.
smokes bowl and smashes the rock against the door handle for 30 minutes.
“Oh, it wasn't locked, I just turned the handle the wrong way”
Wow, this is some awesome information, Brucethemoose, thanks for sharing!
I hope you don't mind if I ask some things. Tool calling is one of those things I'm really curious about. Sorry if this is too much; please don't feel pressured, you don't need to answer everything, or anything at all. Thanks for being here.
I feel like a lot of people, including myself, only vaguely understand tool calling, how it's supposed to work, and what simple practice exercises there are for using it via scripts and APIs. What's a dead-simple Python script someone could cook up to make a tool call through the OpenAI-compatible API?
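The closest I've gotten on my own is something like this, using the openai Python package pointed at a local server, and I honestly don't know if I'm even holding it right (the URL, key, and weather tool are just placeholders):

```python
from openai import OpenAI

# Assumption: a local OpenAI-compatible server; swap in your own URL/key/model.
client = OpenAI(base_url="http://localhost:5001/v1", api_key="not-needed-locally")

# Describe one fake "tool" so the model can decide to call it.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="local-model",
    messages=[{"role": "user", "content": "What's the weather in Oslo right now?"}],
    tools=tools,
)

# If the model decided to use the tool, this holds the name + JSON arguments;
# the script is then supposed to run the real function and send the result back.
print(resp.choices[0].message.tool_calls)
```

From what I can tell, the script then has to execute the real function itself and feed the result back in a follow-up message, but that's where my understanding gets fuzzy.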
In your own words, what exactly is tool calling, and how does an absolute beginner tap into it? Could you clarify what you mean by 'tool calling being built into their tokenizers'?
Would you mind sharing some sources where we can learn more? I'm sure Hugging Face has courses, but maybe you know some harder-to-find sources?
Is tabbyAPI an engine similar to ollama, llama.cpp, etc.?
What are exl2, exl3, etc.?
Pangolin.
Yes, it would have been awesome of them to release a bigger one, for sure :( At the end of the day, they're still a business that needs a product to sell, and I don't want to be ungrateful by complaining that they don't give us everything. I expect someday all these companies will clam up and stop releasing models to the public altogether, once the dust settles and the monopolies are entrenched. I'm happy to be here in an era where we can look forward to open-license models released every few months.
Ding Ding, check this comment chain for your answer. Today you, tomorrow me.
Question one: yes and no. Most of the vomiting emojis shared here in the comments are fake, made using Google's Emoji Kitchen thing. But there are many real modifiers, like skin-color modifiers for emojis or combining accents like tildes on regular English alphabet characters.
Question two: modern keyboards typically have most emojis built in for you to select from. I don't think typing in the Unicode values will automatically convert on phone operating systems, but it might help if you're on Windows or programming a website.
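For example, in Python, just to show the raw code points (whether they render as one glyph depends on your font):

```python
# Real modifier/combining sequences, printed from their raw Unicode code points.
wave = "\U0001F44B"          # WAVING HAND SIGN
tone = "\U0001F3FD"          # EMOJI MODIFIER FITZPATRICK TYPE-4 (medium skin tone)
print(wave + tone)           # most platforms render this as one medium-tone wave

print("n" + "\u0303")        # 'n' + COMBINING TILDE renders as ñ
```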
The explain xkcd article on this actually has some really great info.
Devstral was released recently, trained specifically with tool calling in mind. I haven't personally tried it out yet, but people say it works well with VSCode+Roo.
Thanks for the input! I do eventually plan on making some scripts and a custom web interface to interact with/expose some local services on my network once I have the basics of HTML covered, as part of a portfolio thing, so I'd like to cover my ass early and not have problems later.
What does an MCP server do?