Hey. Yeah you. No don’t look over your shoulder. I’m not talking to the guy behind you. Look, we’ve been meaning to tell you that you’re doing a pretty good job out there. Proud of you. Keep up the good work.

  • I am young and have a computer science degree, and I still struggle at times. I get it.

    For games, I’d try installing Steam and running them through Steam if that’s how you’d normally do it on Windows. Then for me the main settings to play with (on a game-by-game basis) are setting the game to use Proton (in the game’s compatibility settings) and whether or not to use Steam Input for controller support.

    If you’re trying to install a non-Steam game, maybe look into Lutris. I’m on the techy side, though, and I hear a lot of less techy people like the Heroic Games Launcher.

    Good luck. I think it’s fair to run out of energy while trying to get the right combo, but if ya stick with it I’m confident you’ll find the setup that works for you.





  • Yeah, setting up openwebui with llamacpp is pretty easy. I would start by building llamacpp: clone it from GitHub and follow the short build guide linked in the README. I don’t have a Mac, but I’ve found building it to be pretty simple, just one or two commands for me.

    Once it’s built, just run llama-server with the right flags telling it which model to load. I think it can take Hugging Face links, but I always just download GGUF files. There’s good documentation for llama-server in the README. You also specify a port when you run llama-server.

    Then you just add http://127.0.0.1:PORT_YOU_CHOSE/v1 as one of your OpenAI API connections in the openwebui admin panel.
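    If you want to sanity-check the endpoint before wiring it into openwebui, something like this works since llama-server speaks the OpenAI API (just a rough sketch; the port and model name are placeholders for whatever you started llama-server with):

    ```python
    # Quick sanity check against a local llama-server endpoint.
    # Assumes `pip install openai`; the port and model name below are placeholders.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://127.0.0.1:8080/v1",  # swap 8080 for the port you chose
        api_key="none",  # llama-server doesn't need a real key unless you configure one
    )

    reply = client.chat.completions.create(
        model="local-model",  # placeholder; llama-server serves whatever model it loaded
        messages=[{"role": "user", "content": "Say hi in one sentence."}],
    )
    print(reply.choices[0].message.content)
    ```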


    Separately, if you want to be able to swap models on the fly, you can add llama-swap into the mix. I’d look into it after you get llamacpp running and are somewhat comfy with it. Coming from ollama you’ll absolutely want it, though; at this point it’s a full replacement IMO.
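    To give a feel for how the swapping works: llama-swap sits in front as one OpenAI-compatible endpoint, and the model name in each request decides which llama-server instance it spins up behind the scenes. Rough sketch only; the port and model names are placeholders for whatever you define in your llama-swap config:

    ```python
    # Minimal sketch of on-the-fly model swapping through llama-swap.
    # Assumes llama-swap listens on 8080 and that "small-model" and "big-model"
    # are names from its config; the port and both names are placeholders.
    from openai import OpenAI

    client = OpenAI(base_url="http://127.0.0.1:8080/v1", api_key="none")

    for model_name in ["small-model", "big-model"]:
        # llama-swap starts (or switches to) the matching llama-server based on this name
        reply = client.chat.completions.create(
            model=model_name,
            messages=[{"role": "user", "content": "Describe yourself in one line."}],
        )
        print(model_name, "->", reply.choices[0].message.content)
    ```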


  • With 128GB of RAM on a Mac, GLM 4.5 Air is going to be one of your best options. You could run it anywhere from Q5 to Q8 depending on how you wanna manage the speed-to-quality tradeoff.
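    For a rough sense of why that range fits: weight size is roughly parameter count times bits per weight divided by 8. Back-of-envelope only; the ~106B figure for GLM 4.5 Air and the bits-per-weight numbers are approximations, and context/KV cache overhead comes on top:

    ```python
    # Back-of-envelope GGUF size: params * bits_per_weight / 8.
    # The 106e9 parameter count and bpw values are rough approximations;
    # actual file sizes vary by quant mix, and KV cache / OS overhead adds more.
    params = 106e9
    for quant, bpw in [("Q5_K_M", 5.5), ("Q6_K", 6.6), ("Q8_0", 8.5)]:
        print(f"{quant}: ~{params * bpw / 8 / 1e9:.0f} GB")
    # -> Q5_K_M: ~73 GB, Q6_K: ~87 GB, Q8_0: ~113 GB, all under 128 GB
    ```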

    I have a different system that likely runs it slower than yours will, and I get 5 T/s generation, which is just about the speed I read at (using Q8).

    I do hear that ollama may be having issues with that model though, so you may have to wait for an update to it.

    I use llamacpp and llama-swap with openwebui, so if you want any tips on switching over I’d be happy to help. Llamacpp is usually one of the first projects to start supporting new models when they come out.

    Edit: just reread your post. I was thinking it was a newer Mac lol. This may be a slow model for you, but I do think it’ll be one of the best you can run.