Technology fan, Linux user, gamer, 3D animation hobbyist

Also at:

[email protected]

[email protected]

  • 5 Posts
  • 28 Comments
Joined 2 years ago
Cake day: July 24th, 2023


  • And we make it worse by saying “Just pick one. It doesn’t matter what instance you’re on because they’re federated.”

    Some people are going to be very upset to find their local feed is full of content they don’t agree with. Or when they go out into the fediverse and people automatically assume they’re an A-hole because of the instance they’re from. I mean, it’s generally not that bad, but there are a few instances that are that bad.

    And for people like me who gravitate toward smaller instances, that instance is probably gonna die. That’s happened to me twice already, four times if you count Mastodon and PeerTube.








  • “I mean I don’t really see the point here.”

    There isn’t one. I guess I should have made that more clear. Sorry. 🫤

    “And I’m not sure if I’m missing something …”

    Nope, just a guy with too much time on his hands. I mean, I hope someone out there found it a little informative. A lot of people think, “If Ollama doesn’t work, then I’m out of luck.” I’m just trying to let people know there are other options.

    Yes, the Nvidia cards get 30+ t/s together or individually, but the point of this was to see if AMD and Nvidia could work together. Now that this works, I might actually buy an AMD GPU.
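
    For anyone who wants to try the same mixed-vendor setup, here’s a rough sketch of how llama.cpp’s Vulkan backend can be built and pointed at multiple GPUs. The build flag and CLI options are real llama.cpp options on recent builds, but the model path is a placeholder, and exact flag names can shift between versions.

    ```shell
    # Build llama.cpp with the Vulkan backend (needs the Vulkan SDK installed).
    cmake -B build -DGGML_VULKAN=ON
    cmake --build build --config Release

    # The Vulkan backend enumerates every device the Vulkan loader exposes,
    # so an AMD card and an Nvidia card can show up side by side.
    ./build/bin/llama-cli --list-devices

    # Offload all layers and split them across the visible GPUs
    # (model path is a placeholder).
    ./build/bin/llama-cli -m ./models/my-model.gguf -ngl 99 --split-mode layer
    ```

    The nice part of going through Vulkan instead of CUDA or ROCm is that neither vendor’s compute stack has to cooperate; both cards just look like Vulkan devices.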


  • No, I didn’t change my drivers at all. I figured it would probably work with older drivers, but then the problem is my 4060 won’t work with anything older than 545 (I think).

    I do have another PC I could put the 770 in. That might be worth trying.

    It just kills me to have these old cards sitting around doing nothing. The 770 was kind of a beast in its time. But that’s life.

    I guess I could donate them to some Peertuber who does retro videos or something.

    A few pics of them in their heyday (ok, they were already past their prime at the time)


  • “I’ve read you can force the new Vulkan driver on it with some kernel flags.”

    Not gonna lie, that sounds beyond my scope. Once I got llama.cpp compiled, everything just worked. I would have no idea how to troubleshoot anything.

    I saw a video where someone got the new Indiana Jones running on a Vega 64, so there’s really no telling what’s possible on AMD hardware. They put so much effort into designing chips, but so little into supporting them.

    My laptop (RX 7600S) gets more tokens/sec with Vulkan than with ROCm. I don’t know what that’s all about.
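
    For reference, the kernel flags mentioned above usually mean moving an older GCN-era AMD card from the legacy radeon driver to amdgpu, which is what the Mesa Vulkan driver (RADV) requires. The comment doesn’t say which card it was about, so this is just the commonly documented recipe; the exact flags depend on the GPU generation.

    ```
    # /etc/default/grub: kernel parameters that hand an old GCN card
    # from the legacy "radeon" driver to "amdgpu" (needed for RADV/Vulkan).
    # SI = GCN 1.0 (HD 7000 era), CIK = GCN 1.1 (e.g. R9 290).
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash radeon.si_support=0 amdgpu.si_support=1 radeon.cik_support=0 amdgpu.cik_support=1"
    ```

    After editing, regenerate the GRUB config (sudo update-grub, or your distro’s equivalent) and reboot.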






  • Fair enough. I’ve got a 3060 and a 4060 Ti installed, and I need to buy a new PSU to power a 1080 Ti externally.

    I think the only thing I use that requires CUDA is Daz Studio, and I’m actually starting to lose interest in that.

    The thing about Radeon is that it’s not just less money; it’s also less performance, plus more work to set up. I bought a Radeon laptop just so I could try ROCm. It works, but it’s no walk in the park. And when you’re done, you don’t get any benefit over using Nvidia. If AMD at least gave us more VRAM, that would be something.