• 0 Posts
  • 10 Comments
Joined 2 years ago
Cake day: June 13th, 2023

  • No worries mate, we can’t all be experts in every field and on every topic!

    Besides, there are other AI models that are relatively small and depend more on processing power than on RAM. For example, there’s a bunch of audio analysis tools that don’t just transcribe speech but also diarise it (split it up by speaker) and extract emotional metadata (e.g. certain models can detect sarcasm quite well, while others spot general emotions like happiness, sadness or anger). Image categorisation models are also super tiny, though usually you’d want to load them into the DSP-connected NPU of appropriate hardware (e.g. a newer-model “smart” CCTV camera uses a SoC with an NPU that detection models get loaded into, so detecting people, cars, animals, etc. happens onboard instead of on your NVR). There’s a quick sketch at the end of this comment showing just how lightweight the transcription side can be.

    Also, by my count, even somewhat larger training workloads, such as micro wake word training, would fit into the 192MB of combined V-Cache.
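
    To give a feel for how small these models are in practice, here’s a minimal transcription sketch using the tiny Whisper variant via the faster-whisper library (the audio file name is just a placeholder, and diarisation would layer something like pyannote.audio on top of this):

    ```python
    from faster_whisper import WhisperModel

    # The "tiny" variant is only a few tens of MB once quantised, so it runs
    # comfortably on CPU and fits in RAM with plenty of room to spare.
    model = WhisperModel("tiny", device="cpu", compute_type="int8")

    # "meeting.wav" is a placeholder for whatever audio you want analysed.
    segments, info = model.transcribe("meeting.wav")

    for seg in segments:
        print(f"[{seg.start:.1f}s - {seg.end:.1f}s] {seg.text}")
    ```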


  • Well, yeah, when management is made up of dumbasses, you get this. And I’d argue some 90% of all management is absolute waffle when it comes to making good decisions.

    AI can and does accelerate workloads if used right. It’s a tool, not a replacement for people. You still need someone who can pick the right models, research the right approaches and so on.

    What companies need to realise is that AI accelerating things doesn’t mean you can cut your workforce by 70-90% and still keep the same deadlines; it means that with the same workforce you can deliver things 3-4 times faster. And faster delivery means new products (be it a new feature or a truly brand new standalone product) have a lower cost basis even though the same number of people worked on them, and the quicker cadence shortens the idea-to-profit timeline.


  • It actually makes some sense.

    On my 7950X3D setup the main issue was always making sure to pin games to a specific CCD, and AMD’s tooling is… quite crap at that. Identifying the right CCD was always problematic for me; the sketch at the end of this comment shows one way to do it programmatically.

    Eliminating this by adding V-Cache to both CCDs, so it doesn’t matter which one you pin to, is a good workaround. And IIRC V-Cache also helps certain (local) AI workloads, meaning running a game next to such a model won’t cause issues, as each gets its own CCD to run on.
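
    Here’s a minimal sketch of that identification step, assuming Linux and the usual sysfs layout: on a 7950X3D, the CCD whose cores report the larger shared L3 is the V-Cache die, and you can pin a process straight to it.

    ```python
    import os
    from pathlib import Path

    def l3_domains():
        """Group logical CPUs by shared L3 cache and note each domain's size in KiB."""
        domains = {}
        for cpu in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
            l3 = cpu / "cache" / "index3"
            if not l3.exists():
                continue
            shared = (l3 / "shared_cpu_list").read_text().strip()  # e.g. "0-7,16-23"
            size_kib = int((l3 / "size").read_text().strip().rstrip("K"))
            domains[shared] = size_kib
        return domains

    def pin_to_vcache_ccd(pid):
        """Pin a process to whichever CCD reports the biggest shared L3 (the V-Cache die)."""
        best = max(l3_domains().items(), key=lambda kv: kv[1])[0]
        cores = set()
        for part in best.split(","):
            lo, _, hi = part.partition("-")
            cores.update(range(int(lo), int(hi or lo) + 1))
        os.sched_setaffinity(pid, cores)

    # Usage, with a hypothetical game PID:
    # pin_to_vcache_ccd(12345)
    ```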


  • Doing some level of local computing on certain devices is useful, especially ones you directly interact with and where voice interfacing matters (say, a TV).

    I think the best approach is connected edge computing - combining some on-device computing with an edge computing hub, and switching which side takes care of business depending on the needs of the task. There’s a rough sketch of that routing idea at the end of this comment.

    Say, having the ability to turn off the oven when you smell smoke (or when you remember you haven’t set a timer and the food is ready), simply by talking to your washing machine while you’re loading it, is a useful perk. Sure, it’s an edge case, but the moment you need it, even just once, you’ll appreciate it.
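
    A minimal sketch of that routing, assuming a made-up intent list and hub endpoint (none of these names come from a real product):

    ```python
    import json
    import urllib.request

    # Hypothetical: intents simple enough for the device itself to handle.
    LOCAL_INTENTS = {"timer.set", "status.report"}

    # Hypothetical edge-hub endpoint on the local network.
    HUB_URL = "http://edge-hub.local:8080/intent"

    def handle_intent(intent: str, payload: dict) -> dict:
        """Run trivial intents on-device; forward anything heavier to the edge hub."""
        if intent in LOCAL_INTENTS:
            # On-device path, e.g. starting a timer on this appliance.
            return {"handled": "local", "intent": intent}
        req = urllib.request.Request(
            HUB_URL,
            data=json.dumps({"intent": intent, **payload}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return json.load(resp)

    # Spoken to the washing machine, executed against the oven via the hub:
    # handle_intent("oven.off", {"room": "kitchen"})
    ```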