• 0 Posts
  • 1.26K Comments
Joined 11 months ago
Cake day: February 10th, 2025

  • Ohhh, those are UEFI cheats. This is the reason that kernel anti-cheat games require Secure Boot.

    With Secure Boot disabled, you can use the UEFI to load a driver that performs DMA actions before the Windows kernel loads. The user then runs an innocuous-looking piece of software that communicates with the driver and sends the data to the USB device, which runs the cheat software and does the mouse manipulation (the devices are configured from the gaming PC over the same USB interface). e: This could technically be detected: there is still software running on the user’s PC that the anti-cheat could spot, and the USB device is also detectable if its firmware hasn’t been flashed to impersonate something innocuous (typically a NIC or audio device).

    This lets anybody willing to install a UEFI driver of unknown origin get DMA access without buying an expensive card. It only works on games that don’t mandate Windows 11 and Secure Boot (though a recent exploit was discovered on some motherboards [CVE-2025-11901, CVE-2025-14302, CVE-2025-14303 and CVE-2025-14304] that allowed an attacker to obtain DMA access before the IOMMU, which would otherwise restrict DMA, was properly initialized).

    That exploit would let an attacker run software on a second PC that uses the lapse to inject a hacked UEFI driver via a hardware DMA device, then simply send the memory data over USB to a separate cheating device.


  • The class of hacks that use trained object detection networks (like YOLO) can run on lightweight(-ish) hardware. It still needs to run the object recognition loop quickly; the faster your hardware, the less latency you experience, but it can work on a Raspberry Pi.
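
    A minimal sketch of that recognition loop, assuming the ultralytics package, a small pretrained model file (yolov8n.pt) and an HDMI capture device showing up as camera 0; all of those specifics are illustrative assumptions, not details of any particular cheat:

    ```python
    # Hypothetical detection loop: grab a frame of the game feed, run a
    # small YOLO model on it, and measure per-frame latency.
    import time
    import cv2
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")     # small model, fits lightweight hardware
    cap = cv2.VideoCapture(0)      # capture card carrying the game video

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        t0 = time.perf_counter()
        result = model(frame, verbose=False)[0]
        boxes = [tuple(map(int, b.xyxy[0])) for b in result.boxes]
        # this loop time is the latency the text refers to; faster
        # hardware shrinks it, but a Raspberry Pi can still keep up
        print(f"{len(boxes)} targets, {(time.perf_counter() - t0) * 1e3:.1f} ms")
    ```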

    In order to get ESP/wallhacks, you need to be able to read the game memory on the gaming PC. While there are software ways to do this, they are all detectable (assuming the game mandates Secure Boot to prevent UEFI cheats). The most reliable way is to use Direct Memory Access hardware to read the system memory without going through the operating system, which means that not even kernel anti-cheats can see it happening.
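
    For contrast, the detectable software route looks something like this minimal Linux sketch, reading another process’s memory through /proc/<pid>/mem (the PID and address are placeholder assumptions, and it requires ptrace-level privileges, exactly the kind of access an anti-cheat can observe):

    ```python
    # Hypothetical software memory read; DMA hardware obtains the same
    # bytes without ever making a request the operating system can see.
    pid = 1234              # target process id (placeholder)
    addr = 0x7F0000001000   # virtual address of interest (placeholder)

    with open(f"/proc/{pid}/mem", "rb", buffering=0) as mem:
        mem.seek(addr)
        data = mem.read(64)  # 64 bytes of game state
    print(data.hex())
    ```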

    If you’re going to use ESP, you also need to be able to see the information. You could run a second monitor, but the preferred way is to use a fuser, which merges two video streams: one carrying the game from the gaming PC and another from the PC rendering the ESP data (bounding boxes).
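
    A software analogue of what the fuser does, sketched with OpenCV: treat the ESP PC’s output (boxes on a black background) as an overlay and paint its non-black pixels over the game feed. Real kits do this in hardware on the HDMI signals; the device indexes here are assumptions:

    ```python
    # Hypothetical two-stream merge: capture 0 is the game PC's video,
    # capture 1 is the ESP render from the second PC.
    import cv2

    game = cv2.VideoCapture(0)
    esp = cv2.VideoCapture(1)

    while True:
        ok_g, g = game.read()
        ok_e, e = esp.read()
        if not (ok_g and ok_e):
            break
        e = cv2.resize(e, (g.shape[1], g.shape[0]))
        mask = e.sum(axis=2) > 30   # near-black ESP pixels stay transparent
        g[mask] = e[mask]           # everything else is drawn over the game
        cv2.imshow("fused", g)
        if cv2.waitKey(1) == 27:    # Esc quits
            break
    ```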

    Then you need some kind of hardware to receive the mouse input and pretend to be a mouse to the gaming PC. This can be something like a Raspberry Pi, but a product called Kmbox is purpose-built for it.
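
    On a Raspberry Pi this leg can be done with the USB HID gadget feature: once the Pi is configured (via configfs) to enumerate as a standard mouse, writing 4-byte reports to the gadget device moves the cursor on the gaming PC. A minimal sketch, assuming the gadget was set up with the common buttons/dx/dy/wheel report layout and exposed as /dev/hidg0:

    ```python
    # Hypothetical mouse spoofing from a Pi in USB gadget mode.
    import struct

    def move(dx: int, dy: int, buttons: int = 0) -> None:
        # report layout assumed: [buttons, dx, dy, wheel], signed bytes
        report = struct.pack("bbbb", buttons, dx, dy, 0)
        with open("/dev/hidg0", "wb") as hid:
            hid.write(report)

    move(10, -5)    # nudge right and slightly up, as the cheat software would
    ```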

    The full hardware kit is probably around $300-400 (not counting the PC/Pi) and then you have to buy/subscribe to the software that actually runs the cheats.


  • The AI bubble is certainly going to burst at some point. Assuming manufacturers are ramping up production to profit off of the higher prices, the bubble will result in a glut of supply after demand collapses. So we’ll likely see a year or two of depressed electronics prices.

    On top of that, DDR5 is worth more than gold until DDR6 comes along, and suddenly you’ll have companies that own a significant percentage of 2025’s global RAM production wanting to purchase newer hardware. I doubt all of that RAM is going to be shredded, so we may see a thriving secondary market when that happens.

    It’ll suck for the next year or two, so get used to your current PC and pray that you don’t have a RAM failure.


  • I think people are too enthralled with the current situation centered on LLMs: the massive capital bubble and the secondary effects of the datacenter build-out (power, water, etc.).

    You’re right that they do allow for the disruption of labor markets in fields that weren’t expecting computers to be able to do their jobs (to be fair, humanity had spent hundreds of millions of dollars designing language processing software and was never able to engineer it to work effectively).

    I think that when people say ‘AI’ they usually mean ChatGPT or LLMs in general. The reason LLMs are big is that neural networks require a huge amount of data to train, and the largest data repository we have (the Internet) is mostly text, images and video… so it makes sense that the first impressive models were trained on text and images/video.

    The field of robotics hasn’t had access to a large public dataset to train large models on, so we don’t see large robotics models yet, but they’re coming. You can already see it: compare robotic motion from 4 years ago, driven by a human-engineered feedback control loop… the motions are accurate but jerky and mechanical. Now look at the same company’s robot driven by a neural network trained on human kinematic data; that motion looks so natural it breaks through the uncanny valley for me.

    This is just one company generating data using human models (which is very expensive), but this is the kind of thing that will become ubiquitous and cheap given enough time.

    That’s not to mention AlphaFold, which learned to predict protein structures better than anything human-engineered. Then, using a diffusion model (the same kind used to make pictures of shrimp Jesus), another group was able to generate the RNA that would manufacture novel proteins fitting a specific receptor. Proteins are important because essentially every medication we use has to interact with a protein-based receptor, and the ability to create, visualize and test custom proteins, together with the ability to write arbitrary mRNA (see the mRNA COVID vaccine), is huge for computational protein design (the field behind the HIV vaccine candidates).

    LLMs and the capitalist bubble surrounding them are certainly an important topic, but framing it as being ‘against AI’ creates the impression that AI technology has nothing positive to offer. That reduces the number of people who study the topic or major in it in college, so in 10 years we’ll have fewer machine learning specialists than countries that aren’t drowning in this ‘AI bad’ meme.