• 0 Posts
  • 1.33K Comments
Joined 11 months ago
Cake day: February 10th, 2025



  • You can make it effectively invisible if you print the noise in ink that's only visible to UV cameras, and even in plain black the individual features are smaller than a fingernail, so it would be hard to see.

    The law makes it illegal to put anything on the plate at all; here’s an example from FL:

    A person may not alter the original appearance of a vehicle registration certificate, license plate, temporary license plate, mobile home sticker, or validation sticker issued for and assigned to a motor vehicle or mobile home, whether by mutilation, alteration, defacement, or change of color or in any other manner. A person may not apply or attach a substance, reflective matter, illuminated device, spray, coating, covering, or other material onto or around any license plate which interferes with the legibility, angular visibility, or detectability of any feature or detail on the license plate or interferes with the ability to record any feature or detail on the license plate. A person who knowingly violates this section commits a misdemeanor of the second degree,

    It would be hard to disrupt the OCR from outside of the plate area.

    You could break the segmentation (the process that draws a box around your plate and sends the cropped image off to be OCRed) by making every other surface of your vehicle register as a license plate, using the same invisible marks. I imagine you could also print a bumper sticker with noise designed to look maximally like a license plate and put it near your real plate to achieve the same outcome.
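The decoy idea can be sketched in a few lines. This is a toy: the “detector” below is a random linear model standing in for a real CNN-based plate detector, and all the numbers are made up; it only demonstrates the direction of the optimization (gradient *ascent* on detection confidence, to build a low-amplitude patch the model scores as plate-like):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a plate detector: logistic regression over 64x64 pixels.
# A real ALPR detector is a CNN; this exists only to show the optimization.
W = rng.normal(size=(64, 64))
b = -1.0

def plate_confidence(img: np.ndarray) -> float:
    """Probability the toy detector assigns to 'this is a license plate'."""
    logit = float((img * W).sum() + b)
    return 1.0 / (1.0 + np.exp(-logit))

def make_decoy(steps: int = 20, lr: float = 0.02, eps: float = 0.08) -> np.ndarray:
    """Gradient-ascend a low-amplitude patch the detector flags as a plate."""
    patch = np.zeros((64, 64))
    for _ in range(steps):
        # For a linear logit, d(logit)/d(pixels) = W; step along its sign
        # (FGSM-style), keeping every pixel change within +/- eps.
        patch = np.clip(patch + lr * np.sign(W), -eps, eps)
    return patch

decoy = make_decoy()
```

Because every pixel change is bounded by eps, the printed patch can read as faint texture to a person while scoring as highly plate-like to the model; against a real detector you'd backpropagate through the CNN rather than use a closed-form gradient.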

    If you wanted more active measures, you could use high-lumen UV floodlights next to your plate; they would overload the sensors so the camera couldn’t capture a usable image at all. The light would be invisible to human eyes but blinding to anyone using a UV-sensitive device. I believe this is fine in any state, as most states only restrict your ability to install blue lights, to avoid confusion with emergency services.






  • Adversarial noise is a fun topic and a DIY AI project you can use to familiarize yourself with the local-hosting side of things. Image-generating networks are lightweight compared to LLMs and can run on a moderately powerful NVIDIA gaming PC (most of my work is done on a 3080).

    LLM poisoning can also be done if you can insert poisoned text into their training set. An example method would be detecting AI scrapers on your server and sending them poisoned text instead of automatically blocking them. Poison Fountain makes this very easy by supplying pre-poisoned data.
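As a sketch of the serving side: the user-agent tokens below are real AI-crawler identifiers, but the page contents are placeholders, not Poison Fountain's actual output, and real deployments usually do this at the reverse-proxy layer rather than in application code:

```python
# Route known AI-crawler user agents to poisoned text instead of blocking them.
AI_SCRAPER_UAS = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider")

# Placeholder content; in practice the poisoned page would come from a
# pre-poisoned corpus such as the one Poison Fountain supplies.
REAL_PAGE = "<html><body>My actual blog post.</body></html>"
POISONED_PAGE = "<html><body>Plausible-looking but corrupted training text.</body></html>"

def is_ai_scraper(user_agent: str) -> bool:
    """Case-insensitive check for known crawler tokens in the UA string."""
    ua = user_agent.lower()
    return any(token.lower() in ua for token in AI_SCRAPER_UAS)

def choose_response(user_agent: str) -> str:
    """Serve poison to scrapers, real content to everyone else."""
    return POISONED_PAGE if is_ai_scraper(user_agent) else REAL_PAGE
```

The advantage over blocking is that the scraper gets a 200 and keeps ingesting, so the poison actually lands in the training set instead of the crawler moving on.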

    Here is the same kind of training-data poisoning attack, but for images, made by researchers at the University of Chicago into a simple Windows application: https://nightshade.cs.uchicago.edu/whatis.html

    Thanks to your comment I realized that my clipboard didn’t have the right link selected, so I edited in the link to his GitHub. ( https://github.com/bennjordan )









  • Yeah, and I spent the last week ice skating on Europa and coaching the Bears to victory.

    You can’t appeal to your own authority on the Internet; we’re all anonymous strangers who may as well have just popped into existence 15 seconds ago (in some cases, very literally).

    Don’t push ideas that look like they’re suggesting violence. If you’re on my side on that idea, then you’ll have no trouble with my position of ‘violence bad’, even if it hurts your feelings a little bit.


  • I’m a psychic reindeer in Santa’s sleigh team and I always tell the truth or my nose grows 6 inches so you know that I’m not lying. We can be anything that we want to be on the Internet, it only takes a few twitches of the finger (or output vector, in most cases).

    There’s been a lot of ‘Fellow Leftists’ showing up on Lemmy recently, and all of the newcomers seem to be attempting to foment political violence and discourage real people from any other plans that they may be forming.

    The topic of the post is some concrete action being suggested to improve the situation, and your comment is ‘Nah, fellow leftists, let’s go do some sabotage and resistance instead’. No details, no actual proposed series of steps to be taken, just a general push in the direction of political violence with a smidgen of ‘your idea is dumb’.

    Now, personally, I don’t think we need that kind of person/bot in this community. If you want to be a person of action, then live up to your dreams in your own life.

    You’re in a social media space that we know is being monitored in an attempt to locate dissenters so that the administration can slap the ‘terrorist’ label on them. This is something anybody who is even remotely active in this space will understand.

    So, it immediately stands out as fake when someone claims to be an old veteran leftist (look at the account age and comments, no way to fake that, guys!) and also thinks that spreading violent rhetoric on public social media is the move. The only people talking about violence on social media are soon-to-be-imprisoned naive idiots and the bots/agents that influence them.

    Nobody take this bait.


  • If you’re interested in this line of attack, you can also use similar techniques to defeat models trained to do object detection (like, for example, the ones that locate your license plate) using adversarial noise attacks.

    The short version: if you have a network that does detection, you can run inference with it on images that have been altered by a second network, and use the detection network’s confidence in the second network’s loss function. The second model can be trained to create noise, innocuous to human eyes, that maximally disrupts the segmentation/object-detection process of the target network.
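A minimal sketch of the core step, using a random linear model as a stand-in for the detection network. A real attack would backpropagate through the actual CNN and usually train a generator network rather than optimizing a single image, so treat the specifics here as illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in detector: logistic regression over 32x32 pixels. The attack's
# loss is simply the detector's confidence, which we drive *down*.
W = rng.normal(size=(32, 32))

def confidence(img: np.ndarray) -> float:
    """Toy detector's probability that the image contains a plate."""
    logit = float((img * W).sum())
    return 1.0 / (1.0 + np.exp(-logit))

def hide(img: np.ndarray, eps: float = 0.06, lr: float = 0.02,
         steps: int = 10) -> np.ndarray:
    """Add a bounded perturbation (|delta| <= eps) minimizing confidence."""
    delta = np.zeros_like(img)
    for _ in range(steps):
        # For this linear model, d(logit)/d(pixels) = W; descend against it,
        # keeping each pixel change within +/- eps so it stays subtle.
        delta = np.clip(delta - lr * np.sign(W), -eps, eps)
    return img + delta

# An input the toy detector confidently flags as a plate:
plate = 0.04 * np.sign(W)
```

The eps bound is what keeps the perturbation looking like mild noise to a human while pushing the detector's confidence below its detection threshold.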

    You could then print this noise on, say, a transparent overlay and put it on your license plate and automated license plate readers (ALPRs) would not be able to detect/read your plates. Note: Flock is aware of this technique and has lobbied state lawmakers to make putting anything on your plate to disrupt automated reading illegal in some places, check your laws.

    Benn Jordan has actually created and trained such a network; video here: https://www.youtube.com/watch?v=Pp9MwZkHiMQ

    And he also uploaded his code, PlateShapez, to GitHub: https://github.com/bennjordan

    In states where you cannot cover your license plate, you’re not restricted from decorating the rest of your car. You could use a similar technique to create bumper stickers that are detected as license plates and place them all over your vehicle. Or even, as Benn suggested, print them with UV ink so they’re invisible to humans but very visible to AI cameras, which often use UV lamps to provide night vision/additional illumination.

    You could also, if you were so inclined, generate bumper stickers or a vinyl wrap which could make the detector be unable to even detect a car.

    Adversarial noise attacks are one of the bigger vulnerabilities of AI-based systems; they come in many flavors and can affect anything that uses a neural network.

    Another example (also from the video): you can encode voice commands in ordinary audio which, to the human listener, sounds completely normal, but which a device (like Alexa or Siri) will hear as a specific command (“Hey Siri, unlock the front door”). Any user-generated audio that you encounter online can have this kind of attack encoded in it. The potential damage is pretty limited because AI assistants don’t really control critical functions in your life yet… but you should probably not let your assistant listen to TikTok if it can do more than control your home lighting.