• Researchers demonstrate that misleading text in the real-world environment can hijack the decision-making of embodied AI systems without hacking their software.
  • Self-driving cars, autonomous robots and drones, and other AI systems that use cameras may be vulnerable to these attacks.
  • The study presents the first academic exploration of environmental indirect prompt injection attacks against embodied AI systems.
Photos

  • Zwuzelmaus@feddit.org
    link
    fedilink
    English
    arrow-up
    21
    ·
    1 day ago

    the first academic exploration

    I have read about it, years ago. And there are jokes about it that are many years old.

    This one against speed cams, for example: