Imagine an actor who never ages, never walks off set or demands a higher salary.

That’s the promise behind Tilly Norwood, a fully AI-generated “actress” currently being courted by Hollywood’s top talent agencies. Her synthetic presence has ignited a media firestorm, denounced as an existential threat to human performers by some and hailed as a breakthrough in digital creativity by others.

But beneath the headlines lies a deeper tension. The binaries used to debate Norwood — human versus machine, threat versus opportunity, good versus bad — flatten complex questions of art, justice and creative power into soundbites.

The question isn’t whether the future will be synthetic; it already is. Our challenge now is to ensure that it is also meaningfully human.

All agree Tilly isn’t human

Ironically, at the centre of this polarizing debate is a rare moment of agreement: all sides acknowledge that Tilly is not human.

Her creator, Eline Van der Velden, the CEO of AI production company Particle6, insists that Norwood was never meant to replace a real actor. Critics agree, albeit in protest. SAG-AFTRA, the union representing actors in the U.S., responded with:

“It’s a character generated by a computer program that was trained on the work of countless professional performers — without permission or compensation. It has no life experience to draw from, no emotion, and from what we’ve seen, audiences aren’t interested in watching computer-generated content untethered from the human experience.”

Their position is rooted in recent history: in 2023, SAG-AFTRA members went on strike, with the use of AI among the central issues. The resulting agreement secured protections around consent and compensation.

If both sides insist Tilly isn’t human, then the controversy isn’t just about what Tilly is; it’s about what she represents.

  • mindbleach@sh.itjust.works
    I would be shocked if any diffusion model could do that based on a description. Most can’t overfill a wine glass.

    Rendering over someone demonstrating the movement, as video-to-video, is obviously easier than firing up Blender. But: that’s distant from any dream of treating the program like an actress. Each model’s understanding is shallow and opinionated. You cannot rely on text instructions.

    The practical magic from video models, for the immediate future, is that your video input can be real half-assed. Two stand-ins can play a whole cast, one interaction at a time. Or a blurry pre-vis in Blender can go straight to a finished shot. At no point will current technologies be more than loose control of a cartoon character, because to these models, everything is a cartoon character. It doesn’t know the difference between an actor and a render. It just knows shinier examples with pinchier proportions move faster.
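
    For a sense of what that workflow looks like in practice, here is a minimal single-frame sketch using the Hugging Face diffusers image-to-image pipeline (a stand-in for true video-to-video; the model ID, file names, prompt and strength value are illustrative assumptions, not anything specified in the comment):

    ```python
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    # Load a standard text-guided image-to-image pipeline (any compatible checkpoint works).
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # A half-assed input: a blurry pre-vis render or a frame of a stand-in performer.
    rough = Image.open("previs_frame.png").convert("RGB").resize((512, 512))

    result = pipe(
        prompt="green ogre eating ramen, cinematic lighting",
        image=rough,
        strength=0.6,        # low = stick close to the rough frame, high = reinvent it
        guidance_scale=7.5,  # how hard to push toward the text prompt
    ).images[0]
    result.save("finished_frame.png")
    ```

    Run frame by frame (or with a dedicated video-to-video model), this is the “rough input in, shinier shot out” workflow the comment describes: the rough frame does the acting, and the model does the rendering.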

    • CerebralHawks@lemmy.dbzer0.com

      I really don’t know what we can do with AI today. I do know that what we can do today seemed a distant dream not too long ago. It’s moving fast and I can’t imagine how far along it’ll be in one year, or even five.

      • mindbleach@sh.itjust.works

        … you should probably check, before you go selling the what-ifs.

        Diffusion is a denoising algorithm. It’s just powerful enough that “noise” can mean, all the parts that don’t look like Shrek eating ramen. Show it a blank page and it’ll squint until it sees that. It’s pretty good at finding Shrek. It’s so-so at finding “eating.” You’re better off starting from a rough approximation, like video of a guy eating ramen. And it probably doesn’t hurt if he’s painted green.
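
        To make that concrete, here is a heavily simplified toy sketch of the two starting points the comment describes: pure noise versus a rough input frame. It is not a faithful DDPM or Stable Diffusion sampler; the model, the noise schedule and the update rule are placeholders for illustration only.

        ```python
        import torch

        def sample_from_noise(model, prompt_embedding, steps=50, shape=(1, 3, 64, 64)):
            """Toy denoising loop: start from a 'blank page' of pure noise and
            repeatedly remove whatever the model says doesn't match the prompt."""
            x = torch.randn(shape)
            for t in reversed(range(steps)):
                predicted_noise = model(x, t, prompt_embedding)  # what still looks wrong
                x = x - predicted_noise / steps                  # peel a little of it away
            return x

        def sample_from_rough_input(model, prompt_embedding, rough_frame, strength=0.6, steps=50):
            """Toy img2img variant: start from a partially noised copy of a rough
            approximation (the 'guy eating ramen, painted green' shortcut), so the
            model only has to clean it up rather than invent it from scratch."""
            x = rough_frame + strength * torch.randn_like(rough_frame)
            for t in reversed(range(int(steps * strength))):
                predicted_noise = model(x, t, prompt_embedding)
                x = x - predicted_noise / steps
            return x

        if __name__ == "__main__":
            # Hypothetical stand-in "model" (just nudges values toward zero) so the loop runs end to end.
            dummy_model = lambda x, t, prompt: 0.1 * x
            out = sample_from_noise(dummy_model, prompt_embedding=None, steps=20, shape=(1, 3, 8, 8))
            print(out.shape)  # torch.Size([1, 3, 8, 8])
        ```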