I have been thinking a lot about digital sovereignty lately and how quickly the internet is turning into a weird blend of surreal slop and centralized control. It feels like we are losing the ability to tell what is real because of how easy it is for trillionaire tech companies to flood our feeds with whatever they want.

Specifically, I am curious about what I call “kirkification”: the way these tools make it trivial to warp a person’s digital identity into a caricature. It starts with a joke or a face swap, but it ends with people losing control over how they are perceived online.

If we want to protect ourselves and our local communities from being manipulated by these black-box models, how do we actually do it?

I want to know if anyone here has tried moving away from the cloud toward sovereign compute. Is hosting our own communication and media solutions actually a viable way to starve these massive models of our data? Can a small town actually manage its own digital utility instead of just being a data farm for big tech?

Also how do we even explain this to normal people who are not extremely online? How can we help neighbors or the elderly recognize when they are being nudged by an algorithm or seeing a digital caricature?

It seems like we should be aiming for a world of a million millionaires rather than just a room full of trillionaires, but technical hurdles like ISP throttling and protocol issues make that bridge hard to build.

Has anyone here successfully implemented local-first solutions that reduced their reliance on big tech AI? I am looking for ways to foster cognitive immunity and keep our data grounded in meatspace.

  • Shrouded0603@feddit.org · 7 points · 2 days ago

    Not a fan of slop, but I do think it’s funny how kirkification actually seems to poison AI training data, from what I’ve heard

    • h333d@lemmy.world (OP) · 15 points · 2 days ago

      Lmao yeah there’s a beautiful irony - the slop machine is eating itself. Models trained on synthetic data degrade over time, what researchers call “model collapse” or “Habsburg AI.” Each generation loses fidelity like photocopies of photocopies.

      Kirkification specifically floods datasets with corrupted representations. When the model can’t distinguish real images from AI-generated variations, its accuracy breaks down. You’re injecting noise at scale.

      This is accidentally accelerationist - the error becomes the virus. The machine chokes on its own output. Tech companies are terrified, desperately trying to watermark and detect synthetic content, but it’s too late. How much of Reddit’s “authentic conversation” sold to Google is actually ChatGPT from 2023?

      It won’t stop slop generation, but it might render the whole system useless enough that people abandon it. Strategic failure at scale. Kind of poetic honestly.
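      The photocopy analogy can be sketched as a toy experiment: fit a trivial “model” (here just a Gaussian) to some data, sample from the fit, then refit on nothing but those samples, over and over. A minimal Python sketch with names of my own invention - not how real model collapse is measured, just the recursive-training dynamic where each generation sees only the previous generation’s output:

      ```python
      # Toy "photocopy of a photocopy" demo. Each generation is fit only to
      # samples drawn from the previous generation's fit, so estimation error
      # compounds instead of averaging out against real data.
      import random
      import statistics

      def fit_and_resample(samples, n):
          """Fit mean/stdev to the samples, then draw n fresh points from that fit."""
          mu = statistics.mean(samples)
          sigma = statistics.stdev(samples)
          return [random.gauss(mu, sigma) for _ in range(n)]

      random.seed(42)
      data = [random.gauss(0.0, 1.0) for _ in range(50)]  # generation 0: "real" data

      spreads = [statistics.stdev(data)]
      for _ in range(30):
          # Train only on the previous generation's synthetic output.
          data = fit_and_resample(data, 50)
          spreads.append(statistics.stdev(data))

      # The estimated spread does a random walk across generations; with small
      # sample sizes it typically drifts away from the original distribution
      # and the tails of the data disappear.
      print(f"generation  0 stdev: {spreads[0]:.3f}")
      print(f"generation 30 stdev: {spreads[-1]:.3f}")
      ```

      The small per-generation sample size (50 points) is deliberate: it makes the compounding estimation error visible quickly, which is the same mechanism, writ small, as training an LLM on a web increasingly full of its predecessors’ output.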

      • brownmustardminion@lemmy.ml · 2 points · 16 hours ago

        AI is the poetic culmination of where society has been heading for decades.

        A photocopy of a photocopy.

        AI is literally an acceleration of Jean Baudrillard’s theories on modern culture.

        Not that I think that excuses it. If anything it’s more depressing.

      • TranquilTurbulence@lemmy.zip · 1 point · edited · 24 hours ago

        Same goes for various tech articles too. You can taste the GPT while reading them.

        Who knows how many hallucinations are now spread publicly and fed to the next generation of LLMs as facts. I have a feeling that the factual accuracy of the output is only going to go down as more and more of the training data contains serious mistakes.