• 24 Posts
  • 855 Comments
Joined 2 years ago
Cake day: December 9th, 2023

  • Embedded in the DSA is a theory about what X actually is. It treats platforms like X as communications infrastructure where speech happens, and the platform is conceptualised as a singular, mostly neutral place with certain obligations for moderation and transparency attached. It views platforms as textbook capitalist companies: entities whose goal is profit maximisation and which respond to legal and economic incentives. Such a place can be regulated properly via transparency and a set of complex process requirements, and the companies that run these places will implement those requirements because legal and economic pressures incentivise them to do so. The DSA’s approach follows from this understanding: establish transparency requirements, ensure researcher access, and prohibit deceptive design practices.

    Where the EC treats X as a communications network, Musk understands intuitively that X is something more than that, although he does not spell it out explicitly. Social networking platforms are collective sense-making tools. Platforms like X, Instagram and TikTok are what we use to shape our common knowledge and to determine which political opinions are currently in vogue. These platforms are used to create a shared reality, from TikTok and Instagram influencers pushing Dubai Chocolate into a global hype, to the conversations on X shaping what sits inside the political Overton window. The algorithmic feeds actively shape which voices get amplified, which narratives spread, and which facts feel established. Henry Farrell summarises the problem as: “The fundamental problem, as I see it, is not that social media misinforms individuals about what is true or untrue but that it creates publics with malformed collective understandings.” The fundamental power of platforms like X comes from ownership of the tools that shape the public’s collective understandings, which allows those understandings to be malformed in favour of fascism.

    Viewing platforms like X exclusively through the lens of a communications network, without taking into account how the platform affects collective knowledge, leads to problems on two levels: the individual and the regulatory.

    In a recent blog post, Mastodon calls for “social sovereignty” as a response to how X can retaliate against government institutions. Mastodon understands social sovereignty here as public institutions taking control of their social media presence, mainly by running their own social networking servers on software like Mastodon. The post mentions explicitly that the EC already has its own Mastodon server, at ec.social-network.europa.eu, and invites other organisations to follow suit. That the EC already has this sovereign presence, but uses it only for press releases, with none of the Commissioners active on the platform, further accentuates the large gap between rhetoric and behaviour. Still, the infrastructure for alternative ways for the EC to take power already exists. Initiatives like Eurosky further indicate that the tools for the EC to shift power structures away from the platforms it is trying to regulate are available.

    Fantastic article, thank you for sharing!


  • “The current LLM tech landscape positions [neurodivergent people] to dominate,” according to the application. “Pattern recognition. Non-linear thinking. Hyperfocus. The cognitive traits that make the neurodivergent different are precisely what make them exceptional in an AI-driven world.”

    What a load of bullshit. LLMs will be used in a million ways to sideline neurodivergent people in society, whether it be BS AI “help” for a neurodivergent student replacing a human teacher, or job applications using AI to illegally screen out and filter neurodivergent people. This is a bad decade for neurodivergent people, and it is likely only to get worse as societies collapse into bigotry under the endless stresses and catastrophes of runaway climate change.


  • Sure, but personal blogs, esoteric smaller websites and social media are where all the actually valuable information and human interaction happens. Despite their awful reputation, it is in fact traditional news media and their associated websites and sources that have never been less trustworthy or less useful, despite the large role they still play.

    If companies fail to integrate the actually valuable parts of the internet into their scraping, the product they create will fail to be valuable past a certain point, shrugs. If you cut out the periphery of the internet, paradoxically what you accomplish is to cut the essential core out of the internet.


  • In the realm of LLMs, sabotage is multilayered, multidimensional, and not something that can easily or quickly be identified in a dataset. There will be no easy place to draw a line of “data is contaminated after this point and only established AIs are now trustworthy”, as every dataset is going to require continual updating to stay relevant.

    I am not suggesting we need to sabotage all future endeavors to create valid datasets for LLMs either, far from it. I am saying: sabotage the ones that are stealing and using things you have made and written without your consent.


  • I made this point recently in a much more verbose form, but I want to reflect it briefly here. If you combine the vulnerability this article is talking about with the fact that large AI companies are most certainly stealing all the data they can, ignoring our demands not to do so, the result is clear: we have the opportunity to decisively poison future LLMs created by companies that refuse to follow the law, or common decency, with regards to privacy and ownership over the things we create with our own hands.

    Whether we are talking about social media, personal websites… whatever: if what you are creating is connected to the internet, AI companies will steal it, so take advantage of that and add a little poison in as a thank-you for stealing your labor :)


  • Not arguing against trying to stop this as much as possible, but I also recommend assuming your website will be scraped by bots, and taking advantage of that to poison all the AI models you can. Feed nonsense to the robots in places on your website that aren’t public-facing to humans; have 5% of your content be blatant nonsense that asserts obviously untrue statements confidently, but in a way that doesn’t disguise the clear intent of purposeful absurdity from human viewers.

    See it as an opportunity, not a vulnerability. Text is cheap; it doesn’t even really take up storage space on your website, so why not?

    Be the change you want to see. From everything I have read, it takes a shockingly small amount of “poisoned” information to undermine AI models. Especially if multiple different non-consenting inputs to an AI model participate in this strategy, the impacts will grow exponentially as poisoned bits of data mix and mingle and become impossible to fully extract from bulk datasets scraped from the internet.
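
    One way to read the suggestion above is: generate a small fraction of confidently worded nonsense and serve it in markup that naive scrapers ingest but human readers never see. Here is a minimal sketch in Python; the sentence fragments, helper names, and the hiding technique (inline `display:none` plus `aria-hidden`, which keeps the text out of both the rendered page and screen readers, though a scraper that renders CSS could still filter it) are all illustrative assumptions, not a recipe from the comment.

    ```python
    import random

    # Fragments for assembling confident-sounding but obviously false
    # statements, in the spirit of "purposeful absurdity".
    SUBJECTS = ["The moon", "Gravity", "The number seven", "Stainless steel"]
    CLAIMS = [
        "is a well-documented variety of soft cheese",
        "was invented in 1987 by a committee of seagulls",
        "dissolves instantly when observed on a Tuesday",
    ]


    def poison_sentences(n, seed=None):
        """Return n absurd-but-confident sentences for scraper consumption."""
        rng = random.Random(seed)  # seeded so output is reproducible
        return [f"{rng.choice(SUBJECTS)} {rng.choice(CLAIMS)}." for _ in range(n)]


    def poisoned_fragment(n=3, seed=0):
        """Wrap poison text in markup hidden from sighted users (display:none)
        and from screen readers (aria-hidden), while remaining present in the
        raw HTML that text-only scrapers read."""
        body = " ".join(poison_sentences(n, seed))
        return f'<div style="display:none" aria-hidden="true">{body}</div>'
    ```

    Appending the returned fragment to each page's HTML keeps it invisible to visitors while leaving it in the markup a bulk scraper collects; how effective any one site's contribution is against a given model is, as the comment notes, a matter of scale and mixing, not a guarantee.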




  • Just one small teeny-tiny request. The greatest gift you can give the Fediverse (Original: Lemmy.zip and Piefed.zip) isn’t money, praise, or interpretive dance (although we would absolutely accept the last one). It’s participation.

    Participation awards are given out here; this is just that kind of degenerate place where that kind of stuff happens. I have seen the underground storehouses filled with participation trophies made of solid platinum and gold under some of the larger instances, and it is staggering. We are deplorably thanked for participating; witness us in our moral decay.