• 0 Posts
  • 304 Comments
Joined 2 years ago
Cake day: July 14th, 2023

  • Yeah, we need to be careful about distinguishing policy objectives from policy language.

    “Hold megacorps responsible for harmful algorithms” is a good policy objective.

    How we hold them responsible is an open question. Legal recourse is just one option. And it’s an option that risks collateral damage.

    But why are they able to profit from harmful products in the first place? Lack of meaningful competition.

    It really all comes back to the enshittification thesis. Unless we force these firms to open themselves up to competition, they have no reason to stop abusing their customers.

    “We’ll get sued” gives them a reason. “They’ll switch to a competitor’s service” also gives them a reason, and one they’re more likely to respect — if they see it as a real possibility.


  • It’s pretty apt, honestly. It’s just the next step of the climate-denial and cancer-denial playbooks.

    We know that the tech bosses are aware of how harmful their stuff is. We know that they hire experts specifically to make their stuff as addictive as possible. We know they bribe the hell out of politicians to avoid getting regulated. We know they cook their books and launder money like crazy. We know their financial models are predicated on getting everyone to use an ever-increasing dose of their stuff. We know that people suffer horrific conditions to help build their devices and moderate their content cesspools.

    It may seem crass to compare tech bosses to narco kingpins. But that’s because their methods are crass. They want to seem sophisticated and unique. But they’re not.


  • Code completion is probably a gray area.

    Those models generally have much smaller context windows, so the energy concern isn’t quite as extreme.

    You could also reasonably claim that the model is legally in the clear as far as licensing goes, if the training data was entirely open-source code under licenses that don’t require attribution or share-alike and do allow commercial use. (A big “if”.)

    All of that to say: I don’t think I’d label anti-AI devs who use code completion as hypocrites. The general sentiment is less about “what the technology does” and more about “who it does it to”. Code completion, for the most part, isn’t deskilling labor or turning experts into chatbot-wrangling accountability sinks.

    Like, I don’t think the Luddites would’ve had a problem with an artisan using a knitting frame in their own home. They were too busy fighting against factories that locked children inside for 18-hour shifts, where they were maimed by the machines or died trapped in fires. It was never the technology itself, but the social order imposed through the technology.



  • For a majority of men, probably, but not an overwhelming majority. Which still leaves a ton of people you could be compatible with.

    Don’t overthink it and try to be something you’re not. Just take your time, get to know people, be curious and honest. Stay true to yourself. Don’t apologize and adapt just because you assume you have to.

    You’re not trying to date everyone, just the right one. So why bother with what the rest think?

    You’ll find someone who “just works” with who you already are. When you do, your dynamic will come naturally out of your unique relationship, and it won’t be precisely the same as any timeshare sex model you might have tried to plan out ahead of time on Lemmy.


  • “only a tool”

    “The essence of technology is by no means anything technological”

    Every tool contains within it a philosophy — a particular way of seeing the world.

    But digital technologies especially give the developer the ability to embed their values into the tools. Like, is DoorDash just a tool?