Lvxferre [he/him]

I have two chimps within, Laziness and Hyperactivity. They smoke cigs, drink yerba, fling shit at each other, and devour the face of anyone who gets close to either.

They also devour my dreams.

  • 4 Posts
  • 562 Comments
Joined 2 years ago
Cake day: January 12th, 2024


  • In this video (Odysee link), someone asks X11 users why they’re still using it in 2025. The main answers were:

    1. DE or WM doesn’t support Wayland, or its Wayland session is currently WIP.
    2. [lack of] support for certain graphic tablets and their features.
    3. old hardware, especially old nVidia GPUs.
    4. [If I got this right] Some software expects to be able to dictate window position, and Wayland doesn’t allow it.
    5. OpenBSD.

    In light of the above, I think GNOME’s decision to drop the X11 backend is a big “meh, who cares”. If you use GNOME you’re likely not in the first case; #2 and #3 boil down to hardware support, not something DE developers can directly influence; I’m not sure about #4 and #5, however.


  • Before I even read the article, let me guess:

    1. it keeps Google under control of everything, giving it power to kick out competitors on a whim
    2. it claims it’s “to protect those disgusting pieces of shit called users from causing themselves harm”
    3. it claims Google did nothing wrong

    Now, reading the article…

    • “Google has denied any wrongdoing throughout the closely watched litigation.” - that’s #3 right off the bat
    • “Under the new proposal, Google would allow users to more easily download and install third-party app stores that meet new security and safety standards.” - who decides those standards? If Google itself, that’s #1
    • “Sameer Samat, Google’s president of Android Ecosystem, said on Tuesday the proposed changes maintained user safety” - #2.

    *Yawn*




  • TL;DR: don’t be a dumbarse like Lvxferre. Don’t waste your time reading this text; it is not worth it. It’s basically some guy building a prediction around a big assumption.

    The core claim of the article is that generative artificial¹ "intelligence"² in 2025 is roughly in the same situation as the internet in 1995. As in: back then it was impossible to predict how it would play out, both optimists and pessimists were dead wrong in their predictions, and yet the internet did have a huge impact on our lives.

    At no point does he back that core claim up. He takes it for granted. He assumes³ that genAI will revolutionise everything, internet style. Will it? I don’t know, you don’t, he doesn’t either - nobody knows, because it boils down to future events, and only a goddamn liar (no, worse - a moron) claims to know the future in this regard.

    And the fact that he’s assuming is further reinforced by his claim at the end that “We’re early in the AI revolution”.

    Then he spends a good chunk of the text trying to predict the supply and demand effects of his assumed revolution on jobs. His analysis is interesting, but at the end of the day it’s just a big red herring - it distracts the reader from the core claim he was supposed to back up, and failed to.

    Immediately afterwards, he does it again, now talking about bubbles. Same deal: interesting-ish analysis spoiled by the fact it’s a red herring, taking for granted a core claim that might be false.

    The Predictably Unpredictable Future

    Or: “The Moronic Oxymoron”.

    no one can predict with certainty what our AI future will look like. Not the tech CEOs, not the AI researchers, and certainly not some random guy pontificating on the internet. But whether we get the details right or not, our AI future is loading.

    You were so close, author. So fucking close. Then you dropped the ball by vomiting certainty one final time.

    1. I’m not sure if I should be adding quotation marks around that “artificial”; here’s some food for thought regarding that.
    2. The ones around “intelligence” stay, however. I’ll go further: I’m not wasting my time with anyone disingenuous (or moronic - same thing) enough to argue the current systems are intelligent, or babbling about definitions of intelligence.
    3. By “to assume”, in this context, I mean “to utter certainty on what one cannot reliably know”. Such as the future. Note it’s fairly distinct from “to hypothesise” (where one acknowledges a claim might be incorrect, but is still willing to play with it). Hypotheses are good, assumptions are trash.


  • [Skavau, on Piefed and Lemmy] They’re not free speech zones though. Assuming that’s what you want. Most instances will have specific rules.

    Even if the OOP wants that “freeze peach”, which boils down to “waaah! I want to scream sluurs! And I’m so much of a social failure that I hate marginalised groups!”, it would be perfectly possible to create a Piefed or Lemmy instance to host it. So even in that case the Fediverse would be an option…

    …as long as the person doesn’t feel entitled to be heard by people who don’t want to hear their shit, you know? Because, as soon as someone created such an instance, most other instances would (IMO correctly) defederate from it.




  • Disgusting and appalling. And you barely hear people in Europe and the Americas talking about this genocide; sadly it’s part of that “nobody cares about Africa, except when it benefits them” thing.

    Sudan’s army has meanwhile reportedly been supported by Egypt, Russia and Iran.

    The country is part of the so-called Quad of nations, alongside the United States, Saudi Arabia and Egypt, leading efforts to find a negotiated peace.

    Emphasis mine. If I had to guess, neither Putler’s nor Trumpler’s backyard is seeking to end the genocide, or to actually help the local population.




  • I don’t want to be that guy, but… no, wait, I am that guy.

    No current model reasons. Not even “reasoning” models - it’s just yet another misleading analogy¹, to make you believe it has better capabilities than it does.

    At the end of the day, what they do is a more complex version of predicting what the next chunk of a word should be, based on what is present in the data they processed (were “trained” with) plus some weighting². This is good enough to emulate reasoning in some cases, but it is still not reasoning.
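    To make that concrete, here’s a toy sketch of “prediction plus weighting” (the table, names and numbers are all made up by me for illustration; real models condition on far longer contexts, with billions of weights):

    ```python
    import random

    # Toy "model": a lookup table mapping a context to weighted next chunks.
    # Entirely hypothetical values - no real model works off a table this
    # small, but the core operation has the same shape: given a context,
    # emit the next chunk according to some weighting.
    WEIGHTS = {
        ("the", "cat"): {"sat": 0.7, "ran": 0.2, "barked": 0.1},
        ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    }

    def next_chunk(context: list[str]) -> str:
        """Pick the next chunk based on the weighting for the last two chunks."""
        options = WEIGHTS[tuple(context[-2:])]
        return random.choices(list(options), weights=list(options.values()))[0]

    print(next_chunk(["the", "cat"]))  # usually "sat"; occasionally "barked"
    ```

    Good enough to sound right most of the time; no reasoning anywhere in sight.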

    Some muppet might say “ackshyually emulating it and having it is the same thing lol”, or “I don’t know if I’m just emulating reasoning lmao”, as if the issue was just mental masturbation. Not really, it’s a practical matter - reasoning is a requirement to reliably reach correct conclusions based on correct premises, and the emulation is not perfect, so where the emulation breaks the results become unreliable. In other words: the model will babble nonsense³ where the emulation fails.

    For example, consider multiplications. If you correctly follow the reasoning behind multiplications, it doesn’t really matter if you’re multiplying numbers with two, 20 or even 2000 digits each - you’ll consistently reach the right result. However, if you’re simply emulating the reasoning behind multiplication, you’ll reach a point where the multiplications start failing⁴.
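    To see the difference, here’s a quick Python sketch (mine, not from the article): the schoolbook algorithm follows the reasoning, so it scales to any digit count. No amount of pattern-matching over previously seen multiplications gives you that guarantee.

    ```python
    import random

    def schoolbook_multiply(a: str, b: str) -> str:
        """Grade-school multiplication on digit strings. Because it follows
        the actual procedure, it works the same for 2, 20 or 2000 digits -
        no emulation, no breaking point."""
        result = [0] * (len(a) + len(b))
        for i, da in enumerate(reversed(a)):
            for j, db in enumerate(reversed(b)):
                result[i + j] += int(da) * int(db)
                result[i + j + 1] += result[i + j] // 10  # carry over
                result[i + j] %= 10
        digits = "".join(map(str, reversed(result))).lstrip("0")
        return digits or "0"

    # Two random 200-digit factors; the algorithm doesn't care about size.
    a = "".join(random.choices("0123456789", k=200))
    b = "".join(random.choices("0123456789", k=200))
    assert schoolbook_multiply(a, b) == str(int(a) * int(b))
    ```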

    Now, check the article. It’s pretty much a generalisation of my example above; instead of talking about multiplications, it’s talking about reasoning in general, as applied to tasks such as graph connectivity, counting asteroids, and others.

    1. A few more of those misleading analogies: “learning”, “hallucination”, “attention”, “semantic supplement”. I’d argue even calling them “large language models” is an example of that. Or “large reasoning models”; frankly both are large token models.
    2. Before someone vomits an “ackshyually”: yes I know this is an oversimplification, but it’s accurate enough for the point I’m delivering.
    3. Yes, this means the “hallucination” problem is unsolvable.[/captain obvious] And, more on-topic: while large token models might handle increasingly complex tasks, the limit will always be there, and it’ll demand more and more data to push that limit away.
    4. That’s exactly what you see ChatGPT and the likes doing; IIRC they start failing multiplications once the factors are 6+ digits long. Note Gemini “cheats” on that - it invokes Python to do this job, because the LLM itself cannot.

  • I’m not the only one, either. I think the only people left are those who see Nintendo as video-game iPhones and autopilot into a purchase, and the diehards who have dedicated Amiibo rooms.

    And even those might suffer some casualties, depending on how things go:

    • the ones treating games like luxury goods are a bit too susceptible to popular attitudes. If Nintendo goes from “wow, you got a Nintendo!” to “you got a Nintendo? Cringe. Even Twilight is a better love story.”, they’ll be quick to ditch it too.
    • diehard fans tolerate more abuse than reasonable fans, but that amount is not infinite. And Nintendo has been rather abusive when it comes to the Switch 2, including remotely bricking it for spurious reasons.


  • Immediately bookmarked it. Way better than my current approaches:

    • if I care about the person, I mention a few experiences with LLMs: how ChatGPT constantly invents RimWorld mods that don’t exist, how Bard (now Gemini) told me potatoes are active and oranges are passive (because potatoes can roll and need to react to their environment), etc. Or internet lore, like “eat a rock per day” and “put glue on pizza”.
    • if I don’t care about the person*, I superficially agree with them. Then I mark them mentally as “braindead trash” and consider avoiding them as much as possible - because this is a symptom of worse character flaws, like being gullible.

    Small note:

    What kinds of things might they be good at? // Summarize this for me

    Kinda. It doesn’t really summarise texts; it stitches chunks of them together. Sometimes changing the meaning.

    In this aspect, LLMs are only good if you wouldn’t otherwise read the text in question, or if you’re looking for a specific topic.

    Note the pattern behind the other examples: things where there’s no harm if it’s wrong, because you’re checking it anyway.

    *EDIT: relevant to note that by default, I care about people. Until they give me signs I shouldn’t.


  • Rename USA in the maps to Northern Mexico. Keep calling it “Gulf of Mexico”. Problem solved.

    …on a more serious note, the article shows what’s up here:

    But it also proved to be an early test of how institutions would — or would not — stand up to unilateral presidential action without precedent. Google, Apple, and Microsoft all got in line. But news organizations, for the most part, did not.

    Or perhaps it’s that the information ecosystem still has a little institutional wherewithal. When Trump cast the AP out of the Oval Office — and took unprecedented control of who gets to cover the president when — the White House press corps didn’t exactly rise up united in rebellion.

    That’s the main point, really: the tiny-dicked kinglet was testing the waters on which entities would fight back, and which would do as told as a sign of loyalty.

    As a silver lining I’m really glad to see the sheer amount of vitriol people are directing towards GAFAM.