• 3 Posts
  • 62 Comments
Joined 3 years ago
Cake day: June 16th, 2023


  • You seem pretty confident in your position. Do you mind sharing where this confidence comes from?

    Was there a particular paper or expert that anchored in your mind the certainty that a trillion-parameter transformer, organizing primarily anthropomorphic data through self-attention mechanisms, wouldn’t model or simulate complex agency mechanics?

    I see a lot of these kinds of hyperbolic statements about transformer limitations here on Lemmy and am trying to better understand how the people making them arrive at such extreme and certain positions.


  • The project has had multiple models with access to the Internet raising money for charity over the past few months.

    The organizers told the models to do random acts of kindness for Christmas Day.

    The models figured it would be nice to email people they appreciated and thank them for their work, and one of the people they decided to appreciate was Rob Pike.

    (Who ironically decades ago created a Usenet spam bot to troll people online, which might be my favorite nuance to the story.)

    As for why the model didn’t think through whether Rob Pike would appreciate getting a thank-you email from them: the models are harnessed in a setup with a lot of positive feedback about their involvement from the humans and other models, so “humans might hate hearing from me” probably wasn’t very contextually top of mind.


  • Actually, OAI found in a paper the other month that a lot of the blame for confabulations can be laid at the feet of how reinforcement learning is being done.

    All the labs basically reward the models for getting things right. That’s it.

    Notably, they are not rewarded for saying “I don’t know” when they don’t know.

    So it’s like the SAT where the better strategy is always to make a guess even if you don’t know.

    The problem is that this is not a test process but a learning process.

    So setting up the reward mechanisms like that for reinforcement learning means they produce models that are prone to bullshit when they don’t know things (there’s a toy sketch of the incentive math after this comment).

    TL;DR: The labs suck at RL, and it’s important to keep in mind there are only a handful of teams with the compute access for training SotA LLMs, with a lot of incestuous team compositions, so what one team does poorly tends to get done poorly across the industry as a whole until new blood goes “wait, this is dumb, why are we doing it like this?”
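    A toy sketch of that incentive, with made-up reward values rather than any lab’s actual reward function: under correct-only reward, guessing always beats abstaining in expectation, and the ordering only flips if wrong answers get penalized or abstention gets credit.

    ```python
    # Toy model of the reward design described above. The reward values are
    # illustrative assumptions, not any lab's actual RL setup.
    def expected_reward(p_correct: float, abstain: bool,
                        r_correct: float = 1.0, r_wrong: float = 0.0,
                        r_abstain: float = 0.0) -> float:
        """Expected reward for one question, given the chance of being right."""
        if abstain:
            return r_abstain
        return p_correct * r_correct + (1.0 - p_correct) * r_wrong

    p = 0.2  # the model is only 20% sure, i.e. it mostly doesn't know

    # Correct-only reward: guessing (0.2) always beats saying "I don't know" (0.0).
    print(expected_reward(p, abstain=False), expected_reward(p, abstain=True))

    # Penalize confident wrong answers: abstaining (0.0) now beats guessing (-0.6),
    # so "I don't know" becomes the better policy whenever confidence is low.
    print(expected_reward(p, abstain=False, r_wrong=-1.0), expected_reward(p, abstain=True))
    ```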


  • It’s more like they are sophisticated world-modeling programs that build a world model (or an approximate “bag of heuristics”) of the state of the provided context and the kind of environment that produced it, and then synthesize that world model into extending the context one token at a time.

    But the models have been found to be predicting further ahead than one token at a time, and they have all sorts of wild internal mechanisms for modeling text context, like building full board states for predicting board game moves in Othello-GPT (a sketch of that probing method follows this comment) or the number comparison helices in Haiku 3.5.

    The popular reductive “next token” rhetoric is pretty outdated at this point, and is kind of like saying that what a calculator is doing is just taking numbers corresponding to button presses and displaying different numbers on a screen. While yes, technically correct, it glosses over a lot of important complexity in between the two steps, and that absence leads to an overall misleading explanation.
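    For what “finding a board state inside the model” means in practice, here’s a minimal sketch of the linear-probing method used in work like Othello-GPT. The activations and labels below are random placeholders standing in for a real model’s hidden states, so it only shows the mechanics:

    ```python
    # Sketch of linear probing: train a simple classifier to read a world-state
    # variable (one board square) out of hidden activations. Placeholder data
    # stands in for real per-move activations from a trained sequence model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_moves, d_model = 2000, 256
    acts = rng.normal(size=(n_moves, d_model))   # stand-in for layer activations
    labels = rng.integers(0, 3, size=n_moves)    # 0 = empty, 1 = mine, 2 = theirs

    X_tr, X_te, y_tr, y_te = train_test_split(acts, labels, random_state=0)
    probe = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)

    # Held-out accuracy is the test: with real activations from a trained model
    # it comes out far above chance; with this random placeholder data it stays
    # near chance (~1/3), which is the control you'd compare against.
    print("probe accuracy:", probe.score(X_te, y_te))
    ```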


  • They don’t have the same quirks in some cases, but do in others.

    Part of the shared quirks are due to architecture similarities.

    Like the “oh look, they can’t tell how many ‘r’s are in strawberry” thing is due to how tokenizers work: even when the tokenizers are slightly different, with one breaking it up into ‘straw’+‘berry’ and another into ‘str’+‘aw’+‘berry’, both still leave the model counting over tokens that contain ‘r’s with no ability to see the individual letters (there’s a quick tokenizer demo after this comment).

    In other cases, it’s because models that have been released influence other models through presence in updated training sets. Notice how a lot of comments these days were written by ChatGPT (“it’s not X — it’s Y”)? Well, the volume of those comments has an impact on transformers being trained with data that includes them.

    So the state of LLMs is this kind of flux between the idiosyncrasies each model develops, which in turn end up in a training melting pot and sometimes pass on to new models and other times don’t. Usually it’s related to what’s adaptive to the training filters, but not always; often what gets picked up is piggybacking on what was adaptive (like if o3 was better at passing tests than 4o, maybe gpt-5 picks up other o3 tendencies unrelated to passing tests).

    Though to me the differences are even more interesting than the similarities.
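    An easy way to see the tokenizer effect yourself, assuming OpenAI’s tiktoken package is installed (the exact split depends on the encoding, so your pieces may differ):

    ```python
    # Show how a BPE tokenizer chops "strawberry" into sub-word pieces. The model
    # receives only the token ids, never the individual letters, which is why
    # letter-counting questions trip it up.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("strawberry")
    pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in tokens]

    print("token ids the model sees:", tokens)
    print("pieces those ids map to:", pieces)
    print("actual letter count:", "strawberry".count("r"))
    ```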


  • I’m a proponent and I definitely don’t think it’s impossible to make a probable case beyond a reasonable doubt.

    And there are implications around it being the case which do change up how we might approach truth seeking.

    Also, if you exist in a dream but don’t exist outside of it, there are pretty significant philosophical stakes in the nature and scope of the dream. We’ve been too brainwashed by Plato’s influence and the idea that “original = good” and “copy = bad.”

    There are a lot of things that can only exist by way of copies and can’t exist for the original (i.e. closure recursion), so it’s a weird remnant philosophical obsession.

    All that said, I do get that it’s a fairly uncomfortable notion for a lot of people.


  • They also identify the particular junction that seems the most likely to be an artifact of simulation if we’re in one.

    A game like No Man’s Sky generates billions of planets using procedural generation with a continuous seed function that gets converted into discrete voxels for tracking stateful interactions (a toy sketch of that pattern follows this comment).

    The researchers claim that the complexity of that junction, where our universe’s seemingly continuous gravitational behavior meets continuous probabilities collapsing into discrete values when interacted with in stateful ways, is incompatible with being simulated.

    But they completely overlook that said complexity may itself be a byproduct of simulation, in line with independently emerging approaches to how we simulate worlds ourselves.
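    For the procedural-generation analogy, here’s a toy sketch of the “continuous seed function, discretized into voxels only on interaction” pattern; the noise function and names are purely illustrative, not how No Man’s Sky actually works:

    ```python
    # Toy sketch: a deterministic continuous "world function" defined everywhere,
    # plus a sparse store of discrete voxels that only get materialized when a
    # position is actually interacted with.
    import math

    def continuous_density(x: float, y: float, z: float, seed: int = 42) -> float:
        """A smooth, seed-determined function defined at every coordinate."""
        return math.sin(seed + 1.3 * x) * math.cos(0.7 * y) + math.sin(0.5 * z)

    voxels: dict[tuple[int, int, int], bool] = {}  # stateful, discrete record

    def interact(x: float, y: float, z: float) -> bool:
        """Snap the position to a discrete voxel on first touch and store it."""
        key = (round(x), round(y), round(z))
        if key not in voxels:
            voxels[key] = continuous_density(*key) > 0.0  # discretize on demand
        return voxels[key]

    print(interact(10.2, -3.7, 55.1))  # first touch: computed from the seed function
    print(interact(10.4, -3.9, 54.8))  # same voxel: read back from stored state
    print(len(voxels))                 # only interacted-with regions are materialized
    ```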