“Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95,” the MIT article explains.
That is precisely how I do math. I feel a little targeted that they called this odd.
But here’s the really funky bit. If you ask Claude how it got the correct answer of 95, it will apparently tell you, “I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95.” But that actually only reflects common answers in its training data as to how the sum might be completed, as opposed to what it actually did.
This is not surprising. LLMs are not designed to have any introspection capabilities.
Introspection could probably be tacked onto existing architectures in a few different ways, but as far as I know nobody’s done it yet. It will be interesting to see how that might change LLM behavior.
To understand what’s actually happening, Anthropic’s researchers developed a new technique, called circuit tracing, to track the decision-making processes inside a large language model step-by-step. They then applied it to their own Claude 3.5 Haiku LLM.
Anthropic says its approach was inspired by the brain scanning techniques used in neuroscience and can identify components of the model that are active at different times. In other words, it’s a little like a brain scanner spotting which parts of the brain are firing during a cognitive process.
This is why LLMs are so patchy at math. (Image credit: Anthropic)
Anthropic made lots of intriguing discoveries using this approach, not least of which is why LLMs are so terrible at basic mathematics. “Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95,” the MIT article explains.
But here’s the really funky bit. If you ask Claude how it got the correct answer of 95, it will apparently tell you, “I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95.” But that actually only reflects common answers in its training data as to how the sum might be completed, as opposed to what it actually did.
In other words, not only does the model use a very, very odd method to do the maths, you can’t trust its explanations as to what it has just done. That’s significant and shows that model outputs can not be relied upon when designing guardrails for AI. Their internal workings need to be understood, too.
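For what it’s worth, the strategy described there is concrete enough to sketch in a few lines of Python. This is purely illustrative (my own toy, with a made-up fuzzy estimate), not Anthropic’s actual circuits: one path gets the rough size of the answer, another nails down the last digit, and the two are combined at the end.

```python
def rough_path(a: int, b: int) -> int:
    """Fuzzy magnitude estimate -- the "92ish" path. Deliberately imprecise."""
    return (a // 10) * 10 + (b // 10) * 10 + 10   # 36 + 59 -> "about 90"

def last_digit_path(a: int, b: int) -> int:
    """Exact ones-digit path: 6 + 9 must end in 5."""
    return (a % 10 + b % 10) % 10

def combine(a: int, b: int) -> int:
    est, digit = rough_path(a, b), last_digit_path(a, b)
    # Pick the value near the estimate whose last digit matches. The narrow
    # window is also why this style of adding is fragile: if the fuzzy
    # estimate drifts too far, the wrong candidate wins.
    return next(n for n in range(est - 4, est + 6) if n % 10 == digit)

print(combine(36, 59))   # 95
```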
Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.
“The planning thing in poems blew me away,” says Batson. “Instead of at the very last minute trying to make the rhyme make sense, it knows where it’s going.”
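That ordering is easy to picture with a toy sketch. To be clear, this is nothing like the model’s actual mechanism, and the little rhyme table below is invented; it only illustrates “choose the end word first, then build the line to reach it.”

```python
# Made-up mini rhyme table, purely for illustration.
RHYMES = {"night": ["light", "bright", "flight"],
          "moon": ["soon", "tune", "June"]}

def write_second_line(first_line: str) -> str:
    end_word = first_line.rstrip(".!?").split()[-1].lower()
    target = RHYMES[end_word][0]                      # step 1: pick the rhyme word
    return f"and carried us on toward the {target}"   # step 2: fill the line around it

print(write_second_line("The lantern glowed against the night"))
```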
Anthropic discovered that their Claude LLM didn’t just predict the next word. (Image credit: Anthropic)
Anthropic also found, among other things, that Claude “sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal ‘language of thought’.”
Anywho, there’s apparently a long way to go with this research. According to Anthropic, “it currently takes a few hours of human effort to understand the circuits we see, even on prompts with only tens of words.” And the research doesn’t explain how the structures inside LLMs are formed in the first place.
But it has shone a light on at least some parts of how these oddly mysterious AI beings—which we have created but don’t understand—actually work. And that has to be a good thing.
Is that a weird method of doing math?
I mean, if you give me something borderline nontrivial like, say, 72 times 13, I will definitely do some similar stuff. “Well, it’s more than 700 for sure, but it looks like less than a thousand. Three times seven is 21, so two hundred and ten, so it’s probably in the 900s. Two times 13 is 26, so if you add that to the 910 it’s probably 936, but I should check that in a calculator.”
Do you guys not do that? Is that a me thing?
I think what’s wild about it is that it really is surprisingly similar to how we actually think. It’s very different from how a computer (calculator) would calculate it.
So it’s not a strange method for humans but that’s what makes it so fascinating, no?
That’s what’s fascinating about how it does language in general.
The article is interesting in both the ways in which things are similar and the ways they’re different. The rough approximation thing isn’t that weird, but obviously any human would have self-awareness of how they did it and not accidentally lie about the method, especially when both methods yield the same result. It’s a weirdly effective, if accidental, example of human-like reasoning versus human-like intelligence.
And, incidentally, of why AGI and/or ASI are probably much further away than the shills keep claiming.
72 * 10 + 70 * 3 + 2 * 3
That’s what I do in my head if I need an exact result. If I’m approximating I’ll probably just do something like 70 * 15, which is much easier to compute (70 * 10 + 70 * 5 = 700 + 350 = 1050).
OK, I’ve been willing to just let the examples roll even though most people are just describing how they’d do the calculation, not a process of gradual approximation, which was supposed to be the point of the way the LLM does it…
…but this one got me.
Seriously, you think 70x5 is easier to compute than 70x3? Not only is that a harder one to get to for me in the notoriously unfriendly 7 times table, but it’s also further away from the correct answer and past the intuitive upper limit of 1000.
See, for me, it’s not that 7*5 is easier to compute than 7*3, it’s that 5*7 is easier to compute than 7*3.
I saw your other comment about 8’s, too, and I’ve always found those to be a pain, so I reverse them, if not outright convert them to arithmetic problems. 8x4 is some unknown value, but X*8 is always X*10-2X, although I do have most of the multiplication tables memorized for lower values.
8*7 is an unknown number that only the wisest sages can compute, however.

For me personally, anything times 5 can be reached by halving the number, then multiplying that number by 10.
Example: 66 x 5 = Y
(66/2) x (5x2) = Y (cancel out the division by creating equal multiplication in the other number)
66/2 = 33
5x2 = 10
33 x 10 = Y
33 x 10 = 330
Y = 330
The 7 times table is unfriendly?
I love 7 timeses. If numbers were sentient, I think I could be friends with 7.
I’ve always hated it and eight. I can only remember the ones that are familiar at a glance from the reverse table and to this day I sometimes just sum up and down from those “anchor” references. They’re so weird and slippery.
Huh.
Going back to the “being friends” thing, I think you and I could be friends due to applying qualities to numbers; but I think it might be challenging because I find 7 and 8 to be two of the best. They’re quirky, but interesting.
Thank you for the insight.
This is pretty normal, in my opinion. Every time people complain about common core arithmetic there are dozens of us who come out of the woodwork to argue that the concepts being taught are important for deeper understanding of math, beyond just rote memorization of pencil and paper algorithms.
Rote memorization should be minimized in school curriculum
Nah I do similar stuff. I think very few people actually trace their own lines of thought, so they probably don’t realize this is how it often works.
Huh. I visualize a whiteboard in my head. Then I…do the math.
I’m also fairly certain I’m autistic, so… ¯\_(ツ)_/¯
I do much the same in my head.
Know what’s crazy? We sling bags of mulch, dirt and rocks onto customer vehicles every day. No one, neither coworkers nor customers, will do simple multiplication. Only the most advanced workers do it. No lie.
Customer wants 30 bags of mulch. I look at the given space:
“Let’s do 6 stacks of 5.”
Everyone proceeds to sling shit around in random piles and count as we go. And then someone loses track and has to shift shit around to check the count.
Well, I guess I do a bit of the same :) I do (70+2)(10+3) -> 700+210+20+6
I would do 720 + 3 * 70 + 3 * 2
Thanks
🙏
Thanks for copypasting here. I wonder if the “prediction” departs from the expected next-word order only in that one case, when making rhymes. I also notice that its way of counting feels interestingly not too different from how I count when I need to come up with a fast approximate sum.
Isn’t that the “new math” everyone was talking about?
This reminds me of learning a shortcut in math class but also knowing that the lesson didn’t cover that particular method. So, I use the shortcut to get the answer on a multiple choice question, but I use the method from the lesson when asked to show my work (e.g. Pascal’s Triangle vs binomial expansion).
It might not seem like a shortcut for us, but something about this LLM’s training makes it easier to use heuristics. That’s actually a pretty big deal for a machine to choose fuzzy logic over algorithms when it knows that the teacher wants it to use the algorithm.
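Purely as an illustration of that kind of shortcut (my own snippet, nothing from the article): the coefficients of (a + b)^4 come out the same whether you read them off a row of Pascal’s triangle or grind through the binomial formula.

```python
from math import comb

def pascal_row(n: int) -> list[int]:
    """Build row n of Pascal's triangle by repeatedly summing neighbours."""
    row = [1]
    for _ in range(n):
        row = [x + y for x, y in zip([0] + row, row + [0])]
    return row

print(pascal_row(4))                    # [1, 4, 6, 4, 1]
print([comb(4, k) for k in range(5)])   # same coefficients from the formula
```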
Rather than reading PC Gamer talk about Anthropic’s article, you can just read it directly here. It’s a good read.
I think this comm is better suited to news articles talking about it, though I did post that link to [email protected], which I think is a better fit for those who want to go more in-depth on it.
This is one of the most interesting things about LLMs that I have ever read.
That bit about how it turns out they aren’t actually just predicting the next word is crazy and kinda blows the whole “It’s just a fancy text auto-complete” argument out of the water IMO
It really doesn’t. You’re just describing the “fancy” part of “fancy autocomplete.” No one was ever really suggesting that they only predict the next word. If that was the case they would just be autocomplete, nothing fancy about it.
What’s being conveyed by “fancy autocomplete” is that these models ultimately operate by combining the most statistically likely elements of their dataset, with some application of random noise. More noise creates more “creative” (meaning more random, less probable) outputs. They do not actually “think” as we understand thought. This can clearly be seen in the examples given in the article, especially to do with math. The model is throwing together elements that are statistically proximate to the prompt. It’s not actually applying a structured, logical method the way humans can be taught to.
Unfortunately, these articles are often written by people who don’t know enough to realize they’re missing important nuances.
Genuine question regarding the rhyme thing: it can be argued that “predicting backwards isn’t very different”, but you can’t attribute generating the rhyme first to noise, right? So how does it “know” (for lack of a better word) to generate the rhyme first?
It already knows which words are, statistically, more commonly rhymed with each other. From the massive list of training poems. This is what the massive data sets are for. One of the interesting things is that it’s not predicting backwards, exactly. It’s actually mathematically converging on the response text to the prompt, all the words at the same time.
Predicting the next word vs predicting a word in the middle and then predicting backwards are not hugely different things. It’s still predicting parts of the passage based solely on other parts of the passage.
Compared to a human who forms an abstract thought and then translates that thought into words. Which words I use has little to do with which other words I’ve used except to make sure I’m following the rules of grammar.
Interesting that…
Anthropic also found, among other things, that Claude “sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal ‘language of thought’.”
Yeah I caught that too, I’d be curious to know more about what specifically they meant by that.
Being able to link all of the words that have a similar meaning, say, nearby, close, adjacent, proximal, side-by-side, etc and realize they all share something in common could be done in many ways. Some would require an abstract understanding of what spatial distance actually is, an understanding of physical reality. Others would not, one could simply make use of word adjacency, noticing that all of these words are frequently used alongside certain other words. This would not be abstract, it’d be more of a simple sum of clear correlations. You could call this mathematical framework a universal language if you wanted.
Ultimately, a person learns meaning and then applies language to it. As a baby I see my mother, and know my mother is something that exists. Then I learn the word “mother” and apply it to her. The abstract comes first. Can an LLM do something similar despite having never seen anything that isn’t a word or number?
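As an aside on the word-adjacency point above: the “shared conceptual space” idea is usually pictured as words living as vectors, with related words (across languages) sitting close together. A toy sketch with entirely made-up vectors, just for illustration:

```python
import numpy as np

# Invented vectors for illustration; real models learn these from
# co-occurrence statistics over enormous amounts of text.
vecs = {
    "near":   np.array([0.90, 0.10, 0.00]),
    "close":  np.array([0.85, 0.15, 0.05]),
    "cerca":  np.array([0.88, 0.12, 0.02]),   # Spanish for "near"
    "banana": np.array([0.05, 0.20, 0.95]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for word in ("close", "cerca", "banana"):
    print(f"near vs {word}: {cosine(vecs['near'], vecs[word]):.2f}")
# The spatial words land next to each other regardless of language; "banana" doesn't.
```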
I don’t think that’s really a fair comparison. Babies exist with images and sounds for over a year before they begin to learn language, so it makes sense that they begin to understand the world in non-linguistic terms and then apply language to that. LLMs only exist in relation to language, so they couldn’t understand a concept separately from language; it would be like asking a person to conceptualise radio waves prior to having heard about them.
Exactly. It’s sort of like a massively scaled up example of the blind men and the elephant.
Yeah, but I think this is still the same, just not a single language. It might think in some mix of languages (which you can actually see sometimes if you push certain LLMs to their limit and they start producing mixed-language responses).
But it still has limitations because of the structure of language. This is actually a thing that humans have as well: the limiting of abstract thought by internal-monologue thinking.
Probably, given that LLMs only exist in the domain of language. Still, it’s interesting that they seem to have a “conceptual” system that is commonly shared between languages.
I read an article saying that it can “think” in small chunks. They don’t know how much, though. That was also months ago; it’s probably expanded by now.
Anything that claims it “thinks” in any way I immediately dismiss as an advertisement of some sort. These models are doing very interesting things, but it is in no way “thinking” as a sentient mind does.
I wish I could find the article. It was researchers and they were freaked out just as much as anyone else. It’s like slightly over chance that it “thought,” not some huge revolutionary leap.
There has been a flood of these articles. Everyone wants to sell their LLM as “the smartest one, closest to a real human”, even though the entire concept of calling them AI is a marketing misnomer.
Maybe? Didn’t seem like a sales job at the time, more like a warning. You could be right though.
It doesn’t. Who the hell cares if someone allowed it to break “predict whole text” into “predict part by part”, and then “with rhyme, we start at the end”? Sounds like a naive (not as in “simplistic”, but as in “most straightforward”) way to code this, so given the task to write an automatic poetry producer, I would start with something similar. The whole thing still stands as fancy auto-complete.
It’s amazing that humans have coded a tool for which they afterwards have to write more tools just to analyze how it works.
That has always been the case. Even basic programs need debugging sometimes, so we developed debuggers.
The other day I asked an LLM to create a partial number chart to help my son learn what numbers are next to each other. If I instructed it to do this using very detailed instructions it failed miserably every time. And sometimes, even when I told it to correct specific things about its answer, it still basically ignored me. The only way I could get it to do what I wanted consistently was to break the instructions down into small steps and tell it to show me its progress.
I’d be very interested to learn its “thought process” in each of those scenarios.
This is great stuff. If we can properly understand these “flows” of intelligence, we might be able to write optimized shortcuts for them, vastly improving performance.
The math example in particular is very interesting, and makes me wonder if we could splice a calculator into the model, basically doing “brain surgery” to short circuit the learned arithmetic process and replace it.
I think a lot of services are doing this behind the scenes already. Otherwise ChatGPT would be getting basic arithmetic wrong a lot more, considering the methods the article has shown it’s using.
Do you mean like us, using an external calculator instead of doing it in our brain?
That math process for adding the two numbers - there’s nothing wrong with it at all. Estimate the total and come up with a range. Determine exactly what the last digit is. In the example, there’s only one number in the range with 5 as the last digit. That must be the answer. Hell, I might even use that same method in my own head.
The poetry example, people use that one often enough, too. Come up with a couple of words you would have fun rhyming, and build the lines around those words. Nothing wrong with that, either.
These two processes are closer to “thought” than I previously imagined.
Well, it falls apart pretty easily. LLMs are notoriously bad at math. And even if it was accurate consistently, it’s not exactly efficient, when a calculator from the 80s can do the same thing.
We have setups where LLMs can call external functions, but I think it would be cool and useful to be able to replace certain internal processes.
As a side note though, while I don’t think that it’s a “true” thought process, I do think there’s a lot of similarity with LLMs and the human subconscious. A lot of LLM behaviour reminds me of split brain patients.
And as for the math aspect, it does seem like it does math very similarly to us. Studies show that we think of small numbers as discrete quantities, but big numbers in terms of relative size, which seems like exactly what this model is doing.
I just don’t think it’s a particularly good way of doing mental math. Natural intuition in humans and gradient descent in LLMs both seem to create layered heuristics that can become pretty much arbitrarily complex, but it still makes more sense to follow an exact algorithm for some things.
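For what it’s worth, the “route the arithmetic to an exact tool” idea mentioned a few comments up is simple to sketch. Nothing here uses a real LLM API; model_answer is a stand-in name I made up. The point is just that anything that looks like plain arithmetic gets computed exactly instead of being left to the model’s fuzzy heuristics.

```python
import ast, operator, re

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str):
    """Exactly evaluate a plain arithmetic expression like '36 + 59'."""
    def walk(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval").body)

def model_answer(prompt: str) -> str:
    # Stand-in for an actual LLM call -- hypothetical, not a real API.
    return "(model-generated text would go here)"

def answer(prompt: str) -> str:
    # If the prompt looks like bare arithmetic, hand it to the exact calculator.
    if re.fullmatch(r"[\d\s.+\-*/()]+", prompt.strip()):
        try:
            return str(safe_eval(prompt))
        except (ValueError, KeyError, SyntaxError):
            pass
    return model_answer(prompt)

print(answer("36 + 59"))                 # 95
print(answer("What rhymes with cat?"))   # falls through to the model stand-in
```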
when a calculator from the 80s can do the same thing.
1970s! The little blighters are even older than most people think.
Which is why I find it extra hilarious / extra infuriating that we’ve gone through all of these contortions and huge wastes of computing power and electricity to ultimately just make a computer worse at math.
Math is the one thing that computers are inherently good at. It’s what they’re for. Trying to use LLMs to perform it half-assedly is a completely braindead endeavor.
How can i take an article that uses the word “anywho” seriously?