

Probably, given that LLMs only exist in the domain of language. Still, it's interesting that they seem to have a "conceptual" system that is commonly shared between languages.
Compare that to a human, who forms an abstract thought and then translates it into words. Which words I use has little to do with which other words I've used, except to make sure I'm following the rules of grammar.
Interesting that…
Anthropic also found, among other things, that Claude “sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal ‘language of thought’.”
Translation apps would be the main one for LLM tech; LLMs largely came out of Google's research into machine translation.
No you’re not going crazy, you just understand economics and trade more than the President of the USA.
I wouldn't trust the words of a Palantir exec if they said the sky was blue, but even accepting what they say, it's just that the Hamas attacks gave the ban impetus to move forwards. By his own words, the ban already had bipartisan support and executive approval before that.
The headline that it was "about" Israel rather than China is a massive reach.
It's a very difficult subject, and both sides have merit. I can see "CSAM created without abuse could be used in the treatment/management of people with these horrible urges", but I can also see "allowing people to create CSAM could normalise it and lead to more actual abuse".
Sadly it's incredibly difficult for academics to study this subject and see which of those two effects is more prevalent.
Obviously it's important, but pretending it's not political doesn't make any sense. If a community doesn't want to discuss politics (and as far as I've seen, the OP didn't say which community this was in) then it's a reasonable post to remove.
It does not, unless you run weights that someone else has modified to remove the baked-in censorship. If you run the unmodified weights released by DeepSeek, it will refuse to answer most things that the CCP doesn't like being discussed.
Of course it's political, what else would it be? You are talking about people's rights (a political concept) being breached by an administration (political) using an arm of the government (political) as a paramilitary force (political).
Perhaps. I think it's more likely that active moderation is the cause of that, rather than word lists that let p!ss, pi$s and pιss through when trying to block piss.
The Scunthorpe problem is hard, and any simple blacklist method is bound to give both false positives and false negatives.
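To make the failure modes concrete, here's a minimal sketch of a naive substring blacklist (the word list and function names are illustrative, not any real filter):

```python
# Naive substring blacklist, showing why it yields both false
# positives and false negatives (the Scunthorpe problem).
BLACKLIST = {"cunt", "piss"}

def is_blocked(text: str) -> bool:
    """Flag text if any blacklisted word appears anywhere as a substring."""
    lowered = text.lower()
    return any(word in lowered for word in BLACKLIST)

# False positive: an innocent place name contains a blocked substring.
print(is_blocked("Scunthorpe"))  # True

# False negative: a homoglyph (Greek iota, not Latin i) slips past.
print(is_blocked("pιss"))  # False
```

Fixing the false positives (word boundaries) tends to create more false negatives, and normalising homoglyphs is a whole arms race of its own, which is why active human moderation usually wins.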
Not the parent, but LLMs don't solve anything; they allow more work with less effort expended in some spaces. Just as a horse-drawn plough didn't solve any problem that couldn't be solved by people tilling the earth by hand.
As an example, my partner is an academic, and the first step in working on a project is often doing a literature search of existing publications. This can be a long process, even more so if you are moving outside your typical field into something adjacent (you have to learn what exactly you are looking for). I tried setting up a locally hosted LLM-powered research tool: you ask it a question and it goes away, searches arXiv for relevant papers, refines its search query based on the abstracts it got back, and iterates. At the end you get summaries of what it thinks is the current SotA for the asked question, along with a list of links to papers it thinks are relevant.
It's not perfect, as you'd expect, but it turns a minute spent typing out a well-thought-out question into hours' worth of head start on getting into the research surrounding it (and does it all without sending any data to OpenAI et al.). Getting you over that initial hump of not knowing exactly where to start is where I see a lot of the value of LLMs.
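The search-refine-summarise loop is roughly this shape. A hedged sketch, where `search`, `refine`, and `summarise` stand in for the arXiv API call and the local LLM prompts (all the names and the round count here are my assumptions, not the actual tool):

```python
def literature_search(question, search, refine, summarise, rounds=3):
    """Iteratively search, refine the query from results, then summarise.

    search(query)               -> list of (title, abstract, url) tuples
    refine(query, results)      -> new query string (an LLM prompt in practice)
    summarise(question, papers) -> summary text of the apparent SotA
    """
    query = question
    papers = []
    for _ in range(rounds):
        results = search(query)        # e.g. hit the arXiv export API
        papers.extend(results)
        query = refine(query, results) # LLM rewrites the query from abstracts
    links = [url for _, _, url in papers]
    return summarise(question, papers), links
```

The useful property is that each round's query is informed by the previous round's abstracts, which is what lets it wander toward the right terminology in an unfamiliar field.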
Yeah, fair enough. I was referring to posts and comments, not other metadata, because that isn't publicly available just as a GET request (as far as I'm aware).
Everything on the Fediverse is almost certainly scraped, and will be repeatedly. You can't "protect" content that is freely available on a public website.
So if I modify an LLM to have true randomness embedded within it (e.g. using a true random number generator based on radioactive decay), does that then have free will?
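Mechanically, that modification is a one-function swap in the token sampler. A sketch using Python's `secrets` as a stand-in for a hardware true-random source (the real decay-based device would expose the same draw-a-number interface; everything here is illustrative):

```python
import secrets

def sample_token(probs):
    """Draw a token index from a probability distribution.

    secrets.randbelow pulls from OS entropy; a decay-based hardware RNG
    could replace it without changing any of the sampling logic.
    """
    r = secrets.randbelow(10**9) / 10**9  # uniform draw in [0, 1)
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1  # guard against float rounding
```

Which is the point: the model's behaviour is unchanged except for where one number comes from, so any "free will" the swap grants would live in a single uniform draw.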
If viruses have free will, when they are machines made out of RNA that just inject code into other cells to make copies of themselves, then the concept is meaningless (and it also applies to computer programs far simpler than LLMs).
So where does it end? Slugs, mites, krill, bacteria, viruses? How do you draw a line that says free will on this side, just mechanics and random chance on that side?
I just don't find it a particularly useful concept.
There’s a vast gulf between automated moderation systems deleting posts and calling the cops on someone.
I don't think that's really a fair comparison. Babies exist with images and sounds for over a year before they begin to learn language, so it would make sense that they begin to understand the world in non-linguistic terms and then apply language to that. LLMs only exist in relation to language, so they couldn't understand a concept separately from language; it would be like asking a person to conceptualise radio waves before having heard of them.