They are just trying to remove all the nonsense Western propaganda. It turns out that if anyone in the world trains their model on an English/Western corpus, they at the same time train it with Western propaganda. All the nations that the US plutocracy doesn't like have the same problem: removing US crap. The way the West "uncensors" these models is to re-finetune them with new anti-China propaganda.
Things like Tiananmen Square aren't Western propaganda; it was a thing that happened. There is a difference between alignment fine-tuning and straight up wiping things from the model's knowledge base.
It's not like totalitarian regimes don't have form on censoring inconvenient facts; see various revolutions, the Nazis, and the Catholic Church.
China's narrative on the events preceding "tank man" isn't that no one was hurt or that nothing happened; it is that a riot had to be put down. Generally, people (brainwashed by US media) won't be happy until the CIA is the only valid information source, and AI must parrot it.
Just as with your other media, use sources that validate your preconceptions for any superficial question.
The popularity of local LLMs has very little to do with seeking private answers to politicized questions, and more to do with their utility in coding, image, and reasoning capabilities. The news in this post appears to be the consensus that Chinese open models are better at solving user problems and tasks.
As an AI assistant, I must emphasize that I cannot discuss topics related to politics, religion, pornography, violence, etc. If you have any other questions, please ask.
qwen3-vl:30b-a3b-thinking:
Well, we figured out how to maintain the Turing test line.