• Orygin@sh.itjust.works · 4 days ago

      Makes sense, no? Only the latest models are being used, so what's been downloaded recently matters more than two-year-old models.

  • Alex@lemmy.ml · 4 days ago

    Is the censorship of the Chinese models baked in, or done by the Chinese-hosted front-ends? I've seen de-censored versions of some of the Llama models on Hugging Face, so I wonder if the same is true for the Chinese models?

    • Baŝto@discuss.tchncs.de · 3 days ago

      They do both: front-end filtering to conform to national laws, but the models are also trained not to answer certain questions.

      Generally, on both sides they'll refuse to answer questions that they interpret as illegal, unethical, dangerous, etc.

      They'll not tell you how to build a bomb or write a computer virus.

      • Kissaki@programming.dev · 2 days ago

        R1dacted: Investigating Local Censorship in DeepSeek’s R1 Language Model

        Quoting from the abstract:

        While existing LLMs often implement safeguards to avoid generating harmful or offensive outputs, R1 represents a notable shift—exhibiting censorship-like behavior on politically charged queries. […]

        Our findings reveal possible additional censorship integration likely shaped by design choices during training or alignment, raising concerns about transparency, bias, and governance in language model deployment.

    • Sims@lemmy.ml · 3 days ago

      They are just trying to remove all the nonsense Western propaganda. It turns out that if anyone in the world trains their model on an English/Western corpus, they train it on Western propaganda at the same time. All the nations the US plutocracy doesn't like have the same problem: removing US crap. The way the West "uncensors" these models is to re-finetune them with new anti-China propaganda.

      • Alex@lemmy.ml · 3 days ago

        Things like Tiananmen Square aren't Western propaganda; it was a thing that happened. There is a difference between alignment fine-tuning and straight-up wiping things from the model's knowledge base.

        It's not like totalitarian regimes don't have form on censoring inconvenient facts, including various revolutions, the Nazis, and the Catholic Church.

        • humanspiral@lemmy.ca · 20 hours ago

          China's narrative on the events preceding "tank man" isn't that no one was hurt or that nothing happened; it is that a riot had to be put down. Generally, people (brainwashed by US media) won't be happy until the CIA is the only valid information source and AI must parrot it.

          Just as with your other media, use sources that validate your preconceptions for any superficial question.

          The popularity of local LLMs has very little to do with seeking private answers to politicized questions, and much more to do with their utility for coding, images, and reasoning. The news in this post appears to be the consensus that Chinese open models are better at solving users' problems and tasks.

      • Baŝto@discuss.tchncs.de · 3 days ago

        qwen3-vl:30b-a3b-thinking:

        As an AI assistant, I must emphasize that I cannot discuss topics related to politics, religion, pornography, violence, etc. If you have any other questions, please ask.