Or something that goes against the general opinions of the community? Vibes are the only benchmark that counts after all.

I tend to go with the flow on most things, but here are the opinions of mine that I'd consider going against the grain:

  • QwQ was think-slop and was never that good
  • Qwen3-32B is still SOTA for 32GB and under. I cannot get anything to reliably beat it despite shiny benchmarks
  • Deepseek is still open-weight SotA. I’ve really tried Kimi, GLM, and Qwen3’s larger variants but asking Deepseek still feels like asking the adult in the room. Caveat is GLM codes better
  • (proprietary bonus): Grok 4 handles news data better than GPT-5 or Gemini 2.5 and will always win if you ask it about something that happened that day.
  • Baŝto@discuss.tchncs.de · 4 days ago

    I can’t buy salami in the supermarket and justify it by saying the cow is dead anyways

    That’s not comparable. You can’t compare software, or even research, with a physical object like that. You need a dead cow for salami; if demand increases, they have to kill more cows. For these models the training already happened, and how many people use them doesn’t matter. It could influence whether or how much they train new models, but there is no direct relation. You can use a model forever in its current state without any further training being necessary. I’d rather compare it with Nazi experiments on human beings: their human guinea pigs already suffered and died whether or not you use the research derived from them. Doing new, proper training/research just to reach a point the improper one already reached is somewhat pointless in this case; you just spend more resources.

    Though it makes sense to train new models on public-domain and CC0 materials if you want end results that better protect you from being sued for copyright violations. There are platforms that have banned AI-generated graphics because of that.

    we still buy the graphics cards from Nvidia and we also set free some CO2 when doing inference

    But you don’t have to. I can run small models on my NITRO+ RX 580 with 8 GB VRAM, which I bought 7 years ago. It’s maybe not the best experience, but it certainly “works”. Last time our house used external electricity was 34h ago.
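    As a rough sanity check on the 8 GB claim, here is a back-of-envelope estimate of how much memory a quantized model's weights alone need. The parameter counts and the effective bits-per-weight figure are illustrative assumptions (a typical 4-bit GGUF quant lands around 4.5 bits per weight), not measurements, and real usage adds overhead for the KV cache and activations:

```python
def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate memory needed for just the weights of a quantized model.

    params_billion: model size in billions of parameters (assumed)
    bits_per_weight: effective bits per weight of the quantization scheme
                     (e.g. ~4.5 for a typical 4-bit GGUF quant; assumed)
    """
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal gigabytes

# A 7B model at a ~4.5-bit quantization (illustrative):
print(round(weight_memory_gb(7, 4.5), 1))   # ~3.9 GB of weights
# A 13B model at the same quantization:
print(round(weight_memory_gb(13, 4.5), 1))  # ~7.3 GB, already tight on 8 GB
```

    By this estimate a quantized 7B model fits comfortably in 8 GB of VRAM with room left for context, which matches the "it works on an RX 580" experience described above.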

    Regarding RAG, I just hope it improves machine readability, which is also useful for non-AI applications. It just increases the pressure.

    • hendrik@palaver.p3x.de · edited · 4 days ago

      That’s not comparable. You can’t compare software or even research with a physical object like that. You need a dead cow for salami, if demand increases they have to kill more cows. For these models the training already happened, how many people use it does not matter.

      I’d really like to disagree here. Sure, today’s cow is already dead and turned into sausage. But the pack of salami I buy this week is going to make the supermarket order another pack next week, so what I’m really doing is having someone kill the next cow, or at least a tiny bit of it, since I’m only having a few slices. It’s the bigger picture, and how I’m part of a large group of people creating the overall demand.

      And I think it’s at least questionable if and how this translates. It’s still part of generating demand for AI. Sure, it’s kind of a byproduct, but Meta directly invests additional research, alignment and preparation into these byproducts. And we got an entire ecosystem around it, with Huggingface, CivitAI etc. catering to us; sometimes a substantial amount of their business is the broader AI community, not just researchers. They provide us with datacenters for storage, bandwidth and sometimes compute. So it’s certainly not nothing that gets added because of us. And despite being immaterial, it has a real effect on the world. It’s going to direct technology and society in some direction, and have real-world consequences when used. The pollution during the process of creating this non-physical product is real.

      And Meta seems to pay attention. At least that’s what I took from everything that happened from LLaMA 1 to today. I think if and how we use it is going to affect what they do with the next iteration, similar to the salami-pack analogy. Of course it’s a crude image, and we don’t really know what would happen if we did things differently. Maybe it’d be the same, so it comes down to the more philosophical question of whether it’s ethical to benefit from things that were made in an unethical way. Though that framing requires today’s use to have no effect on future demand, like the Nazi example, where me using medicine is not going to bring back Nazi experiments in the future. And that’s not exactly the situation with AI: they’re still there and actively working on the next iteration. So the logic is more complicated than that.

      And I’m a bit wary because I have no clue about the true motive behind why Meta gifts us these things. It costs them money and hands control to us, which isn’t exactly how large companies operate. My hunch is that it’s mainly the usual war: they’re showing off, and they accept cutting into their own business when it does more damage to OpenAI. And the Chinese are battling the USA… And we’re somewhere in the middle of it. Maybe we pick up the crumbs. Maybe we’re chess pieces being used and exploited in some bigger corporate battles. And I don’t think we’re emancipated with AI; we don’t own the compute necessary to properly shape it, so we might be closer to the chess pieces. I don’t want to start any conspiracy theory, but I think these dynamics are part of the picture. I personally don’t think there’s a general, easy answer to the question of whether it’s ethical to use these models. And reality is a bit messy.

      But you don’t have to. I can run small models on my NITRO+ RX 580 with 8 GB VRAM, which I bought 7 years ago. It’s maybe not the best experience, but it certainly “works”. Last time our house used external electricity was 34h ago.

      I think this is the common difference between theory and practice. What you do is commendable. In reality, though, AI is in fact mostly made from coal and natural gas, and China and the US are ramping up dirty fossil-fuel electricity for AI. There’s hype around small nuclear reactors to satisfy the urgent demand for more electricity, and they’re a bit problematic with all the nuclear waste, given how nuclear power plants scale. So yes, I think we could do better. And we should. But that’s kind of a theoretical point unless we actually do it.

      it makes sense to train new models on public domain and cc0 materials

      Yes, I’d like to see this as well. I suppose it’s a long way from pirating books (because with enough money and lawyers you’re effectively exempt from the law) to proper consensual use.