Profile pic is from Jason Box, depicting a projection of Arctic warming to the year 2100 based on current trends.

  • 0 Posts
  • 139 Comments
Joined 1 year ago
Cake day: March 3rd, 2024

  • As a (still) Linux novice, this is something I noticed with later distributions, though I'd never considered the point you raise. I did always wonder why there should be different places to install things in the same OS. It would probably be fine if they all handled things the same way, but then all you're doing is changing the UI. It never "felt" like they did things the same.


  • People don't change. Some people look at what they're repeating and try to understand the why; others blindly do what they're told by whoever they deem an authority. LLMs are just the latest. Earlier it was various websites (which LLMs were trained on, uh oh), and before that it was computer magazines with listings to type in, the later ones maybe even bundling a free CD of stuff. The printed media was less likely to contain anything malicious, but lord did it have errors, and the right error in the wrong place could ruin someone's day if they just ran it without understanding it.





  • Lots of attacks on Gen Z here, with some valid points about the education they were given by the older generations (yet somehow it's their fault). Good thing none of the other generations are being fooled by AI marketing tactics, right?

    The debate on consciousness is one we should be having, even if LLMs themselves aren't really there. If you're new to the discussion, look up AI safety and the alignment problem. Then realize that while people think it's about preparing for a true AGI with something akin to consciousness and the dangers we could face, we already have alignment problems without an artificial intelligence. If we think a machine (or even a person) is doing things for the same reasons we want them done, but it isn't and we can't tell, that's an alignment problem. Everything's fine until it pursues its goals and those goals suddenly line up differently than ours. And the dilemma is that there aren't any good solutions. (There's a toy sketch of that divergence at the end of this comment.)

    But back to the topic. All this is not the fault of Gen Z. We built this world the way it is and raised them to be gullible and dependent on technology. Using them as a scapegoat (those dumb kids) is ignoring our own failures.
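
    To make that divergence concrete, here's a toy sketch in Python (entirely made-up numbers and functions, not taken from any real system): an optimizer chases a proxy metric that keeps climbing even after the thing we actually care about has started falling.

```python
# Toy illustration of a proxy objective diverging from the true goal.
# Everything here is invented for the example.

def true_goal(effort: float) -> float:
    # What we actually want: improves with effort at first, then degrades.
    return effort - 0.1 * effort ** 2

def proxy_metric(effort: float) -> float:
    # What the system is really optimizing: always rewards more effort.
    return effort

# The optimizer happily pushes the proxy as far as it is allowed to go.
chosen = max(range(0, 21), key=proxy_metric)

print(f"effort chosen by the proxy optimizer: {chosen}")
print(f"proxy score at that effort: {proxy_metric(chosen):.1f}")
print(f"true goal at that effort:   {true_goal(chosen):.1f}")   # deep in the red
print(f"true goal at a modest effort of 5: {true_goal(5):.1f}")  # what we actually wanted
```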



  • AI can certainly be a tool to combat it. Some type of watermarking should have been built into these neural nets well before it became a problem, but with how far it's gone and how much is out in the open, it's a bit too late for that remedy.

    But when tools are put out to detect what is and isn't AI, trust will develop in THOSE AI systems, and then they could be manipulated to claim actual real events aren't true. The real problem is that the humans in all of this have, from the beginning, been losing their ability to critically examine and verify what they're being shown. I.e., people are gullible, and always have been to a point, but they're now at the peak of believing anything they're told without question.