• 0 Posts
  • 37 Comments
Joined 2 years ago
Cake day: July 7th, 2023

  • Rights and freedoms are not unlimited. Freedom of speech ends at things that put people in danger (e.g. shouting fire in a crowded space). Guns are available pursuant to a well regulated militia (or should be, but let’s not open that can of worms).

    I’ll grant the proactive/reactive distinction, in a sort of way. If anyone (not only old people drink the Fox News poison) starts up with some hyper-racist shit, is restricting them not reactive to their emergent behavior? Would it be that big a stretch to codify the effects of propaganda as a sort of mental injury that needs to be treated? (Yes, it would.) Point is, at this point we’re splitting this hair rather fine and getting away from the important bits.

    So the real way to handle the propaganda is to punish Fox and their ilk for being wildly irresponsible and spreading racist, fascist bullshit. Corporations are much easier to regulate than individuals (theoretically). They should be sued into the ground for all they’ve done, but we live in an oligarchy, so that’s not happening anytime soon. This shower thought emerges because free-market capitalism refuses to have any morals whatsoever and people are desperate to stop the big companies from hurting everyone. And the thing that’s easiest for everyone to see is the people they love starting to repeat horrible things while they stand by, helpless to pull them out of the echo chamber.

    No, the shower thought isn’t good. It shouldn’t get that far. But right now, the only people we can affect are the ones next to us, because the rich are never held accountable, so we’re stuck choosing between bad and worse solutions.



  • See, the trick is this: does “mentally fit” apply even in the case of otherwise mentally healthy individuals? Propaganda can affect anyone, and the less tech-savvy even more so. We have no issue limiting the physical behavior of the people we care about when they can’t handle it anymore (e.g., we’ll drive grandpa around when he can technically still do it, but shouldn’t). While some kick up a fuss about it (for understandable reasons), ultimately society at large is pretty OK with the whole deal.

    Now we have them exposed to content that is arguably harmful to their health and to the health of the people around them (e.g., through voting). And this isn’t opinion stuff or debates. These are outright lies tailored to them. There were no dogs being eaten in Springfield, and yet I could hear the old dudes at my gym discussing it while they walked the mezzanine. At what point does their right to play with their phone cede to their mental health? For anyone, really? We routinely cede the right to do things that harm ourselves and others. Why is this different?











  • I’ve got two for a pair of cats we adopted at the same time.

    First was Stusy (pronounced stu-c). He was named after a typo: my partner and I were planning a move and I accidentally misspelled “study.” We looked at it and decided it was a good cat name, which it was. He was the smartest cat we ever had. He died a couple of years ago, too young, from what the vet said were likely genetic kidney problems.

    His brother, our scaredy cat, is Big O. At the cattery (our name for the local cat adoption place), he was the one that wanted nothing to do with us, so we clearly had to adopt him. Every time we petted him, he vigorously cleaned that spot. I don’t remember what we were going to name him. The cattery named him Big O after the tire place where he was found. He rode about 50 miles from one small town in Indiana to another in the engine compartment of someone’s car; the driver stopped at Big O to check on the meowing coming from the engine. He was Stusy’s best friend, and while he’s still easy to startle, he lets us pet him under controlled conditions (usually us lying down and holding very still), and he’s the goofiest of his siblings when they’re playing.



  • You’re right that the goal is problem solving; you’re wrong that an inability to code isn’t a problem.

    AI can make a for loop and do common tasks but the moment you have something halfway novel to do, it has a habit of shitting itself and pretending that the feces is good code. And if you can’t read code, you can’t tell the shit from the stuff you want.
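
    To make that concrete, here’s a purely hypothetical sketch (invented names, not actual model output) of the kind of plausible-looking failure I mean. It runs clean, it’s shaped like the right answer, and it silently does the wrong thing:

    ```python
    # Prompt: "deduplicate events, keeping the most recent one per user"
    def dedupe_events(events):
        latest = {}
        for event in events:
            # Bug: this keeps the FIRST event seen per user, not the most
            # recent. Later events are skipped instead of compared by "ts".
            if event["user_id"] not in latest:
                latest[event["user_id"]] = event
        return list(latest.values())

    events = [
        {"user_id": 1, "ts": 100, "action": "login"},
        {"user_id": 1, "ts": 200, "action": "purchase"},  # silently dropped
    ]
    print(dedupe_events(events))  # keeps ts=100, which is wrong
    ```

    If you can read code, that bug jumps out in review; if you can’t, the function looks exactly like success.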

    It may be able to do it in the future, but it can’t yet.

    Source: data engineer who has fought his AI a time or two.


  • An elegant way to make someone feel ashamed for using many smart words, ha-ha.

    Unintentional, I assure you.

    I think it’s some social mechanism making them choose a brute force solution first.

    I feel like it’s simpler than that. Ye olde “when all you have is a hammer, everything’s a nail”. Or in this case, when you’ve built the most complex hammer in history, you want everything to be a nail.

    So I’d say commercially they already are successful.

    Definitely. I’ll never write another cover letter. In that use case, they’re solid.

    but I haven’t even finished my BS yet

    Currently working on my master’s after being in industry for a decade. The paper is nice, but actually applying the knowledge is poorly taught (IMHO, YMMV), and being willing to learn independently has served me better than my BS in EE.


  • I’m not against attempts at global artificial intelligence, just against one approach to it. Also no matter how we want to pretend it’s something general, we in fact want something thinking like a human.

    Agreed. The techbros pretending that the stochastic parrots they’ve created are general AI annoys me to no end.

    While not as academically cogent as your response (totally not feeling inferior at the moment), it has struck me that LLMs would make a fantastic input/output layer for a greater system, analogous to the Wernicke and Broca areas of the brain. It seems like they’re trying to get a parrot to swim by having it do literally everything. I suppose the thing that sticks in my craw is the giveaway that they’ve promised this one technique (more or less, I know it’s more complicated than that) can do literally everything a human can, which should be an entire parade of red flags to anyone with a drop of knowledge of data science or fraud. I know it’s hypothetically supposed to be a universal function approximator, but I think the gap between hypothesis and practice is very large, and we’re dumping a lot of resources into filling in the canyon (chucking more data at the problem) when we could be building a bridge (creating specialized models that work together).
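
    To put the bridge metaphor in code, here’s a minimal, hand-wavy sketch (every name in it is invented) of the idea: the language model only translates between words and structured tasks, while deterministic specialists do the actual work.

    ```python
    # The LLM plays the Wernicke/Broca role: language in, language out.
    # Specialists are narrow models/functions that can't hallucinate answers.

    def llm_parse(text: str) -> dict:
        # Stand-in for an LLM call that maps free text to a structured task,
        # e.g. "what's 12% of 340?" -> {"task": "arithmetic", "args": (0.12, 340)}
        return {"task": "arithmetic", "args": (0.12, 340)}

    def arithmetic_specialist(args) -> float:
        rate, base = args
        return rate * base  # deterministic: no made-up numbers

    SPECIALISTS = {"arithmetic": arithmetic_specialist}

    def answer(text: str) -> str:
        task = llm_parse(text)                            # language to structure
        result = SPECIALISTS[task["task"]](task["args"])  # specialist does the work
        return f"The answer is {result}."                 # structure to language

    print(answer("what's 12% of 340?"))  # The answer is 40.8
    ```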

    Now that I’ve used a whole lot of cheap metaphor on someone who casually dropped ‘syllogism’ into a conversation, I’m feeling like a freshman in a grad-level class. I’ll admit I’m nowhere near up to date on specific models and bleeding-edge techniques.


  • Ooooooh. Ok that makes sense.

    With that said, you might look at researchers using AI to come up with new, useful ways to fold proteins, and at biology in general. The roadblock, to my understanding (data science guy, not a biologist), is the time it takes to discover these things, i.e., how long it would take evolution to get there. Admittedly that’s still somewhat quantitative.

    For qualitative examples, we always have hallucinations, and that’s a poorly understood mechanism that may well be able to produce actual creativity. But it’s in the nature of AI to remain within (or close to within) the corpus of knowledge it was trained on. Though that leads to “nothing new under the sun,” so I’ll stop rambling now.