

No reason other than that it’s geographically closer to my actual location, so I thought the speed would be faster.
Freedom is the right to tell people what they do not want to hear.
I agree with the sentiment, but despite the vast amount of pushback and downvotes I get for voicing certain views, it’s extremely rare for me to have my comments removed, let alone to be banned. Some individual communities might be worse than others - and .ml communities you should avoid like the plague anyway - but broadly speaking I’d still claim that it’s pretty difficult to get mods to take action against you as long as you’re being polite.
And as a self-employed person, I only know how much I earned at the end of the year, which could be wildly different from the year before.
The EU is about to do the exact same thing. Norway is the place to be. That’s where I went - at least according to my IP address.
I’m not sure I know what I want for life but I have a number of things I don’t want so I’m trying my best to steer clear of those.
FUD has nothing to do with what this is about.
And nothing of value was lost.
Sure, if privacy is worth nothing to you, but I wouldn’t speak for the rest of the UK and EU.
My feed right now.
It’s actually the opposite of a very specific definition - it’s an extremely broad one. “AI” is the parent category that contains all the different subcategories, from the chess opponent on an old Atari console all the way up to a hypothetical Artificial Superintelligence, even though those systems couldn’t be more different from one another.
It’s a system designed to generate natural-sounding language, not to provide factual information. Complaining that it sometimes gets facts wrong is like saying a calculator is “stupid” because it can’t write text. How could it? That was never what it was built for. You’re expecting general intelligence from a narrowly intelligent system. That’s not a failure on the LLM’s part - it’s a failure of your expectations.
I don’t think you even know what you’re talking about.
You can define intelligence however you like, but if you come into a discussion using your own private definitions, all you get is people talking past each other and thinking they’re disagreeing when they’re not. Terms like this have a technical meaning for a reason. Sure, you can simplify things in a one-on-one conversation with someone who doesn’t know the jargon - but dragging those made-up definitions into an online discussion just muddies the water.
The correct term here is “AI,” and it doesn’t somehow skip over the word “artificial.” What exactly do you think AI stands for? The fact that normies don’t understand what AI actually means and assume it implies general intelligence doesn’t suddenly make LLMs “not AI” - it just means normies don’t know what they’re talking about either.
And for the record, the term is Artificial General Intelligence (AGI), not GAI.
Claims like this just create more confusion and lead to people saying things like “LLMs aren’t AI.”
LLMs are intelligent - just not in the way people think.
Their intelligence lies in their ability to generate natural-sounding language, and at that they’re extremely good. Expecting them to consistently output factual information isn’t a failure of the LLM - it’s a failure of the user’s expectations. LLMs are so good at generating text, and so often happen to be correct, that people start expecting general intelligence from them. But that’s never what they were designed to do.
There are plenty of similarities in the output of both the human brain and LLMs, but overall they’re very different. Unlike LLMs, the human brain is generally intelligent - it can adapt to a huge variety of cognitive tasks. LLMs, on the other hand, can only do one thing: generate language. It’s tempting to anthropomorphize systems like ChatGPT because of how competent they seem, but there’s no actual thinking going on. It’s just generating language based on patterns and probabilities.
I have next to zero urge to “keep up with the news.” I’m under no obligation to know what’s going on in the world at all times. If something is important, I’ll hear about it from somewhere anyway - and if I don’t hear about it, it probably wasn’t that important to begin with.
I’d argue the “optimal” amount of news is whatever’s left after you actively take steps to avoid most of it. Unfiltered news consumption in today’s environment is almost certainly way, way too much.
Large language models aren’t designed to be knowledge machines - they’re designed to generate natural-sounding language, nothing more. The fact that they ever get things right is just a byproduct of their training data containing a lot of correct information. These systems aren’t generally intelligent, and people need to stop treating them as if they are. Complaining that an LLM gives out wrong information isn’t a failure of the model itself - it’s a mismatch of expectations.
Well let’s hear some suggestions then.
I mean - it’s certainly possible, but you’d still be risking that 500k prize if you got caught.
And most people seem to tap out because of loneliness or starvation, so if you were going to cheat, you’d pretty much have to smuggle in either food or a better way of getting it - like a decent fishing rod and proper lures.
I’ve put things in my ass for no points. 1,000 points sure sounds worth it.
They do regular health check-ins with the contestants, and if you’re not losing weight but there’s no footage of you catching food, they’re going to figure out pretty quickly that something’s up.
On top of that, the locations are chosen so that just hiking out to you with food would be a survival challenge in itself - and coming in by boat would almost certainly be noticed.
Interestingly, I’ve just been binge-watching the show for the first time. I’m currently on season 5.
I’ll make sure to try crying tomorrow, in the hopes that my tools magically appear on the jobsite.