

Honestly, unless you’re some sort of game historian, I don’t see any reason to play a game without enhancement mods, other than not wanting to set them up.
I’m an anarcho-communist; all states are evil.
Your local herpetology guy.
Feel free to AMA about picking a pet or about reptiles in general; I have a lot of recommendations!
It’s even a work PC; there’s a thread on the Microsoft forums detailing how common the problem is.
I’ve had to use Windows for the last two weeks, and the taskbar crashes and freezes constantly, so I put a .bat file on my desktop that kills and reopens explorer.exe. Also, if my Bluetooth headphones disconnect while my mic is muted, it refuses to unmute and I have to reboot. This is what people call an “it just works” experience.
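The .bat file itself is nothing fancy; a minimal sketch of that kind of restart script (taskkill and start are standard Windows commands):

```bat
@echo off
rem Force-kill the shell process (taskbar, desktop, file manager)
taskkill /f /im explorer.exe
rem Relaunch the shell
start explorer.exe
```

Double-clicking it on the desktop brings the taskbar back without a reboot.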
Why not pig blood, then?
MATE and XFCE are not on Wayland yet, so yes. The biggest things still missing are accessibility protocols and xdotool-style functionality.
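For a concrete idea of what “xdotool functionality” means, this is the kind of X11-only input and window scripting that has no general Wayland equivalent yet (the window title here is just an example):

```sh
#!/bin/sh
# Type a string into the focused window (X11 only)
xdotool type "hello world"
# Send a synthetic keystroke
xdotool key ctrl+shift+t
# Find a window by title and raise/focus it
xdotool search --name "Firefox" windowactivate
```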
That’s not the only way to make meaningful change; getting people to give up on LLMs would also be meaningful change. This does very little for anyone who isn’t Apple.
Meaningful change is not happening because of this paper either. I don’t know why you’re playing semantic games with me, though.
It does need to do that to meaningfully change anything, however.
That’s very true; I’m just saying this paper did not eliminate the possibility and is thus not as significant as it sounds. If they had accomplished that, the bubble would collapse. This will not meaningfully change anything, however.
Also, it’s not as unreasonable as that, because these are automatically assembled bundles of simulated neurons.
It is, but this did not prove all architectures cannot reason, nor did it prove that all sets of weights cannot reason.
Essentially, they did not prove the issue is fundamental. And those models have pretty similar architectures; they’re all transformers trained in a similar way. I would not say they have different architectures.
Those particular models, yes. It does not prove the architecture doesn’t allow it at all. It’s still possible that this is solvable with a different training technique and that none of those models are using the right one; that’s what they need to prove wrong.
This proves the issue is widespread, not fundamental.
That indicates that this particular model does not follow instructions, not that it is fundamentally incapable at the architectural level.
I think it’s important to note (I’m not an LLM; I know that phrase triggers you to assume I am) that they haven’t proven this is an inherent architectural issue, which I think would be the next step in supporting the assertion.
Do we know that they don’t and cannot reason, or do we just know that for X problems they jump to memorized solutions? Is it possible to create an arrangement of weights that can genuinely reason, even if the current models don’t? That’s the big question that needs answering. It’s still possible that we just haven’t properly incentivized reasoning over memorization during training.
If someone can objectively answer “no” to that, the bubble collapses.
Oh, nothing important. I’m a long way from home and won’t be back for quite a while, meaning I can’t test it myself and am curious about the status of wine-wayland.
Thanks, that’s helpful.
Are you using -git?
Thank you, I really appreciate your kind words.
Fedora is not preferred because of legal issues surrounding patents: it ships without patent-encumbered media codecs like H.264, so if you want to, for example, watch a Twitch stream… it just won’t work.
Bazzite and Aurora have fixes for this built in, which is why I recommend them over raw Fedora.
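If you’d rather stay on raw Fedora, the usual workaround is pulling the full codecs from RPM Fusion; a minimal sketch, following RPM Fusion’s own setup instructions:

```sh
# Enable the RPM Fusion free repo for your Fedora release
sudo dnf install \
  https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
# Swap the stripped-down ffmpeg for the full build with the patented codecs
sudo dnf swap ffmpeg-free ffmpeg --allowerasing
```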
You probably could with a phone.