

I am personally rather skeptical about the commercial viability of humanoid robots in 2026, but I suppose that we shall see.
Off-and-on trying out an account over at @[email protected] due to scraping bots bogging down lemmy.today to the point of near-unusability.




If the United States government wants to use ChatGPT on sensitive information, I’m pretty sure that it can come to some kind of contract with OpenAI to set up their own private cloud thing dedicated to that.
I get that maybe this guy just wanted some kind of one-off use, but then arrange to have something set up for that.
EDIT: To clarify, set up for that sort of thing, not this specific use. Like, have a way to throw up secure, temporary setups for particular users who just need one-off stuff for sensitive material.


I just am not sold that there’s enough of a market, not with the current games and current prices.
There are several different types of HMDs out there. I haven’t seen anyone really break them up into classes, but if I were to take a stab at it:
VR gaming goggles. These focus on providing an expansive image that fills the peripheral vision, and cut one off from the world. The Valve Index would be an example.
AR goggles. I personally don’t like the term. It’s not that augmented reality isn’t a real thing, but that we don’t really have the software out there to do AR things, and so while theoretically these could be used for augmented reality, that’s not their actual, 2026 use case. But, since the industry uses it, I will. These tend to display an image covering part of one’s visual field which one can see around and maybe through. Xreal’s offerings are an example.
HUD glasses. These have a much more limited display, or maybe none at all. These are aimed at letting one record what one is looking at less-obtrusively, maybe throw up notifications from a phone silently, things like the Ray-Ban Meta.
Movie-viewers. These things are designed around isolation, but don’t need head-tracking. They may be fine with relatively-low resolution or sharpness. A Royole Moon, for example.
For me, the most-exciting prospect for HMDs is the idea of a monitor replacement. That is, I’d be most-interested in something that does basically what my existing displays do, but in a lower-power, more-portable, more-private form. If it can also do VR, that’d be frosting on the cake, but I’m really principally interested in something that can be a traditional monitor, but better.
For me, at least, none of the use cases for the above classes of HMDs are super-compelling.
For movie-viewing: it just isn’t that often that I feel that I need more isolation than I can already get to watch movies. A computer monitor in a dark room is just fine. I can also put things on a TV screen or a projector that I already have sitting around and generally don’t bother to turn on. If I want to block out outside sound more, I might put on headphones, but I just don’t need more than that. Maybe for someone who is required to be in noisy, bright environments or something, but it just isn’t a real need for me.
For HUD glasses, I don’t really have a need for more notifications in my field of vision — I don’t need to give my phone a HUD.
AR could be interesting if the augmented reality software library actually existed, but in 2026, it really doesn’t. Today, AR glasses are mostly used, as best I can tell, as an attempt at a monitor replacement, but the angular pixel density on them is poor compared to conventional displays. Like, in terms of the actual data that I can shove into my eyeballs in the center of my visual field, which is what matters for things like text, I’m better off with conventional monitors in 2026.
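To put rough numbers on the angular-density point, here is a back-of-envelope sketch. The monitor figures describe a typical desktop setup; the headset pixels-per-degree is an assumed ballpark for current consumer hardware, not a spec for any particular device:

```python
import math

# Rough pixels-per-degree (PPD) comparison: desktop monitor vs. HMD.
# Monitor numbers describe a typical 27" 4K setup; the headset PPD is an
# assumed ballpark for consumer hardware, not a spec for any real device.

def monitor_ppd(diag_in: float, h_px: int, aspect_w: int, aspect_h: int,
                distance_in: float) -> float:
    width_in = diag_in * aspect_w / math.hypot(aspect_w, aspect_h)
    ppi = h_px / width_in
    # Arc length subtended by one degree of visual angle at this distance:
    inches_per_degree = distance_in * math.tan(math.radians(1))
    return ppi * inches_per_degree

desk_ppd = monitor_ppd(diag_in=27, h_px=3840, aspect_w=16, aspect_h=9,
                       distance_in=24)   # ~60 cm viewing distance
hmd_ppd = 25.0                           # assumed ballpark for a current HMD

print(f"monitor: ~{desk_ppd:.0f} px/deg  headset: ~{hmd_ppd:.0f} px/deg")
```

The takeaway is just the ratio: in the center of the visual field, where text rendering lives, a desk monitor is still delivering roughly two to three times the pixels per degree of a typical headset.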
VR gaming could be interesting, but the benefits just aren’t that massive for the games that I play. You get a wider field of view than a traditional display offers, and the ability to use your head as an input for camera control. There are some genres that I think it works well with today, like flight sims. If you were a really serious flight-simmer, I could see it making sense. But most genres just don’t benefit that much from it. Yeah, okay, you can play Tetris Effect: Connected in VR, but it doesn’t really change the game all that much.
A lot of the VR-enabled titles out there are not (understandably, given the size of the market) really principally aimed at taking advantage of the goggles. You’re basically getting a port of a game aimed at probably a keyboard and mouse, with some tradeoffs.
And for VR, one has to deal with more setup time, software and hardware issues, and the cost. I’m not terribly price sensitive on gaming compared to most, but if I’m getting a peripheral for, oh, say, $1k, I have to ask how seriously I’m going to play any of the games that I’m buying this hardware for. I have a HOTAS system with flight pedals; it mostly just gathers dust, because I don’t play many WW2 flight sims these days, and the flight sims out there today are mostly designed around thumbsticks. I don’t need to accumulate more dust-collectors like that. And with VR the hardware ages out pretty quickly. I can buy a conventional monitor today and it’ll still be more-or-less competitive for most uses probably ten or twenty years down the line. VR goggles? Not so much.
At least for me, the main things that I think I’d actually get some good out of VR goggles for:
Vertical-orientation games. My current monitors are landscape aspect ratio, and don’t support rotating (though I imagine that there might be someone that makes a rotating VESA mount pivot, and I could probably use wlr-randr to make Wayland change the display orientation manually). Some games in the past in arcades had something like a 3:4 portrait-mode aspect ratio. If you’re playing one of those, you could maybe get some extra vertical space. But unless I need the resolution or portability, I can likely achieve something like that by just moving my monitor closer while playing such a game.
Pinball sims, for the same reason.
There are a couple of VR-only games that I’d probably like to play (none very new).
Flight sims. I’m not really a super-hardcore flight simmer. But, sure, for WW2 flight sims or something like Elite: Dangerous, it’s probably nice.
I’d get a little more immersiveness out of some games that are VR-optional.
But…that’s just not that overwhelming a set of benefits to me.
Now, I am not everyone. Maybe other people value other things. And I do think that it’s possible to have a “killer app” for VR, some new game that really takes advantage of VR and is so utterly compelling that people feel that they’d just have to get VR goggles so as to not miss out. Something like what World of Warcraft did for MMO gaming, say. But the VR gaming effort has been going on for something like a decade now, and nothing like that has really turned up.


Having a limited attack surface reduces exposure.
If, say, the only thing that you’re exposing is, oh, say, a Wireguard VPN, then unless there’s a misconfiguration or remotely-exploitable bug in Wireguard, you’re fine regarding random people running exploit scanners.
I’m not too worried about stuff like (vanilla) Apache, OpenSSH, Wireguard, stuff like that, the “big” stuff that has a lot of eyes on it. I’d be a lot more dubious about niche stuff that some guy just threw together.
To put perspective on this, you gotta remember that most software that people run isn’t run in a sandbox. It can phone home. Games on Steam. If your Web browser has bugs, it’s got a lot of sites that might attack it. Plugins for that Web browser. Some guy’s open-source project. That’s a potential vector too. Sure, some random script kiddy running an exploit scanner is a potential risk, but my bet is that if you look at the actual number of compromises via that route, it’s probably rather lower than plain old malware.
It’s good to be aware of what you’re doing when you expose something to the Internet, but also to keep perspective. A lot of people out there run services exposed to the Internet every day; they need to do so to make things work.


I was commenting a year or so back on the decline in VR titles released per year on Steam.
https://steamdb.info/stats/releases/?tagid=21978
That’s been going on for some time; it’s not looking really healthy.
Plus, I mean, unless you’re using a Threadiverse host as your home instance, how often are you typing its name?
Having a hyphen is RFC-conformant:
1. A "name" (Net, Host, Gateway, or Domain name) is a text string up
to 24 characters drawn from the alphabet (A-Z), digits (0-9), minus
sign (-), and period (.). Note that periods are only allowed when
they serve to delimit components of "domain style names". (See
RFC-921, "Domain Name System Implementation Schedule", for
background). No blank or space characters are permitted as part of a
name. No distinction is made between upper and lower case. The first
character must be an alpha character. The last character must not be
a minus sign or period. A host which serves as a GATEWAY should have
"-GATEWAY" or "-GW" as part of its name. Hosts which do not serve as
Internet gateways should not use "-GATEWAY" and "-GW" as part of
their names. A host which is a TAC should have "-TAC" as the last
part of its host name, if it is a DoD host. Single character names
or nicknames are not allowed.
And RFC 1123 updates that:
The syntax of a legal Internet host name was specified in RFC-952
[DNS:4]. One aspect of host name syntax is hereby changed: the
restriction on the first character is relaxed to allow either a
letter or a digit. Host software MUST support this more liberal
syntax.
Host software MUST handle host names of up to 63 characters and
SHOULD handle host names of up to 255 characters.
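To make the hyphen rule concrete, here’s a small validator sketched from the RFC 952 syntax as amended by RFC 1123 (the function name and structure are mine, not from either RFC):

```python
import re

# A label starts with a letter or digit (the RFC 1123 relaxation of RFC 952),
# may contain letters, digits, and hyphens, and must not end with a hyphen.
LABEL = re.compile(r"[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?")

def is_valid_hostname(name: str) -> bool:
    """Check a hostname against RFC 952 as amended by RFC 1123."""
    if not name or len(name) > 255:
        return False
    if name.endswith("."):        # allow one trailing dot (absolute name)
        name = name[:-1]
    return all(len(label) <= 63 and LABEL.fullmatch(label)
               for label in name.split("."))

print(is_valid_hostname("lemmy.today"))      # True
print(is_valid_hostname("my-host.example"))  # True: interior hyphens are fine
print(is_valid_hostname("-bad.example"))     # False: can't start with a hyphen
```

So a hyphen anywhere except the first or last character of a label is conformant.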
I imagine so. I don’t use Boost, but I posted about the behavior to the Boost community, so hopefully they’ll get it straightened out.


I actually wondered about Panasonic when I saw the parent comment and checked. Didn’t comment then, but probably should have, if others had the same thought.
https://www.slashgear.com/1846438/panasonic-tvs-who-makes-where-built/
Panasonic used to manufacture televisions in its own factories, initially in Japan, and then in other regions such as China, India, and the Czech Republic. However, between 2015 and 2022, it closed its Panasonic-owned factories and now outsources all TV manufacturing to other companies. Its mid- to high-end ranges are currently made by Chinese electronics manufacturer TCL. TCL’s primary manufacturing plants are in China, Vietnam, and Mexico. Most Panasonic TVs sold to U.S. customers are made in manufacturing plants in Tijuana or Ixtapaluca in Mexico. TCL also has manufacturing and assembly plants in Australia, Brazil, India, and Pakistan.


You mean just the brand, or the manufacturing?
I mean, branding something is trivial.
But if you want to manufacture it in Europe, then you have to compete against companies who are going to be manufacturing in China, and manufacturing wages are going to be lower in China, so it’s going to be at a price disadvantage.
I was just commenting yesterday where some guy wanted to buy a keyboard out of the EU or Canada instead of a Unicomp keyboard because he was pissed at the US. He was asking about buying a Cherry keyboard. Cherry just shut down their production in Germany after cheaper Chinese competition clobbered 'em.
If you want to have stuff manufactured in Europe, you’ve got kinda limited options.
Get some kind of patriotic “buy European” thing going, where people are intrinsically willing to pay a premium for things made in Europe.
Ban imports. My guess is that in general, Europe will not do this unless they have some negative externality, like national security, associated with the import (think, say, Russian natural gas), since it’s economically-inefficient.
Leverage some kind of other comparative advantage. Like, okay. Maybe one can’t have competitive unskilled assembly line workers. But maybe if there’s really amazing, world-leading industrial automation, so that there’s virtually no human labor marginal cost involved, and one scales production way up, it’s possible to eliminate enough of the assembly line labor costs to be competitive.


Unless you have some really serious hardware, 24 billion parameters is probably the maximum that would be practical for self-hosting on a reasonable hobbyist set-up.
Eh…I don’t know if you’d call it “really serious hardware”, but when I picked up my 128GB Framework Desktop, it was $2k (without storage), and that box is often described as being aimed at the hobbyist AI market. That’s pricier than most video cards, but an AMD Radeon RX 7900 XTX GPU was north of $1k, an Nvidia RTX 4090 was about $2k, and it looks like the Nvidia RTX 5090 is presently something over $3k (and rising) on eBay, well over MSRP. None of those GPUs are dedicated hardware aimed at doing AI compute, just high-end cards aimed at playing games that people have used to do AI stuff on.
I think that the largest LLM I’ve run on the Framework Desktop was a 106 billion parameter GLM model at Q4_K_M quantization. It was certainly usable, and I wasn’t trying to squeeze as large a model as possible on the thing. I’m sure that one could run substantially-larger models.
EDIT: Also, some of the newer LLMs are MoE-based, and for those, it’s not necessarily unreasonable to offload expert layers to main memory. If a particular expert isn’t being used, it doesn’t need to live in VRAM. That relaxes some of the hardware requirements, from needing a ton of VRAM to just needing a fair bit of VRAM plus a ton of main memory.
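To put rough numbers on that, a sketch with assumed figures: a GLM-4.5-Air-class model (106 billion total parameters, with something like 12 billion active per token) at roughly 0.57 bytes per weight for a Q4_K_M-style quantization; both numbers are ballpark assumptions on my part, not measured values:

```python
# Back-of-envelope: why MoE expert offload relaxes VRAM requirements.
# All figures here are illustrative assumptions, not measured values.

BYTES_PER_PARAM = 0.57          # ~4.5 bits/weight, Q4_K_M-ish quantization

def gib(n_bytes: float) -> float:
    return n_bytes / 2**30

total_params = 106e9            # total parameters (GLM-4.5-Air-class MoE)
active_params = 12e9            # parameters actually exercised per token

full_model = gib(total_params * BYTES_PER_PARAM)
active_only = gib(active_params * BYTES_PER_PARAM)

print(f"all weights resident:   ~{full_model:.0f} GiB")
print(f"active weights per tok: ~{active_only:.0f} GiB")
```

With the expert layers parked in main memory, only the shared layers plus the currently-routed experts need to live in fast memory, which is why a big-RAM box with a modest GPU can still run these models at usable speeds.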


That’s why they have the “Copilot+ PC” hardware requirement, because they’re using an NPU on the local machine.
searches
https://learn.microsoft.com/en-us/windows/ai/npu-devices/
Copilot+ PCs are a new class of Windows 11 hardware powered by a high-performance Neural Processing Unit (NPU) — a specialized computer chip for AI-intensive processes like real-time translations and image generation—that can perform more than 40 trillion operations per second (TOPS).
It’s not…terribly beefy. Like, I have a Framework Desktop with an APU and 128GB of memory that schlorps down 120W or something, and it substantially outdoes what you’re going to do on a laptop. And that in turn is weaker computationally than something like the big Nvidia hardware going into datacenters.
But it is doing local computation.


I’m kind of more-sympathetic to Microsoft than to some of the other companies involved.
Microsoft is trying to leverage the Windows platform that they control to do local LLM use. I’m not at all sure that there’s actually enough memory out there to do that, or that it’s cost-effective to put a ton of memory and compute capacity in everyone’s home rather than time-sharing hardware in datacenters. Nor am I sold that laptops — which many “Copilot+ PCs” are — are a fantastic place to be doing a lot of heavyweight parallel compute.
But…from a privacy standpoint, I kind of would like local LLMs to at least be available, even if they aren’t as affordable as cloud-based stuff. And Microsoft is at least supporting that route. A lot of companies are going to be oriented towards just doing AI stuff in the cloud.


You only need one piece of (timeless) advice regarding what to look for, really: if it looks too good to be true, it almost certainly is. Caveat emptor.
I mean…normally, yes, but because the situation has been changing so radically in such a short period of time, it probably is possible to get some bonkers deals in various niches, because the market hasn’t stabilized yet.
Like, a month and a half back, in early December, when prices had only been going up like crazy for a little while, I was posting some tiny retailers that still had RAM in stock at pre-price-increase rates that I could find on Google Shopping. IIRC the University of Virginia bookstore was one, as they didn’t check that purchasers were actually students. I warned that they’d probably be cleaned out as soon as scalpers got to them, and that if someone wanted memory, they should probably get it ASAP. Some days prior to that, there was a small PC parts store in Hawaii that had some (though that was out of stock by the next time I was looking and mentioned the bookstore).
That’s not to disagree with the point that @[email protected] is making, that this was awfully sketchy as a source, or your point that scavenging components off even a non-scam piece of secondhand non-functional hardware is risky. But in times of rapid change, it’s not impossible to find deals. In fact, it’s various parties doing so that cause prices to stabilize — anyone selling memory for way below market price is going to have scalpers grab it.


I’m not really a hardware person, but purely in terms of logic gates, making a memory circuit isn’t going to be hard. I mean, a lot of chips contain internal memory. I’m sure that anyone that can fabricate a chip can fabricate someone’s memory design that contains some amount of memory.
For PC use, there’s also going to be some interface hardware. Dunno how much sophistication is present there.
I’m assuming that the catch is that it’s not trivial to go out and make something competitive with what the PC memory manufacturers are making in price, density, and speed. Like, I don’t think that getting a microcontroller with 32 kB of onboard memory is going to be a problem. But that doesn’t really replace the kind of stuff that these guys are making.
EDIT: The other big thing to keep in mind is that this is a short-term problem, even if it’s a big problem. I mean, the problem isn’t the supply of memory over the long term. The problem is the supply of memory over the next couple of years. You can’t just build a factory and hire a workforce and get production going the moment that someone decides that they want several times more memory than the world has been producing to date.
So what’s interesting is really going to be solutions that can produce memory in the near term. Like, I have no doubt that given years of time, someone could set up a new memory manufacturer and facilities. But to get (scaled-up) production in a year, say? Fewer options there.


One more thought — I don’t think that, even if someone is willing to do so as a stopgap until memory production ramps up adequately, it’s possible to run Windows 11 on a DDR3-based system. DDR4 came out 12 years ago.


To copy my comment on the beehaw.org post, because they and lemmy.world aren’t federated:
This deal may make matters worse for more buyers, because PSMC used the Tongluo site to make legacy DRAM products – the kind of memory used in less advanced products. With the company now exiting the legacy chip biz, that memory will also become more scarce, giving the laws of supply and demand another moment in which to work their way on markets.
PSMC’s current DRAM capacity mainly relies on 25nm and 38nm nodes, which restricts DDR4 production to lower-density products.
I guess that that’s more DDR4 supply drying up. It’s going to be some very scarce years for memory until enough new production comes online.
EDIT:
https://tech.yahoo.com/computing/articles/amd-ryzen-chief-teases-return-201223682.html
AMD Ryzen chief teases return of older Zen 3 chips to fight soaring RAM prices — ‘That’s something we’re actively working on right now’
Restarting production of DDR4-capable hardware isn’t going to help nearly as much if nobody is producing DDR4. I guess that there’s still DDR4 memory to scavenge from existing computers.
EDIT2:
As RAM crisis intensifies, DDR3 motherboards are making an improbable comeback
Here’s one way to avoid paying an absolute fortune for RAM - forget DDR5 or even DDR4 memory, switch back to DDR3, as some folks are doing in China.
Now I regret throwing out the DDR3 memory that I had.


If I had to guess, part of the problem is probably “bigger” hardware moving into their space.
Like, phones have a lot of limitations for playing “heavyweight”, PC-style games:
Small battery.
Small screen.
Limited ability to dissipate heat.
Really limited space and the hardware tradeoffs that come with that.
Touchscreen controls, even with accelerometer, aren’t ideal for a lot of games, especially PC or console ports.
For a lot of those, if you can manage to lug a laptop with you, you’re probably better off.
Then you have stuff like the Steam Deck and a bunch of similar larger-than-phone game-oriented platforms showing up, and that eats even further into your market. Yeah, okay, a ROG Phone is smaller and lighter than a Steam Deck, but if you’re trying to deal with touchscreen controls by lugging along external control stuff, then you’re sacrificing some of that mobility.

I mean, I’m sure that there’s still a niche for heavyweight-game phone gaming, but it’s gonna have other parties eating away at the edges, narrowing it. You gotta want to play heavyweight games, not be willing to use larger-than-phone hardware, but be willing to spend a substantial amount of money on your phone (especially given the short EOL on the ROG Phone) to have that ability. My guess is that many of the people who won’t use other hardware for gaming stick with the phone because they’re price-sensitive enough to not want additional hardware platforms just to play games, so “users willing to spend a high premium on phone hardware to be able to game” may be a poor match for that market.


If oomkiller starts killing processes, then you’re running out of memory.
Well, you could want to not dig into swap.


This world is getting dumber and dumber.
Ehhh…I dunno.
Go back 20 years and we had similar articles, just about the Web, because it was new to a lot of people then.
searches
https://www.belfasttelegraph.co.uk/news/internet-killed-my-daughter/28397087.html
Internet killed my daughter
Were Simon and Natasha victims of the web?
Predators tell children how to kill themselves
And before that, I remember video games.
It happens periodically — something new shows up, and then you’ll have people concerned about any potential harm associated with it.
https://en.wikipedia.org/wiki/Moral_panic
A moral panic, also called a social panic, is a widespread feeling of fear that some evil person or thing threatens the values, interests, or well-being of a community or society.[1][2][3] It is “the process of arousing social concern over an issue”,[4] usually elicited by moral entrepreneurs and sensational mass media coverage, and exacerbated by politicians and lawmakers.[1][4] Moral panic can give rise to new laws aimed at controlling the community.[5]
Stanley Cohen, who developed the term, states that moral panic happens when “a condition, episode, person or group of persons emerges to become defined as a threat to societal values and interests”.[6] While the issues identified may be real, the claims “exaggerate the seriousness, extent, typicality and/or inevitability of harm”.[7] Moral panics are now studied in sociology and criminology, media studies, and cultural studies.[2][8] It is often academically considered irrational (see Cohen’s model of moral panic, below).
Examples of moral panic include the belief in widespread abduction of children by predatory pedophiles[9][10][11] and belief in ritual abuse of women and children by Satanic cults.[12] Some moral panics can become embedded in standard political discourse,[2] which include concepts such as the Red Scare[13] and terrorism.[14]
Media technologies
Main article: Media panic
The advent of any new medium of communication produces anxieties among those who deem themselves as protectors of childhood and culture. Their fears are often based on a lack of knowledge as to the actual capacities or usage of the medium. Moralizing organizations, such as those motivated by religion, commonly advocate censorship, while parents remain concerned.[8][40][41]
According to media studies professor Kirsten Drotner:[42]
[E]very time a new mass medium has entered the social scene, it has spurred public debates on social and cultural norms, debates that serve to reflect, negotiate and possibly revise these very norms.… In some cases, debate of a new medium brings about – indeed changes into – heated, emotional reactions … what may be defined as a media panic.
Recent manifestations of this kind of development include cyberbullying and sexting.[8]
I’m not sure that we’re doing better than people in the past did on this sort of thing, but I’m not sure that we’re doing worse, either.


I mean, human environments are intrinsically made for humanoids to navigate. Like, okay, we put stairs places, things like that. So in theory, yeah, a humanoid form makes sense if you want to stick robots in a human environment.
But in practice, I think that there are all kinds of problems to be solved with humans and robots interacting in the same space and getting robots to do human things. Even just basic safety stuff, much less being able to reasonably do general interactions in a human environment. Tesla spent a long time on FSD for its vehicles, and that’s a much-more-limited-scope problem.
Like, humanoid robots have been a thing in sci-fi for a long time, but I’m not sold that they’re a great near-term solution.
If you ever look at those Boston Dynamics demos, you’ll note that they do them in a (rather-scuffed-up) lab with safety glass and barriers and all that.
I’m not saying that it’s not possible to make a viable humanoid robot at some point. But I don’t think that the kind of thing that Musk has claimed it’ll be useful for, a sort of Rosie The Robot from The Jetsons, is likely to be at all reasonable for quite some time.