

Right. There is probably a certain point where other hardware support is just a happy accident or a minuscule effort. It's just not there yet for them, though it is getting close!


Why should they do it again?


The rapid-growth model only makes sense for people looking for investors and the promise of snagging a customer base once they're hooked.
Valve has a lot to lose but mostly margins to gain.
Listen, I'm for keeping them pushing towards ethical contributions to the ecosystem, but I also entirely understand them not doing so just for charity's sake alone.
Fair on the release part lol. I didn't know that, but I guess the ignore part is still an issue, since people want them to get it working with other hardware that's out of scope, or worse, Nvidia.


Project scope. It makes more sense for them to make a distro that solves currently unsolved spaces directly related to their market (merging PC with handhelds, consoles, and VR). More scope means either more hours or spreading the existing hours across the added work.
They have been contributing a lot back upstream, which does help Linux gaming in general.


You want them to release SteamOS and somehow ignore all user feedback except for Steam hardware? Otherwise that's all cost, or significant brand risk.
Tbh I'm not sure what the conversion rate to sales actually would be. The number of games sold on the Steam Machine vs. the average machine will be a better indicator of that IMHO. The Steam Deck is biased, in that showing off form-factor support is also an important point for games on the Deck.
I would rather they keep investing in the ecosystem than try to rush for growth and have to enshittify to keep it.


Largely, people pay for games regardless. From Steam's perspective, investing the store profits into Linux gaming is a market-risk reducer and a cost center for producing viable hardware platforms.
It's not a revenue stream at the moment. If a million more people started running it tomorrow on non-Steam hardware and didn't adjust their game-buying habits, it would be a net loss for Valve, as their support costs would rise with no increase in revenue.
The best case for them is that it acts as a conduit for good PR and user-generated content for the platform (i.e. mods, apps, and of course FOSS merge requests).
Sweet! Great place for showing off the projects and companies that are truly working towards respects-your-freedom computing!
I would add Oxide to the server list; they are definitely in this space.


Not the IP but hope this git link helps: https://gitlab.com/here_forawhile/nanogram.git
Tarball link if that is better for you: https://gitlab.com/here_forawhile/nanogram/-/archive/main/nanogram-main.tar.gz?ref_type=heads


Was macOS at work, now a Linux dev machine. It's a big upgrade.
To be honest, all of those are web apps now, shrug. Zoom, Slack, Teams, Docs, Sheets, <insert word-named app here>, all open in the browser. So IDC what the OS is for them. Linux zero-touch deployments are still a work in progress IMHO, so I get why they aren't there yet for a lot of offices, but we are closer now than ever (thanks, atomic OSes!).


Bazzite and Kinoite, though I use Distrobox and k8s a lot for messing with other distros/apps. VSCodium and Neovim. VSCodium is loaded up with nearly anything IaC- or Kubernetes-related, plus Continue for some AI stuff (pointed at local models and Mistral). Also heavily opinionated stuff for Python like Black, etc. (I want my IDE to yell at me to make better code). Some GitHub and GitLab add-ons too. Nvim is just as-is.


Definitely overkill lol. But I like it. Haven't found a more complete solution that doesn't feel like a comp-sci dissertation yet.
The goal is pretty simple: put as much as possible into Helm values, k8s manifests, Tofu, Ansible, and cloud-init, in that order of preference, because as you go up that stack you get more state management for “free”. Stick that in Git, and test and deploy from that source as much as possible. Everything else is just about getting there as fast as possible, and keeping the 3-2-1 rule alive and well for it all (3 backups, 2 different media, 1 off-site).
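To make that concrete, here's a rough sketch of what "Helm values in Git" looks like with a Fleet fleet.yaml. The chart, repo URL, version, and values here are just placeholders, not my actual stack.

```yaml
# fleet.yaml at the root of a bundle directory in the Git repo.
# Fleet renders the chart with these values and keeps the release reconciled.
defaultNamespace: monitoring
helm:
  repo: https://prometheus-community.github.io/helm-charts
  chart: kube-prometheus-stack
  version: "58.1.0"        # placeholder; pin whatever you actually run
  releaseName: kube-prometheus-stack
  values:
    grafana:
      enabled: true        # config lives here in Git, not hand-edited in the cluster
```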


Fleet from Rancher to deploy everything to k8s. Bare-metal management with Tinkerbell and Metal3 to manage my OS deployments to bare metal from k8s. Harvester is the OS/k8s platform, and all of its configs can be delivered at install time or as cloud-init k8s objects. Ansible for the switches (as KubeOVN gets better in Harvester, the separate switch hardware might be removed by default); I'm not brave enough for cross-planning that yet. For backups I use Velero and shoot that into the cloud encrypted, plus some nodes that I leave offline most of the time except to do backups and update them. I use Hauler manifests and a kube CronJob to grab images, Helm charts, RPMs, and ISOs into a local store. I use SOPS to store the secrets I need to bootstrap in Git. OpenTofu for application configs that are painful in Helm. Ansible for everything else.
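For flavor, the Velero side is mostly just Schedule objects like this one (the name, cron time, and storage location are placeholders; encryption in my case happens at the bucket/provider layer, so it doesn't show up in the object):

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: nightly-backup            # placeholder name
  namespace: velero
spec:
  schedule: "0 3 * * *"           # nightly at 03:00
  template:
    includedNamespaces:
      - "*"                       # grab everything; trim per cluster as needed
    storageLocation: offsite-s3   # a BackupStorageLocation pointing at the cloud bucket
    ttl: 720h0m0s                 # keep roughly 30 days of backups
```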
For total rebuilds I take all of that config and load it into a cloud-init script that I stick on a Rocky or SLES ISO which, assuming the network is up enough to configure, rebuilds from scratch; then I have a manual step to restore lost data.
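The cloud-init part is nothing fancy, basically "install git and ansible, clone the repo, run the bootstrap". A minimal sketch, with the repo URL and script path made up:

```yaml
#cloud-config
# Seed node for a total rebuild: pull the infra repo and kick off the bootstrap.
hostname: rebuild-seed
package_update: true
packages:
  - git
  - ansible-core
write_files:
  - path: /etc/motd
    permissions: "0644"
    content: |
      Rebuild seed node -- data restore is still a manual step afterwards.
runcmd:
  - git clone https://gitlab.example.com/homelab/infra.git /opt/infra
  - /opt/infra/bootstrap.sh       # hypothetical entry point that reapplies the configs
```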
That covers everything infra except physical layout in a Git repo. Just got a PiKVM V4 on order along with a PiKVM switch, so hopefully I can get more of the junk onto Metal3 for proper power control too, and fewer iPXE shenanigans.
Next steps for me are CI/CD pipelines for deploying a mock version of the lab into Harvester as VMs, running integration tests, and if they pass, merging the staged branch into prod. I do that manually a little already but would really like to automate it. Once I do that, I'll start running Renovate to grab the latest stable versions of stuff for me.
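Roughly what I'm picturing, as a GitLab CI sketch. The job scripts and the token are placeholders, and the last step only opens the MR from staged into prod rather than merging on its own:

```yaml
# .gitlab-ci.yml sketch for the staged -> prod promotion flow.
stages:
  - deploy-mock
  - integration-test
  - promote

deploy-mock-lab:
  stage: deploy-mock
  rules:
    - if: '$CI_COMMIT_BRANCH == "staged"'
  script:
    - ./ci/deploy-mock-lab.sh          # hypothetical: stands the lab up as Harvester VMs

integration-tests:
  stage: integration-test
  rules:
    - if: '$CI_COMMIT_BRANCH == "staged"'
  script:
    - ./ci/run-integration-tests.sh    # hypothetical test harness against the mock lab

open-promotion-mr:
  stage: promote
  rules:
    - if: '$CI_COMMIT_BRANCH == "staged"'
  script:
    # Opens an MR from staged into prod via the API; $PROMOTE_TOKEN is a project
    # access token with api scope stored as a CI/CD variable.
    - >
      curl --request POST --header "PRIVATE-TOKEN: $PROMOTE_TOKEN"
      --data "source_branch=staged&target_branch=prod&title=Promote staged to prod"
      "$CI_API_V4_URL/projects/$CI_PROJECT_ID/merge_requests"
```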


As THE USB-C PD evangelist, I have to say: fair. Like, PD EPR is definitely reaching the limits of the USB-C form factor to me, and data over copper is a dead end at some point too.
Still want every device I have on it. Though as we scale past the 260-watt range (and I do…) or to longer distances (also me), it's just going to have to be another interface, and probably another medium for data, for the protocol. So far MPO for data and, honestly, pogo pins for power are the best I'm seeing.
Again, for everything that's not a serious power device (well pumps, servers, AC/heat pumps, power tools, etc.) or a serious data server/client, it's fine, which is seriously impressive.
Rant over. I also like the idea of better hardware stats reported to the OS. It's one reason I fell in love with software RAID over hardware RAID.
GitLab CI is my go-to for Git-repo-based things (unit tests, integration tests, etc.). Fleet through Rancher for real deployments (it manages and maintains state, because Kubernetes). Tekton is my in-between catch-all.
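The Fleet side of that is basically one GitRepo object pointing at the repo, something in this shape (repo URL, branch, and paths are placeholders):

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: homelab-apps               # placeholder name
  namespace: fleet-default         # Fleet's workspace for downstream clusters
spec:
  repo: https://gitlab.example.com/homelab/infra.git
  branch: prod
  paths:
    - apps/                        # directories containing fleet.yaml bundles
  targets:
    - name: all
      clusterSelector: {}          # match every registered cluster
```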


Are people paying for SteamOS? I thought the only revenue streams around it were the Steam Deck and soon the Steam Machine and the VR thing.
Largely it's a risk-reduction thing for them. Otherwise they're dependent on a monopolistic OS from a competitor that's largely uninterested in collaboration.


Servo is the one I follow in that space


Hopefully work on open-source firmware gets going, like https://github.com/koalazak/dorita980. There seems to be more movement on the server side though: https://github.com/ia74/roomba_rest980
Another option is hardware-hacking it: https://github.com/meech-ward/roomba/tree/main
Tbh the amount of cameras and microphones we have that upload to external, unaudited servers (“the cloud”) is insane to me. This just adds to that worry, since the scheme gave them ownership over our devices and privacy, which means they can also sell that.
This is how I use Kubernetes (specifically Harvester HCI and some lighter RKE2 nodes): just one big computer with lots of nodes. Still working on getting the plumbing fully figured out for virtual desktops outputting to video devices.
To be fair, SELinux always seems like THE answer, with the flexibility it provides, and AppArmor being just SELinux-light…
It would make more sense to me to have better support for leveraging SELinux primitives to accomplish the same things. I, at least, don't know of any LSM features that can't be covered by user:role:type:level:category plus namespaces?
The issue is always that that info is hard to know sometimes, and we programmers can barely stop ourselves from running as root with all files in 777 mode, let alone conceptualize those other attributes for files and services.
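For anyone who hasn't stared at those before, they're literally the fields of an SELinux context, and in k8s land they show up almost verbatim on a pod's security context. A toy example (the image and the level/categories are made-up values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: selinux-demo
spec:
  securityContext:
    seLinuxOptions:
      user: system_u
      role: system_r
      type: container_t
      level: "s0:c123,c456"        # sensitivity level plus MCS category pair
  containers:
    - name: app
      image: docker.io/library/busybox:1.36
      command: ["sleep", "3600"]
```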