• 2 Posts
  • 317 Comments
Joined 2 years ago
Cake day: August 11th, 2023




  • The rapid-growth model only makes sense for people looking for investors and the promise of snagging a customer base once they're hooked.

    Valve has a lot to lose but mostly margins to gain.

    Listen, I'm all for pushing them toward ethical contributions to the ecosystem, but I also entirely understand them not doing it for charity's sake alone.

    Fair on the release part lol. I didn't know that, but I guess the ignore part is still an issue, since people want them to get it working with other hardware that's out of scope, or worse, Nvidia.



  • You want them to release SteamOS and somehow ignore all user feedback except for Steam hardware? Otherwise that's all cost, or significant brand risk.

    Tbh I'm not sure what the conversion rate to sales would actually be. The number of games sold on the Steam Machine vs. the average machine will be a better indicator of that, IMHO. The Steam Deck is biased here, in that supporting the form factor is also an important selling point for games on the Deck.

    I would rather they keep investing in the ecosystem than rush for growth and have to enshittify to keep it.


  • Largely, people pay for games regardless. From Steam's perspective, investing the store profits into Linux gaming is a market-risk reducer and a cost center for producing viable hardware platforms.

    It's not a revenue stream at the moment. If a million more people started running it tomorrow on non-Steam hardware and didn't adjust their game-buying habits, it would be a net loss for Valve, since their support costs would rise with no increase in revenue.

    The best case for them is that it acts as a conduit for good PR and user-generated content for the platform (i.e. mods, apps, and of course FOSS merge requests).







  • Definitely overkill lol. But I like it. Haven't found a more complete solution that doesn't feel like a comp-sci dissertation yet.

    The goal is pretty simple: make as much of it as possible declarative (Helm values, k8s manifests, Tofu, Ansible, cloud-init), in that order of preference, because as you go up the stack you get more state management for "free". Stick that in git, and test and deploy from that source as much as possible. Everything else is just about getting there as fast as possible, and keeping the 3-2-1 rule alive and well for it all (3 copies, 2 different media, 1 off-site).
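    The 3-2-1 rule at the end is easy to turn into a checkable invariant. A minimal Python sketch (the `BackupCopy` type and the example locations are made up for illustration, not part of the actual setup):

    ```python
    from dataclasses import dataclass

    @dataclass
    class BackupCopy:
        location: str
        media: str      # e.g. "disk", "tape", "s3" (illustrative labels)
        offsite: bool

    def satisfies_321(copies: list[BackupCopy]) -> bool:
        # 3 copies, 2 different media, 1 off-site
        return (
            len(copies) >= 3
            and len({c.media for c in copies}) >= 2
            and any(c.offsite for c in copies)
        )

    # Hypothetical layout: NAS, a mostly-offline node, and an encrypted bucket.
    copies = [
        BackupCopy("nas", "disk", False),
        BackupCopy("cold-node", "disk", False),
        BackupCopy("cloud-bucket", "s3", True),
    ]
    print(satisfies_321(copies))  # True
    ```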


  • Fleet from Rancher to deploy everything to k8s. Bare-metal management with Tinkerbell and Metal3 to manage my OS deployments to bare metal from k8s. Harvester is the OS/k8s platform, and all of its configs can be delivered at install time or as cloud-init k8s objects. Ansible for the switches (as KubeOVN gets better in Harvester, the default separate hardware might be removed); I'm not brave enough for cross-planning that yet. For backups I use Velero and ship them into the cloud encrypted, plus some nodes that I leave offline most of the time except to run backups and update them. I use Hauler manifests and a kube CronJob to grab images, Helm charts, RPMs, and ISOs into a local store. I use SOPS to store the secrets I need to bootstrap in git. OpenTofu for application configs that are painful in Helm. Ansible for everything else.
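    For the Hauler CronJob piece, a hedged sketch of what such a k8s object could look like. The image name, schedule, paths, and the exact `hauler` flags here are placeholders from memory, not the author's actual config; check the Hauler docs before using:

    ```yaml
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: artifact-sync          # placeholder name
    spec:
      schedule: "0 3 * * *"        # nightly
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
                - name: sync
                  image: registry.example.com/hauler:latest  # placeholder image
                  # pulls images/charts listed in a hauler manifest into the store
                  command: ["hauler", "store", "sync", "-f", "/config/manifest.yaml"]
                  volumeMounts:
                    - name: store
                      mountPath: /store
              volumes:
                - name: store
                  hostPath:
                    path: /var/lib/hauler-store   # placeholder local store path
    ```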

    For total rebuilds, I take all of that config and load it into a cloud-init script that I stick on a Rocky or SLES ISO, which, assuming the network is up enough to configure, rebuilds from scratch; then I have a manual step to restore lost data.
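    In that spirit, a minimal cloud-init user-data sketch for the rebuild path (the repo URL and bootstrap script name are invented placeholders, not the actual setup):

    ```yaml
    #cloud-config
    packages:
      - git
    runcmd:
      # placeholder repo URL: pull the declarative config down
      - git clone https://git.example.com/homelab.git /opt/homelab
      # placeholder entry point: kick off the rebuild from the repo
      - /opt/homelab/bootstrap.sh
    ```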

    That covers everything infra-wise except physical layout in a git repo. Just got a PiKVM V4 on order along with a PiKVM switch, so hopefully I can get more of the junk onto Metal3 for proper power control too, and fewer iPXE shenanigans.

    Next steps for me are CI/CD pipelines for deploying a mock version of the lab into Harvester as VMs, running integration tests, and, if they pass, merging the staged branch into prod. I do a little of that manually already, but I would really like to automate it. Once I do, I'll start running Renovate to grab the latest stable versions of things for me.
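    The staged-to-prod promotion gate can be sketched end to end with a throwaway git repo; in the real lab the test step would deploy mock VMs into Harvester and run the integration suite instead of the stand-in lambda used here. All names are illustrative:

    ```python
    import subprocess
    import tempfile

    def git(*args, cwd):
        # thin helper: run git, fail loudly, keep output quiet
        subprocess.run(["git", *args], cwd=cwd, check=True, capture_output=True)

    def promote_if_green(repo, run_tests):
        git("checkout", "staged", cwd=repo)
        if not run_tests():                 # integration gate
            return False
        git("checkout", "prod", cwd=repo)
        git("merge", "--ff-only", "staged", cwd=repo)  # promote only if fast-forward
        return True

    # Build a toy repo with a prod branch and one staged change.
    repo = tempfile.mkdtemp()
    git("init", "-b", "prod", cwd=repo)
    git("-c", "user.email=a@b", "-c", "user.name=lab",
        "commit", "--allow-empty", "-m", "base", cwd=repo)
    git("checkout", "-b", "staged", cwd=repo)
    git("-c", "user.email=a@b", "-c", "user.name=lab",
        "commit", "--allow-empty", "-m", "change", cwd=repo)

    print(promote_if_green(repo, run_tests=lambda: True))  # True
    ```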


  • As THE USB-C PD evangelist, I have to say: fair. Like, PD EPR is definitely reaching the limits of the USB-C form factor to me, and data over copper is a dead end at some point too.

    Still want every device I have on it. Though as we scale past the 240-watt EPR ceiling (and I do…) or to longer distances (also me), it's just going to have to be another interface, and probably another medium for data, for the protocol. So far MPO for data and, honestly, pogo pins for power are the best I'm seeing.

    Again, for everything that's not a serious power device (well pumps, servers, AC/heat pumps, power tools, etc.) or a serious data server/client, it's fine, which is seriously impressive.

    Rant over. I also like the idea of better hardware stats reported to the OS. It's one reason I fell in love with software RAID over hardware RAID.