  • Interesting question. I’d imagine that one major limit would be the number of cores your CPU has available. Once you get to more VMs than cores, I’d guess things would quickly grind to a halt?

    But I wonder if you could even get anywhere near that point, as from searching, only L2 VMs are mentioned on various sites, and even then with warnings of severe performance limitations and for development testing only. While L3 might work, the problems may get so bad that you can’t practically go beyond that level?
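
    If anyone wants to experiment, a quick sketch for checking the basics from the host (assuming Linux with KVM; the module is kvm_intel on Intel and kvm_amd on AMD):

    ```
    # How many logical cores the host has to share out
    nproc

    # Whether nested virtualisation is enabled in KVM (prints Y or 1 if so)
    cat /sys/module/kvm_intel/parameters/nested   # Intel CPUs
    cat /sys/module/kvm_amd/parameters/nested     # AMD CPUs
    ```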



  • The key is getting out at the right time, and that is weighted massively against small investors. The big investors and institutions control the market and can move quickly, while small investors cannot.

    Tesla is not doing well - look at its falling sales. It’s a risky stock to hold, and the AI companies are also highly risky.

    That doesn’t mean don’t hold them - all anyone is really saying is that these are high-risk investments, and at some point they are probably going to crash because it’s a bubble.

    That doesn’t necessarily mean “don’t invest”. It does certainly mean be prepared to get out fast, and only use money you can afford to lose when investing in such high-risk stocks.


  • Xwayland is an X11 server that runs under Wayland. It acts as a compatibility layer so that native X11 programs that don’t support Wayland can still be run. The system largely determines when to use Xwayland; it’s not generally something the user does.

    Wine, and its gaming derivative Proton, are normally run as X11 applications. There is a Wayland driver for Wine - it’s not perfect, and not widely used by default yet, but it does generally work. Still, at the moment Wine and Proton usually run with X11 (and so Xwayland) by default, even on Wayland.

    When there are issues under Wayland, it’s generally to do with Nvidia drivers rather than Xwayland or X11. But the Wine Wayland driver can still give a performance boost. It can also cause its own issues, so it’s best used on a case-by-case basis.

    Some people do set Wine or Proton to use the Wayland driver when running a Wayland session; for example, with Steam and Proton-GE you just add PROTON_ENABLE_WAYLAND=1 to the game’s launch options and it’ll run directly under Wayland.
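
    Concretely, in Steam that looks like this (%command% is Steam’s standard placeholder for the game’s own launch command):

    ```
    # Right-click the game → Properties → Launch Options
    PROTON_ENABLE_WAYLAND=1 %command%
    ```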

    It may be worth OP switching to Wayland to see if they can get good gaming performance in Wine/Proton that way. However, it’s more likely the Nvidia drivers are the cause of the performance issues under Wayland.


  • So to be clear: Wayland + Pantheon resolves the issue but with a performance hit in games, while X11 + Pantheon has the issue but otherwise good performance in games?

    Presumably all the system packages and software are up to date. If not, then do an update.

    Starting with the basics, what kind of mouse and keyboard do you have? If they are wireless, how are they connecting to your device? Note that some manufacturers don’t provide drivers for Linux, so it’s worth seeing if your brand and device are supported (e.g. Logitech and its wireless dongle; you need Solaar to get basic support, and switching to Bluetooth may be best if available).
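
    If it is a Logitech receiver, for instance, Solaar’s command line can confirm the devices are actually being detected (assuming Solaar is installed):

    ```
    # List receivers and the devices paired to them
    solaar show
    ```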

    If they are wired, have you tried plugging the keyboard and mouse into different ports on your PC? If they’re on the same USB bus, that could be contributing to the problem. Although that really shouldn’t be an issue at all with modern devices, it might be enough to get round whatever the issue is if one of your devices is conflicting.
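
    A quick way to see which bus each device is on:

    ```
    # Show the USB topology; devices listed under the same Bus share it
    lsusb -t
    ```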

    Presumably, if there is a power profile setting in Pantheon, you’ve set it to performance rather than something like balanced? This ensures that everything is running at full capacity. There are rare polling issues with Wine and some USB devices, and one solution is ensuring the CPU is on the performance rather than the balanced power setting to prevent a bottleneck.
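
    If your system uses power-profiles-daemon (an assumption - elementary OS may manage this elsewhere), you can check and set this from a terminal:

    ```
    # Show the current power profile
    powerprofilesctl get

    # Switch to the performance profile
    powerprofilesctl set performance
    ```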

    If the issue is still persisting, I would next install a different desktop environment to see if the issue persists there, which helps check whether it’s a problem with Pantheon itself. Install a lightweight desktop environment such as XFCE, try it out in both X11 and Wayland, and see if the problem persists. If it doesn’t, submit a bug report to the Pantheon developers and consider switching to another DE (such as KDE or GNOME). If it does, that at least helps exclude Pantheon as the cause. Installing another DE can be messy and difficult to remove cleanly, so one method is to create a backup of your system using Timeshift, then restore it after trying XFCE to bring your system back to its original state.
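
    A rough sketch of that sequence on an Ubuntu-based system like elementary OS (package names are examples and may vary):

    ```
    # Snapshot the current system first so it can be rolled back
    sudo timeshift --create --comments "before XFCE test"

    # Install a lightweight DE to test with
    sudo apt install xfce4
    # ...log out, pick XFCE at the login screen, and test...

    # Afterwards, roll back to the snapshot
    sudo timeshift --list
    sudo timeshift --restore
    ```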

    If the issue persists, you could also try switching to an older version of the Nvidia drivers or to the open-source Nouveau drivers. If that resolves it, it helps narrow the problem down, although if it doesn’t, you still can’t entirely exclude the graphics drivers as the cause.
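
    On an Ubuntu base that might look like the following (the driver version here is just an example; use whatever ubuntu-drivers actually lists for your card):

    ```
    # See which driver versions are available for your hardware
    ubuntu-drivers devices

    # Example: install an older proprietary branch
    sudo apt install nvidia-driver-470

    # Or remove the proprietary driver entirely to fall back to Nouveau
    sudo apt purge 'nvidia-*'
    ```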

    I think, assuming all the basics make no difference, the most telling test will be whether switching from Pantheon to another DE helps - in both X11 and Wayland. Pantheon is relatively niche, and DEs are important factors in themselves when it comes to both performance and bugs. If the issue persists beyond a DE test, then I’d be most suspicious of a driver issue, either with the devices themselves or with the graphics drivers.

    Not much more I can think of beyond that at the moment, sorry.


  • It’s about short-term vs long-term costs, and AWS has priced itself to be cheaper in the short term but a bit more expensive in the long term.

    Companies are more focused on the short term - even if something like AWS is more expensive long term, if it saves money in the short term that money can be used for something else.

    Also, many companies don’t have the money upfront to build out their own infrastructure quickly, but can afford gradual longer-term costs. The hope would be that, even though it’s more expensive, they reach profitable scale faster, making the extra expense paid to AWS worth it.

    This is how a lot of outsourcing works. And it’s exacerbated by many companies being very short-term and stock-price focused. Companies could invest in their own infrastructure for long-term gain, but they often favour short-term profit boosts and cost reductions to boost their share price or pay out to shareholders.

    Companies frequently do things not in their long-term interests for this reason. For example, companies that own their own land and buildings sell them off and rent them back. Short term it gives them a financial boost; long term it’s a permanent cost and a loss of assets.

    In Signal’s case it’s less of a choice; it’s funded by donations and just doesn’t have the money to build out its own data centre network. Donations will support ongoing gradual and scaling costs, but it’s unlikely they’d ever get a huge tranche of cash to be able to build data centres worldwide. They should still be using multiple providers, though, and should also look to build up some infrastructure of their own for resilience and lower long-term costs.


  • It does make sense for Signal, as this is a free app that does not make money from advertising. It makes money from donations.

    So every single message, every single user, is a cost without any ongoing revenue to pay for it. You’re right about the long run but you’d need the cash up front to build out that infrastructure in the short term.

    AWS is cheap in the sense that instead of an initial outlay for hardware, you largely only pay for actual use and can scale up and down easily as a result. The cost per user is probably going to be higher than if you were to completely self-host long term, but self-hosting would mean finding many millions to build and maintain data centres all around the world. Not attractive for an organisation living hand to mouth.

    However, what does not make sense is being so reliant on AWS alone; using other providers would add more resilience to the network.

    Unfortunately this comes back to the real issue - AWS is an example of a big tech company trying to dominate a market with cheap services now for the potential benefit of a long-term monopoly and raised prices in the future. They have around 30% market share, and an Amazon outage is already highly disruptive. Even at 30%, we’re at the point of end users feeling locked in.



  • So in terms of hardware, I use a Raspberry Pi 5 to host my server stack, including Jellyfin with 4K content. I have an NVMe module with a 500GB drive and an external HDD with 4TB of space via USB. The Pi 5 is headless and accessed directly via SSH or remote desktop.

    The Raspberry Pi 5 has H.265 hardware decoding, and if you’re serving one video at a time to one client you shouldn’t have any issues, up to and including 4K. It will of course use resources to transcode if the client can’t support the content directly, but the experience should be smooth for one user.

    For more clients it will depend on how much heavy lifting the clients do. In my case I have a mini PC plugged into my TV; I stream content from my Pi 5 to the mini PC, and the mini PC does the heavy lifting in terms of decoding. The Pi 5’s hardware is barely working at all; it just transfers the video and the client does the hard work. If all your clients are capable, then such a setup would work with the Pi 5.

    An issue would come if you wanted to stream your content to multiple devices at the same time and the clients don’t directly support H.265 content. In that case, the Pi 5 would have to transcode the content to another format bit by bit as it streams it to each client. It’d cope with one user for sure, but I don’t know how many simultaneous clients it could support at 1440p.
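
    If you want a rough idea in advance, you can benchmark a software transcode on the device itself with ffmpeg (a sketch; sample.mkv stands in for one of your own H.265 files):

    ```
    # Transcode the video stream to H.264 in software and discard the output;
    # the reported speed (e.g. 1.5x) hints at how many real-time streams it could sustain
    ffmpeg -benchmark -i sample.mkv -map 0:v -c:v libx264 -preset veryfast -f null -
    ```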

    The other consideration is what other tools are being used on the server at the same time. Again, I live alone, so I’m generally the only user of my Pi 5 server’s services. Many services are low-powered, but I do find things like importing a stack of PDFs into Paperless-ngx surprisingly CPU-intensive, and in that case the device could struggle if also expected to transcode content.

    I think from what you describe the Pi 5 could work, but you may also want to look at a higher-powered mini PC if your budget allows.

    For reference, I use DietPi as the distro on my server, with a mix of DietPi packages (which are very well made for easy install and configuration) and Docker. I am using quite a few Docker stacks now due to the convenience of deployment. DietPi is Debian-based, with a focus on providing pre-configured packages to make setup easy, but it is still a full Debian system and anything can be deployed on it.

    Obviously the other consideration is that the Pi 5 is an ARM device while a mini PC would be x86_64. But so far I’ve not found any tools or software I’ve wanted that aren’t compiled and available for the Pi 5, either via DietPi or Docker; ARM devices are popular in this realm. I did come across a bug in Docker on ARM devices which broke my VPN setup - that was very frustrating, and I had to downgrade Docker a few months ago while awaiting the fix. That may be worth noting, given Docker is very important in this realm and most servers globally are still x86.

    If I were in your position with $200, I’d buy the maximum CPU and GPU capability I could in one device, so I’d actually lean towards a mini PC. If you want to save money then the Pi 5 is reasonable value, but you’d need to include a case and may want to consider an NVMe or SSD companion board. Those costs add up, and the value of the mini PC may compare better as an all-in-one device, particularly if you can get a good one second-hand. There are also other SBCs that may offer even better value or more power than a Pi 5.

    Also bear in mind that I have both a mini PC and a Pi 5; they do different things. The Pi 5 is the server, but the mini PC is a versatile device and I play games on it, for example. If you will only have one server device and pre-existing smart TVs etc., you’ll be more reliant on the server’s capabilities, so again you may want to opt for the most powerful device you can afford at your price point.


  • One thing that the article misses is that there are different types of users.

    There are plenty of users who are basically computer illiterate and only care that things “just work”. Those users happily cede control to the OS because they don’t know how to manage it themselves, don’t want to learn, and don’t want to be taken advantage of by malware. That is perfectly legitimate and is probably the majority of end users.

    Then you have the tech-savvy users who want to push their devices to do more, whether that’s taking full advantage of their hardware, optimising their system, or trying out different software. Those users want and need access to their hardware. They’re a minority, but still a large and substantial group on a global scale, and always will be.

    As the article says, the problem is that the tech companies are using protecting and serving the first group as an opportunity to take control and lock all users out of their own hardware. It comes down to bad regulation by governments, which is in turn driven by extreme ignorance among politicians and the legal system.

    Apple is nothing short of an abusive monopoly in its own market. It sells this as a strength, but the EU at least has made some moves to break the monopoly open. Apple keeps playing the security card while in reality it’s about protecting the golden goose - a 30% cut of everything the user does with their device is insane.

    Google dreams of being the same; every step in this direction will be sold as being in the interests of security but in reality it’s with an eye on the control and money Apple derives from iOS without having to lift a finger.

    I’d never buy an Apple device due to the Apple tax. The Apple tax affects all consumers though: companies price a service on Apple to cover the 30% cut, then often charge the same on Android and pocket the difference. We’re all being screwed by these digital monopolies.

    In some ways I hope Google does lock down Android, as this will increase the user base of tech-savvy people who would then actively support a third OS, like an Android fork or a pure Linux phone. I’m personally very interested in a Linux phone now, for example. Previously it was a curio I wanted to try; now, thanks to Google’s actions, it’s feeling like something essential that I need to jump to as soon as it’s feasible.

    My worry is that for many people banking and payment apps are essential and may prevent them switching, as they’re currently very locked into the iOS or Android ecosystems. I might run two devices though: one liberated device as a daily driver, plus going back to using my physical cards, and a second Android device to access banking apps (something I already really only do from home anyway, so I don’t need it on the go; ironically it’ll probably be more secure that way!).




  • OpenOffice? It hasn’t been touched in a decade. LibreOffice is the true continuation of OpenOffice; it was forked off after Oracle bought Sun, when OpenOffice had been left with poor governance and slow updates.

    OpenOffice finally ended up under the Apache Foundation but hasn’t been maintained since 2014.

    LibreOffice has had continual development with both bug fixes and new features, and The Document Foundation gives it good governance and independence as an open-source project.

    Honestly, switch to LibreOffice.


  • Having experienced instability, I’d say that is a pretty good reason. It’s one of those things that doesn’t matter until it happens to you, and I think everyone assumes it won’t happen to them.

    Having said that, it can be managed. It’s infuriating when your OS just stops working, but if you have good backups and can roll back the system quickly, it’s fine.

    Rolling releases are great for having the latest versions of software, but running one is also like constantly being a beta tester. And the distro’s approach to rolling release makes a big difference.

    Manjaro has a small development team compared to other big-name rolling releases, so it just isn’t able to do the same level of testing and preparation as a better-resourced distro like Fedora, for example. It does a reasonably good job with a small team, but the lack of resources inevitably makes things more difficult.

    Manjaro is also Arch-based but it’s not Arch, and one source of breakage is using the AUR. I think people think of Manjaro as just a more convenient version of Arch, but Manjaro is its own distro, and using packages from the AUR can break things. People seem to forget that Arch is bleeding edge while Manjaro holds packages back for testing, so the two distros are not in sync.

    If Manjaro is used as Manjaro and not treated as Arch-lite, then it’s a fine distro. But it’s somewhat pushed as an easier-to-use version of Arch, so inexperienced users in particular can get into trouble trying to use things like the AUR. Manjaro itself, though, is generally fine.

    I personally don’t recommend Manjaro to people. That’s because, for me, there are better-resourced rolling-release distros (such as openSUSE or Fedora), better options for stable systems, and if users want Arch then Arch itself is the way to go. Manjaro is absolutely fine, but I wouldn’t say it’s the best option in any category, including Arch-based distros.


  • I’ve always loved the Denver Airport conspiracy theory - that it is actually the secret headquarters of the Illuminati or other organisations. It was $2bn over budget and has tunnels under it, which has led people to claim there are also secret bunkers under the airport. It also has a few bizarre pieces of art within it.

    I think it’s just an airport with ugly aesthetic choices, but I love that, of all the places people think a global secret society would base itself, they’d pick Denver Airport.

    Apparently there are 6 underground levels at the airport - but they’re used to run an airport. And the tunnels were for a failed automated baggage transport system. The art is just art.

    https://allthatsinteresting.com/denver-airport-conspiracy


  • OK, first: do you need both the iGPU and the Nvidia GPU working at the same time? Your desktop environment will expect to work with one card at a time, so plugging screens into both while you’re doing setup tasks could cause problems. Linux can certainly work with two GPUs, but I wouldn’t have both active with displays plugged in from the start when installing a fresh system.

    It’s telling that you can’t even get the official Nvidia drivers working. You then “fix” things by removing the cable from the iGPU after having all these problems and installing the Nouveau drivers. I think this could be the source of your problems.

    I think there is a real risk your whole system is misconfigured due to both cards being “active” while you set things up.

    While it’s probably fixable within Linux, personally I’d go back to basics: plug the displays only into your Nvidia card, do a fresh install of Tumbleweed, and see where you’re at.
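
    Once reinstalled, a quick sanity check from a terminal that the proprietary driver is actually bound to the card:

    ```
    # Show each GPU and the kernel driver currently in use for it
    lspci -k | grep -EA3 'VGA|3D'

    # This file only exists when the proprietary Nvidia driver is loaded
    cat /proc/driver/nvidia/version
    ```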

    You could even go further and disable the iGPU in your BIOS and then install Tumbleweed. However, I would be reluctant to do that, as if your Nvidia card is broken it can be difficult to undo without a working display. Alternatively, you could use the iGPU for the install (nothing plugged into the Nvidia card), do the install, and then switch over and set up the Nvidia card within Linux.

    Regardless, I’d say trying to get DX11 working with the Nvidia card on the open drivers right now is just pushing the problem further and further down the line. You’ll have more problems long term and never fix the root cause. The first issue is getting the official Nvidia drivers actually working, rather than working around that problem.



  • I think they were just pointing out that this is the problem with subscription services. You own nothing and you’re screwed when the service goes down.

    It really doesn’t take “ludicrous amounts of time and money” to build a private library. It’s interesting how the subscription giants have managed to change people’s perceptions: when you buy content to keep, you retain some of the value, but when you subscribe you’re just getting a time pass to someone else’s library and won’t see that money again.

    They sold the proposition on convenience when everything was in one place, but now that it’s all fragmented, it’s a waste of money.

    And of course plenty of people are building media libraries for free by sailing the seas.


  • Quite a bad compromise of Xubuntu’s and Canonical’s security, and also embarrassing.

    They’re being a bit vague and dismissive about the hack at the moment. As far as I can see, only the 24.04 version is now linked on the downloads page (I’m not even sure the download link works). The recent 25.10 release (released 10th Oct) is no longer visible, and the blog posts that are visible talk about testing for 21.04 (posts from 2021).

    So presumably they’ve reverted to an archived version of their site while they investigate?


  • I have played with Arch in a VM - I learnt a lot about how Linux works by setting it up. The tutorials and guides are good, and you end up with a lean system with just what you want in it, pretty much all configured directly by you.

    I can see why Arch is a popular distro and a popular base for other distros (like Manjaro and the currently rapidly growing CachyOS).

    But I’m not at the point where I’d want to main it. My issue is that because everything is set up by me, it’s a much more unique system, so if something breaks it could be any of a myriad of my own choices that are the cause. I’m nervous about having to problem-solve things when they break, and solutions not working because of how my particular system is configured. It’s probably a bit irrational, but I do quite like being on a distro where lots of other people have the exact same configuration as me, so when things break there is lots of generic help out there.

    That said, I would consider Arch-based distros like Manjaro or CachyOS, as they are in that vein of mostly standardised distros.