

Only 40%? Would have thought it would be much higher. Don’t more projects generally fail than that without being in a bubble?
The attack is known as the evil maid attack. It requires repeated access to the device. Basically, if you can compromise the bootloader you can inject a keylogger to sniff out the encryption key the next time someone unlocks the device. This is what secure boot is meant to help protect against (though I believe that has also been compromised).
But realistically very few people need to worry about that type of attack. Encryption is good enough for most people. And if you don’t have your system encrypted then it does not matter what bootloader you use as anyone can boot any live usb to read your data.
Well, that is the first option they suggest:
Option 1: Give Linux Mint a try
There is not really one best distro out there - or else there would only be one distro. But for someone new, basically any mainstream/popular distro will be good enough for your use case. The best one for you will come down to personal preference and will likely - at least at the start - be centered on which desktop environment you like the most. KDE will probably feel more like Windows, though GNOME, I think, tends to be the default on most distros. You will find popular distros have multiple flavors with various desktop environments as well. Your best bet is to download a few, put them on a USB, and try them out before installing. That will give you a better idea of what you want. Or just pick one and go for it if you don’t care that much - it will probably be good enough.
Wasn’t this the fork created by the guy who got banned from X development for causing a large amount of churn that kept introducing breaking changes and regressions?
It does not matter much whether the battery is left plugged in or not. Far more important is the state of the battery. All LiPo batteries degrade over time, but they can degrade faster or slower depending on the state they are stored in. They degrade faster at higher charge levels, when stored in hotter environments, or when they go through more charge/discharge cycles. Older battery technology also degraded faster in general; newer batteries tend to last longer in sub-optimal conditions.
Apart from newer battery technology itself, battery monitoring and charging technology has also improved. A lot of modern laptops have smarter charging circuitry that lets them stop charging before the battery is at 100% - sometimes configurable in the BIOS, sometimes controllable via the OS. This can help a lot to preserve battery life, especially if you leave the laptop plugged in, as the battery spends less time at 100% charge. Older devices also tended to run hotter for longer periods of time, even when idle. Both of these combined with worse battery technology would lead to batteries degrading quite a lot faster if you left them plugged in all the time - hence where the advice came from (note that removing the battery at 100% charge was also not great for it - it is better to store LiPo batteries at 40-60% charge - but it did still save it from the heat of the device). But when set up correctly, modern devices suffer from this a lot less, so it is much less important to remove the battery at all - I doubt you would really notice the difference overall on modern systems.
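For the "controllable via the OS" part: on Linux, many modern laptops expose the charge-stop level through the standard power-supply sysfs interface. A minimal sketch of reading it - note that the battery name `BAT0` is an assumption, and whether the firmware exposes this knob at all varies by vendor:

```python
from pathlib import Path

def read_charge_threshold(base="/sys/class/power_supply/BAT0"):
    """Return the charge-stop percentage if the firmware exposes it, else None.

    Uses the common kernel attribute name; "BAT0" is an assumption and
    support for this control varies by vendor/driver.
    """
    p = Path(base) / "charge_control_end_threshold"
    if not p.is_file():
        return None  # firmware/driver does not expose this control
    return int(p.read_text().strip())

print(read_charge_threshold())
```

Writing a value (e.g. 80) to the same file as root tells supported firmware to stop charging at that level.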
will charge the battery and then start running directly from the wall-power once the battery is full. They bypass the charging once it is indicated to have a “full charge”.
That does not make sense. Batteries cannot be charged and discharged at the same time - they are either charging, discharging, or neither. When a device is in use while it is plugged in, the device is being run directly from wall power - and anything left over is sent to charge the battery. The only devices that don’t do that are ones that power off while the charger is plugged in - which does not include any laptop that I have ever seen, generally just smaller devices.
Modern laptops have smarter controllers that can turn off charging before the battery is full or when other conditions are met. But none are able to draw power from the battery while the battery is being charged - that just does not make any sense.
Huh? If it can be used while it is charging - which is all laptops since forever - then it will run off the adapter while plugged in, regardless of the battery state. You cannot charge a battery and discharge it at the same time - if it is charging then power must be coming from something other than the battery. Especially with LiPo batteries, which you cannot continue charging after they are full - doing so will cause them to burst into flames. So all LiPo charging circuits will cut off power to the cells once they reach a desired voltage - whether that is considered 100% (aka once it reaches 4.2V) or a configurable lower amount.
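To make the point concrete, here is a toy model of the power path - my own illustration, not any real charge controller: wall power feeds the load first, any surplus charges the battery, charging cuts off at the configured stop level, and the battery only ever covers a shortfall.

```python
def power_flow(wall_w, load_w, battery_pct, stop_pct=100):
    """Toy model of a laptop power path (illustration only, not real firmware)."""
    to_load = min(load_w, wall_w)            # wall power feeds the load first
    surplus = wall_w - to_load               # whatever is left over
    charge_w = surplus if battery_pct < stop_pct else 0  # cut off at stop level
    from_battery = load_w - to_load          # battery only covers a shortfall
    return {"charge_w": charge_w, "from_battery_w": from_battery}

# Plugged in, battery below the stop level: the surplus charges the battery.
print(power_flow(65, 30, 50))               # charge_w 35, from_battery_w 0
# At a configured 80% stop level: charging stops, load still runs off the wall.
print(power_flow(65, 30, 80, stop_pct=80))  # charge_w 0, from_battery_w 0
# Weak charger: the battery covers the shortfall - and never charges at the same time.
print(power_flow(20, 30, 50))               # charge_w 0, from_battery_w 10
```

In every case either `charge_w` or `from_battery_w` is zero - charging and discharging never overlap, which is the whole point.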
You don’t need anywhere near 50% market share to be a valid alternative. If anything, market share has nothing to do with being a valid alternative, except that it is more likely to be the case with higher numbers. Past 50% it is really no longer even the alternative at all - it would be the main choice.
Every OS has paper cuts. You learn to live with them over time as you have no other choice. When you switch OS it cuts in different ways, and they feel fresher than the old ones you had gotten used to over time. It does not matter if you switch from Windows to Linux, Linux to Windows, or to or from macOS. They all have paper cuts.
Instead, it’s about the irretrievable, sunken costs associated with a loss of incompatible software and hardware that the person would no longer be able to use after switching to Linux.
… When Windows has made its latest release incompatible with most existing hardware out there because of some arbitrary requirements. I have not seen any major hardware compatibility issues with Linux in quite a few years now; it is not common at all for some hardware to not work. In less than about a year, Windows is going to make a huge amount of existing hardware unusable for supported versions of Windows. That alone will help with Linux’s market share.
Most arguments in this article are overblown or very outdated. Software compatibility is an issue, but much less than it used to be. Big companies like Adobe and Microsoft which refuse to support Linux are also starting to alienate their user base, making switching more and more appealing, all the while the Linux-friendly alternatives are growing in popularity. And as I said above, hardware is not a big issue these days - and is about to be a big issue for Windows systems.
It does touch briefly on the main point as to why Linux is not very popular ATM:
Most people don’t even know what Linux is because they’ve never seen it pre-installed on a laptop in a store. But I digress.
That is the problem: defaults. Most people don’t care to or want to change their OS, and most people have hardware and workloads that are easily compatible with Linux. It is really only a minority of people that require things that Windows supports better - sadly those are also the types of people more willing to break from the default OS.
The year of the Linux desktop won’t come until we, the Linux community, find a way to balance the cost of switching with the future benefits of daily driving Linux from the perspective of an average user. Until then, Linux will remain more like a niche thing, made by enthusiasts for enthusiasts.
No it won’t. The normal user will only switch when they are forced to by their current system stopping working, or when new hardware comes with Linux by default. The average user is your aunt who uses her computer to log into Facebook or look up recipes online. A professional who requires the Adobe suite is not an average user and only makes up a tiny fraction of the overall userbase. It would be nice to support their workloads, but even if Adobe was fully supported on Linux, that would still only be a fraction more users willing to move. For the average user, it is the defaults their system comes with that make the biggest difference.
Fairly sure that Matt Mullenweg has already completely undermined Wordpress.org’s trust and reliability.
From what I can tell xbps-src are just the source packages to the main repos in Void. That is not what AUR is. We have access to the main repo sources in Arch just like Void. The main thing about AUR is anyone can contribute without any gated approvals. That is the big difference between the main source repos of either distro and AUR. Unless I have misunderstood what xbps is.
but looking at templates they can actually understand its kinda simple script and get the idea of how it works
Same exact idea with PKGBUILDs. No benefit to Void here. The way Void does things will not change whether people look at or understand the packages they install. You have the same opportunities on both systems for looking at the source of packages. So that argument for Void is void :)
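To illustrate just how similar the two formats are, here is a minimal sketch of both - field names written from memory of each format, so treat this as illustrative rather than a real, buildable package:

```shell
# Minimal Arch PKGBUILD (sketch, not a real package)
pkgname=hello
pkgver=2.12
pkgrel=1
pkgdesc="GNU hello"
arch=('x86_64')
url="https://www.gnu.org/software/hello/"
license=('GPL3')
source=("https://ftp.gnu.org/gnu/hello/hello-$pkgver.tar.gz")
sha256sums=('SKIP')
build() { cd "hello-$pkgver"; ./configure --prefix=/usr; make; }
package() { cd "hello-$pkgver"; make DESTDIR="$pkgdir" install; }

# Roughly equivalent Void template - same idea, different field names
# pkgname=hello
# version=2.12
# revision=1
# build_style=gnu-configure
# short_desc="GNU hello"
# maintainer="Example <example@example.com>"
# license="GPL-3.0-or-later"
# homepage="https://www.gnu.org/software/hello/"
# distfiles="https://ftp.gnu.org/gnu/hello/hello-${version}.tar.gz"
```

Both are just shell variable declarations plus a couple of functions/hooks - if you can read one, you can read the other.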
Also void has runit so this mean u have to get more simple programs to run system like seatd dbus and etc.
Not really a good argument either. Systemd and runit are different but that doesn’t make runit better in terms of learning anything. If you want to learn how most Linux systems boot and operate you need to learn systemd as that is what the vast majority of distros use. Learning runit instead only means you are learning a niche way of booting a tiny fraction of systems.
Neither of these arguments is a very strong case for Void over Arch.
xbps-src does not look like an alternative to AUR at all. It looks like Void's alternative to https://gitlab.archlinux.org/archlinux/packaging/packages - where Arch maintains all its packages. Nor is comparing the number of packages in AUR to Void's main repos a good idea - Arch has its own main repos that are a better equivalent. The Void templates do not look dissimilar from what a PKGBUILD file is either, and you can do the same things by writing your own PKGBUILD or pulling them from repos if you really want to. I don’t see how Void is any better than Arch in anything you have described here. IMO it just looks like it does more of the same things with a bit of a difference in syntax/commands you run. Nothing you have said here is really a solid argument for Void over Arch at all.
The AUR is not even that great. I think most people get confused between what is in the AUR and what is in the main packages, since they just use tools like yay that install from both. But most people only use a couple of packages from the AUR - it is the package selection in the main repos that is so nice about Arch. The AUR is just nice for more niche things that have not made it into the main repos yet.
I hope u don’t use AUR blindly and just do yay -S something without looking what pkgbuild is doing, it might be dangerous not knowing what program can do and what script that is downloading it too right?
Same goes for Void? Most people won’t read the source of third-party packages they install, no matter what distro they are on. AUR tooling does try to help with this, but most people ignore it. The same will go for Void. It is not a distro problem - just a humans-are-lazy problem. Plus, even if people did read them, there is only a small subset of people that actually understand them well enough to spot obviously malicious packages - and the group that can spot hidden malicious packages is vastly smaller.
252 of that 592 used memory is buffers/cache, not application memory. That is used by the kernel for kernel buffers and the filesystem cache - i.e. files read by something at some point. The kernel keeps them in memory in case they are needed again, to speed up file reads. You can effectively ignore these values as they will always grow to fill your RAM, and they will be evicted when programs require memory and there is not enough free.
These tools are not lying to you, just telling you something other than what you are reading into them. Tracking and reporting on what is using memory is a complex topic, and here "used" is just what is physically allocated. It doesn’t mean much overall, as it always tends to be high once your system has been running for a decent amount of time. "Available" is typically the more useful one to look at, as it is an estimate of how much the kernel can reclaim right now if an application requests it, without needing to swap things out.
Can you share the output of free? There are multiple values to read from that.
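For reference, the numbers `free` prints come from `/proc/meminfo`. A small sketch of pulling them out yourself - the field names are the standard kernel ones, but the sample values below are made up for illustration:

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style "Key:   12345 kB" lines into a dict of kB values."""
    mem = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields:
            mem[key.strip()] = int(fields[0])
    return mem

# Illustrative sample - on a real system, read open("/proc/meminfo").read() instead.
sample = """MemTotal:        8000000 kB
MemFree:          400000 kB
MemAvailable:    5200000 kB
Buffers:          300000 kB
Cached:          4500000 kB"""

mem = parse_meminfo(sample)
# "Free" looks tiny because cache fills RAM; "available" counts reclaimable cache too.
print(mem["MemFree"], mem["MemAvailable"])
```

This is exactly the free-vs-available distinction above: `MemFree` shrinks as the cache grows, while `MemAvailable` stays large because the cache can be evicted on demand.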
I never thought shaking the bed would cause adhesion issues 🤔 always thought it was far more the head crashing/clipping or scraping the surface of the part while printing.
Restrictive tech never works when you apply it from the start. You need to capture the market first before you can start to apply it. And that is the road Bambu Lab looks to be heading down. It is the classic playbook:
One step closer to DRM filament spools - just like the overpriced ink cartridges of 2D printers. The safety and security arguments are always bullshit; this is only about control over what you can do. No other printer has ever had an issue with safety or security with vastly more open designs.
There is in this case, which is why Linus did accept the patch in the end. Previous cases less so, though, which is why Linus is so pissed about this one.
The reason for this new feature is to help fix data loss on users’ systems - which is a fine line between a bug and a new feature, really. There is precedent for this type of thing in RC releases from other filesystems as well. So the issue in this instance is a lot less black and white.
That doesn’t excuse previous behaviour though.