• 7 Posts
  • 209 Comments
Joined 2 years ago
Cake day: August 10th, 2023



  • There are a few apps that I think fit this use case really well.

    LanguageTool is a spelling and grammar checker with a server-client model. LibreOffice now has built-in LanguageTool integration, where it can access a server of your choosing. I point it at the server I run locally, since Arch Linux packages LanguageTool.

    Another is Stirling-PDF. This is a really good, well-liked PDF manipulation program that ships as a server with a web interface.
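
    Both can be run with a short compose file. A minimal sketch, with the caveat that the image names and ports are assumptions based on popular community images, not anything from the comment above:

    ```yaml
    services:
      languagetool:
        image: erikvl87/languagetool    # community LanguageTool server image (assumption)
        ports:
          - "8010:8010"                 # point LibreOffice at http://localhost:8010
        restart: unless-stopped
      stirling-pdf:
        image: frooodle/s-pdf           # Stirling-PDF server with its web UI (assumption)
        ports:
          - "8080:8080"
        restart: unless-stopped
    ```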




  • I’ve seen three cases where the Docker socket gets exposed to the container (perhaps there are more, but I haven’t seen any); a minimal sketch of what that mounting looks like follows the list:

    1. Watchtower, which does auto updates and/or notifies people

    2. Nextcloud AIO, which uses a management container that controls the Docker socket to deploy the rest of the stack Nextcloud wants.

    3. Traefik, which reads the docker socket to automatically reverse proxy services.
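
    In all three cases the pattern is the same: bind-mount the Docker socket into the container. Here it is with Watchtower (containrrr/watchtower is the real image; the rest is a bare-bones assumption):

    ```yaml
    services:
      watchtower:
        image: containrrr/watchtower
        volumes:
          # Full access to the Docker API: Watchtower can now pull images
          # and restart any container on the host.
          - /var/run/docker.sock:/var/run/docker.sock
        restart: unless-stopped
    ```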

    Nextcloud does the AIO because Nextcloud is a complex service, and it only grows more complex if you want more features or performance. The AIO handles deploying all the tertiary services for you, but something like this is how you would do it yourself: https://github.com/pimylifeup/compose/blob/main/nextcloud/signed/compose.yaml . Also, that example docker compose does not include other services like Collabora Office, the web-based alternative to Google Docs/Sheets/Slides.

    Compare this to the Kubernetes deployment, which, yes, may look intimidating at first. But actually, many of the complexities that the Docker deployment of Nextcloud has are automated away. Enabling Collabora Office is just collabora.enabled: true in its configuration. Tertiary services like Redis or the database are included in the Kubernetes package as well. Instead of making you configure the containers themselves, it lets you configure the database parameters via YAML, and other nice things.
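
    A hypothetical values.yaml for the nextcloud Helm chart; the key names follow that chart’s documented values as I understand them, so double-check them against the chart version you actually deploy:

    ```yaml
    collabora:
      enabled: true          # deploys the web office alongside Nextcloud
    redis:
      enabled: true          # caching / file locking
    internalDatabase:
      enabled: false         # use a real database instead of SQLite
    postgresql:
      enabled: true
      auth:
        username: nextcloud
        database: nextcloud
    ```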

    For case 3, Kubernetes has a feature called an “Ingress”, which is essentially a standardized configuration for a reverse proxy; you can either run the reverse proxy separately, or one is provided as part of the package. For example, the Nextcloud Kubernetes package I linked above has a way to handle ingresses in its config.
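
    To make that concrete, this is roughly what a minimal standalone Ingress object looks like (the hostname, service name, and port are placeholders, not values from the Nextcloud chart):

    ```yaml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: nextcloud
    spec:
      rules:
        - host: cloud.example.com        # placeholder hostname
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: nextcloud      # placeholder Service
                    port:
                      number: 8080
    ```

    The ingress controller (Traefik, ingress-nginx, whatever you run) watches these objects and configures itself from them, which replaces the docker-socket reading that Traefik does in case 3.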

    Kubernetes handles these things pretty well, and it’s part of why I switched. I do auto-upgrade, but only within each service’s supported stable release, where upgrades are compatible and won’t break anything. This gets me automatic security updates for a period of time, before I have to do a manual and potentially breaking upgrade.

    TLDR: You are asking questions that Kubernetes has answers to.





  • I’ll say it again and again. The problem is neither Linus nor Kent, but the lack of resources independent developers have for the kind of testing that is expected of the big corporations.

    Like, one of the issues that Linus yelled at Kent about was that bcachefs would fail on big-endian machines. You could spend your limited time and energy setting up an emulator of the PowerPC architecture, or you could buy the real hardware at pretty absurd prices; I checked eBay, and it was $2000 for 8 GB of RAM…

    But the big corpos are different. They have massive CI/CD systems that automatically build and test Linux on every architecture under the sun. Then they have an extra, internal review process for these patches. And only then do they push.

    But Linux isn’t like that for independent developers. What they do is compile the kernel on their own machine, boot into it, and if it works, it works. This is how some of the Asahi developers would do it, just booting into their new kernel on their Macs, and it’s how I’m assuming Overstreet does it. Maybe there is some minimal testing involved.

    So Overstreet gets confused when he’s yelled at for not having tested on big-endian architectures, because where is he supposed to get a big-endian machine he can afford that can actually compile the Linux kernel in less than 10 years? And even if you do buy or emulate a big-endian CPU, you’ll just get hit with “yeah, your patch has issues on machines with 2 terabytes or more of RAM”, and so on.

    One option is to drop standards. The Asahi developers were allowed to merge code without being subjected to the scrutiny that Overstreet has been subjected to. This was partly due to having their stuff in Rust, under the Rust subsystem, which gave them a lot more control over the parts of Linux they merged into. The other part was being specific to MacBooks: there’s no point testing the MacBook-specific patches on non-Mac CPUs.

    But a better option is to make the testing resources that these corporations use available to everybody. I think the Linux Foundation should spin up a CI/CD service, so people like Kent Overstreet can test their patches on architectures and setups they don’t have at home, and get them reviewed before they are dumped onto the mailing list, exactly like what happens at the corporations that contribute to the Linux kernel.
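
    The building blocks for this already exist in public CI. A hypothetical GitHub Actions sketch that smoke-tests code under an emulated big-endian (s390x) userspace; the workflow shape is standard, but the make test target and project layout are assumptions, and real kernel CI would be far heavier than this:

    ```yaml
    name: big-endian-smoke-test
    on: [push]
    jobs:
      s390x:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: docker/setup-qemu-action@v3   # registers binfmt handlers for foreign architectures
          - name: Build and test inside an emulated s390x container
            run: |
              docker run --rm --platform linux/s390x -v "$PWD:/src" -w /src \
                debian:stable bash -c "apt-get update && apt-get install -y build-essential && make test"
    ```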






  • Many Helm charts, like authentik or forgejo, integrate the Bitnami Helm charts for their databases. That’s why this is concerning to me.

    But I was planning to switch to operators like CloudNativePG for my databases anyway, and disable the built-in Bitnami images. With the built-in Bitnami images, automatic migration between major releases is not supported; you have to do it manually yourself, which disappointed me.
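
    For reference, a minimal CloudNativePG cluster is just a small custom resource. The apiVersion and kind below are from the CloudNativePG docs, while the name and sizes are placeholders:

    ```yaml
    apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    metadata:
      name: authentik-db     # placeholder name
    spec:
      instances: 2           # primary + one replica
      storage:
        size: 10Gi
    ```

    The operator then handles provisioning and failover at the operator level, instead of every application chart vendoring its own database image.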


  • Does that happen often? I had, apparently incorrectly, assumed those things were more or less fire and forget.

    Bootloaders are also software affected by vulnerabilities (CVEs). But this comment did make me curious: would a person with threat model/use case 1 from my comment above care about the CVEs that affect GRUB?

    Many of them do indeed seem to be non-issues, going off the list here.

    GRUB CVEs requiring the config file to be malicious, like this one, are pretty much non-issues. The config file is encrypted, in my setup at least (but again, that’s not the default; I also don’t know whether the config file is signed/verified).

    I think this one is somewhat concerning: plugged-in USB devices could corrupt GRUB.

    Someone could possibly do something similar with hard drives, replacing the one in the system. The big theoretical vulnerability I worry about is someone crafting a partition in such a way that it achieves RCE through GRUB. Or maybe it’s already happened; my research isn’t that deep. With such a vulnerability, someone could shrink the EFI partition, put another partition there that GRUB reads, and then the code-execution exploit happens.

    But honestly, if someone can replace or modify hard drives, or add and remove USB devices, what stops them from replacing your entire motherboard with a malicious one? This is very difficult to defend against, but you could detect it by password-protecting your motherboard firmware and logging into it on every boot to make sure the password is the same. (Although perhaps someone could copy the hashes over from one motherboard to another, assuming the passwords are hashed.)

    But if something like that is in your threat model, it’s important to note that Ethernet and much other firmware is proprietary (meaning you cannot audit or modify the code), and also has what’s called “DMA”: direct memory access. Such devices can read and write the Linux kernel’s memory with permissions higher than root. So if I have access to your device, I could replace your Wi-Fi card with a malicious one that modifies things after you boot, or does anything else.

    What you are supposed to do is prevent tampering in the first place, or, for a much lower cost, have “tamper-evident protection”: things that inform you if the system was tampered with. Stickers over the screws are an easy and cheap example…

    But DefCon has a village dedicated to breaking tamper evident protection. Lol.

    I think if your adversary is a nation-state, Secure Boot use case 1 is simply broken and doesn’t work. It’s too easy for them to replace any of the physical components with malicious ones, because there is no verification of those. I think Secure Boot use case 1 is for protecting against corporate espionage at mid- to high-tier corpos. Corporations also tend to issue people devices, and they can ensure those devices have tamper evidence/tamper resistance on top of Secure Boot. Of course, I think a nation-state can get through those too, but I don’t think that’s included in the threat model.

    Nation-states can easily break the Secure Boot system, and probably have methods in addition to or separate from Secure Boot for protecting themselves.

    Wait what, that just seems like home directory encryption with extra steps 🤦 I guess I’ll go back to Veracrypt then.

    Performance on LUKS might be better since LUKS is a first-class citizen, but maybe performance with VeraCrypt is better since only the home directory is encrypted. I tried DuckDuckGo, but the top results were AI slop with no benchmarks, so I’m not going to bother doing further research.


  • I’m on my phone rn and can’t write a longer post. This comment is to remind me to write an essay later. I’ve been using authentik heavily for my cybersecurity club and have a LOT of thoughts about it.

    The TLDR about authentik’s risk of enshittification is that authentik follows a pattern I call “supportware”: extremely (intentionally/accidentally) complex software that (intentionally/accidentally) leaves edge cases out of its docs, because you are supposed to pay for support.

    I think this is a sustainable business model, and I think Keycloak (and other Red Hat software) has some similar patterns.

    The TLDR about authentik itself is that it has a lot of features, but not all of them are relevant to your use case, or worth the complexity. I picked up authentik for invites (which afaik are rare; also, the official docs about setting up invites were wrong, see supportware), but invites may not be something you care about.

    Anyway, longer essay/rant later. Despite my problems, I still think authentik is the best for my use case (cybersecurity club); the other options I’ve looked at, like Zitadel (seems more developer-focused) or LDAP + an SSO service (no invites afaik), fall short of that.

    Sidenote: Microsoft Entra offers features similar to what I want from authentik, but I wanted to self-host everything.


  • but wouldn’t only the bootloader need to be signed

    So the bootloader also gets updated, and new versions of the bootloader need to be signed. If the BIOS is responsible for verifying the bootloader’s signature, then how does the operating system update the bootloader?

    To my understanding a tamper-proof system already assumes full disk-encryption anyway

    Kinda. The problem here, IMO, is that Secure Boot conflates two use cases/threat models into one:

    1. I am a laptop owner who wants to prevent tampering with the software on my system by someone with physical access to my device
    2. I am a server operator who wants to enforce usage of only signed drivers and kernels. This locks down modification/insertion of drivers and kernels as a method of planting a rootkit on my servers.

    The second person does not use full disk encryption, and doesn’t really care about physical security at all (because the server racks are physically locked up).

    What happens in this setup is that the bootloader checks the kernel’s signature, and the kernel checks the drivers’ signatures… and they enable this feature depending on whether or not the Secure Boot EFI variable is set. So this feature isn’t actually tied to the motherboard’s ability to verify the bootloader; for example, GRUB has its own signature verification that can be enabled separately.

    The first person’s threat model does not include malware already inside the running system. So they can enable full disk encryption, and then they shouldn’t need to care about the kernel and drivers being signed.

    EXCEPT THEY ACTUALLY DO, BECAUSE NOBODY SHIPS A SETUP WHERE THE KERNELS AND DRIVERS ARE ENCRYPTED BY DEFAULT.

    You must explicitly ask for this setup in the Linux distro installers (at least all the ones I’ve used). By default, /boot, where the kernel and drivers are stored, sits unencrypted on a separate external partition, not inside the LUKS-encrypted partition.

    What I do is have /boot/efi be the external EFI partition. /boot/efi is where the bootloader is installed, and the kernels are stored in /boot, which is located on my encrypted BTRFS partition. The GRUB bootloader is the only unencrypted part of my system, like the setup you suggested. But I had to ask for this by changing the partitioning scheme on CachyOS, and on other distros I used before it.

    An interesting quirk of this setup is that GRUB cannot see the config it needs to boot. It guesses at which disk it should decrypt, and if I have a USB drive plugged in, it guesses wrong and my system won’t boot.

    Continuing: the problem with setups like this is that in order to verify the bootloader, you must have Secure Boot enabled. GRUB will then read the Secure Boot EFI variable and attempt to verify the kernels and drivers. As far as I can tell, there is no way to disable this other than changing the source code or binary-patching GRUB.

    I have a blog post where I explored this: https://moonpiedumplings.github.io/playground/arch-secureboot/index.html

    So this means that even in setups where everything except GRUB is encrypted, you still have to sign the kernels and drivers in order to have a bootable system (unless you patch GRUB).

    I eventually decided that this wasn’t worth it, and gave up on secure boot for now.



  • So Signal does not have reproducible builds, which is very concerning security-wise. I talk about it in this comment: https://programming.dev/post/33557941/18030327 . The TLDR is that no reproducible builds = no way to verify that you are getting an unmodified version of the client.

    Centralized servers compound these security issues and make them worse. If the client is vulnerable to some form of replacement attack, then whoever controls the server could use a much more subtle, difficult-to-detect backdoor, like a weaker crypto implementation that leaks metadata/user data.

    With decentralized/federated services, if a client is using servers other than the “main” one, you either have to compromise both the client and the server, or compromise the client in a very obvious way that causes it to send extra data to servers it shouldn’t be sending data to.

    A big part of the problem comes from what GitHub calls “bugdoors”: “accidental” bugs that act as backdoors. With a centralized service it becomes much easier to introduce bugdoors, because all the data routes through one service, which could then silently take advantage of the bug on its own servers.

    This is my concern with Signal being centralized. But mostly I’d say don’t worry about it, threat model and all that.

    I’m just gonna @ everybody who was in the conversation. I posted this top level for visibility.

    @[email protected] @[email protected] @[email protected] @[email protected] @[email protected]

    EDIT: elsewhere in the thread, what was probably a nation-state wiretapping attempt on an XMPP service is discussed: https://www.devever.net/~hl/xmpp-incident

    For a similar threat model, Signal is simply not adequate, for the reasons I mentioned above, and that’s probably what poqVoq was referring to when he mentioned it being discussed here.

    The only timestamps shared are when they signed up and when they last connected. This is well established by court documents that Signal themselves share publicly.

    This, of course, assumes I trust the courts. But if I am seeking maximum privacy/security, I should not have to do that.