• 8 Posts
  • 237 Comments
Joined 2 years ago
Cake day: August 10th, 2023

  • Share your lsblk output. It’s likely that your system still leaves the bootloader unencrypted on the disk, even if the kernel and bootloader config are encrypted (and by default on most installs, they aren’t either).

    It is theoretically possible to have only one partition, fully LUKS-encrypted, but this requires storing the bootloader in the UEFI firmware, and not all motherboards support that, so distros usually just install it to an unencrypted partition. The UEFI needs to be able to read an unencrypted bootloader from somewhere. That’s why it’s somewhat absurd to claim that the ESP can be encrypted: it simply can’t be.
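
    A quick way to check is to look at the filesystem types. The device names and layout in the comments below are illustrative, not from any real system; yours will differ:

```shell
# List block devices with filesystem types; on a typical LUKS setup
# the ESP shows up as plain vfat while the root is crypto_LUKS.
lsblk -o NAME,FSTYPE,MOUNTPOINT

# Illustrative output only (hypothetical devices):
# NAME          FSTYPE      MOUNTPOINT
# nvme0n1
# |-nvme0n1p1   vfat        /boot/efi    <- unencrypted ESP
# `-nvme0n1p2   crypto_LUKS
#   `-root      ext4        /
```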

    From your link:

    One difference is that the kernel and the initrd will be placed in the unencrypted ESP,


  • Project Zero

    Project Zero was entirely humans though, no GenAI. Project Big Sleep has been reliable so far, but there is no real reason for FFmpeg developers to value Big Sleep’s 6.0 CVEs over potentially real, more critical CVEs. The problem is that Google’s security team would still be breathing down these developers’ necks, demanding fixes for the vulns they submitted, which is kinda BS when they aren’t chipping in at all.

    Anyway there’s a big difference between submitting concrete input data that causes an observable crash, and sending a pile of useless spew from a static analyzer and saying “here, have fun”

    Nah, the actually fake bug reports also often have fake “test cases”. That’s what makes the LLM generated bug reports so difficult to deal with.


  • With a concrete bug report like “using codec xyz and input file f3 10 4d 26 f5 0a a1 7e cd 3a 41 6c 36 66 21 d8… ffmpeg crashes with an oob memory error”, it’s pretty simple to confirm that such a crash happens

    Google’s Big Sleep was pretty good: it gave a Python program that generated an invalid file. It looked plausible, and it was a real issue. The problem is that literally every other generative-AI bug report looks equally plausible. As I mentioned before, curl is having a similar issue.
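
    To be fair, reproducing a single concrete report of that shape really is mechanical; the hard part is the volume. A hypothetical sketch, using only the bytes shown in the quote above (the rest of the file is elided there, so this is illustrative, not a working PoC):

```shell
# Write the reported bytes to a file. `env printf` forces the coreutils
# printf, which supports \xHH escapes.
env printf '\xf3\x10\x4d\x26\xf5\x0a\xa1\x7e\xcd\x3a\x41\x6c\x36\x66\x21\xd8' > poc.bin

# Then feed it to a sanitizer-instrumented ffmpeg build and watch for the
# OOB report (illustrative invocation; assumes an ASan build is on PATH):
# ffmpeg -i poc.bin -f null -
```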

    And here’s what the lead maintainer of curl has to say:

    Stenberg said the amount of time it takes project maintainers to triage each AI-assisted vulnerability report made via HackerOne, only for them to be deemed invalid, is tantamount to a DDoS attack on the project.

    So you can claim testing may be simple, but it looks like that isn’t the case. I would say one of the problems is that all these people are volunteers, so they have a very, very limited amount of time to spend on these projects.

    This was the first search hit about ffmpeg CVEs, from June 2024 so not about the current incident. It lists four CVEs, three of them memory errors (buffer overflow, use-after-free), and one off-by-one error. The class of errors in the first three is supposedly completely eliminated by Rust.

    FFmpeg is not just C code; large portions are handwritten, ultra-optimized assembly (per architecture, too…). You are free to rewrite it in Rust if you so desire, but I stated it above and will state it again: ffmpeg traded security for performance. Rust currently isn’t as performant as optimized C code, and I highly doubt that even unsafe Rust can beat hand-optimized assembly (C can’t, anyway).

    (Google and many big tech companies like ultra-performant projects because performance equals power savings equals cost savings at scale. But this means weaker security when it comes to projects like ffmpeg…)


  • This is so sad, because Rust is kinda the perfect example of a game where moderators or deputization could handle cheaters. Instead of a matchmaking system, you just join a server and play there. Why not ensure those servers have active moderators to ban cheaters?

    I stopped gaming (for now?), but I’m still really fond of what happened with SCP: Secret Laboratory, which had 20-40 player lobbies. There would almost always be a mod online, and I could get cheaters kicked almost instantly by reporting them in the menu; a mod would spectate them, and then they would get banned.

    Rust seems to have more players per server (a quick search says some of the extra mega ultra large servers go up to 900 people), but it does have a distinct server model, with admins and mods.

    EDIT: the other fun part of having active and actually good mods was when they ran fun events. Like I remember they set up a sharks-and-minnows type game mode instead of the regular stuff. Fun times.


  • AI tools were apparently used for locating the bugs but the reports were real and legit.

    Yes, but the FFmpeg developers do not know this until after they triage all the bug reports they are getting swamped with. If Google really wants a fix for their 6.0 CVE immediately (because again, part of the problem here was Google’s security team breathing down the necks of the maintainers), then Google can submit a fix. Until then, ffmpeg devs have to keep figuring out whether the more critical-looking issues they receive are actually critical.

    It’s nuts to suggest continuing to ship something with known vulnerabilities without, at minimum,

    Again, the problem is false-positive vulnerabilities. “9.0 CVEs” (that are potentially real) must be triaged before Google’s 6.0 CVE.

    It would be great if Google could fix it, but ffmpeg is very hard to work in, not just because of the code organization but because of the very specialized knowledge needed to mess around inside a codec. It would be simpler and probably better for Google to contribute development funding since they depend on the software so heavily.

    Except Google does fix these issues and contribute funding. Summer of Code, bug bounties, and other programs piloted by Google contribute both funding and fixes to these projects. We are mad because Google has paid for more critical issues in the past, but all of a sudden they are demanding free labor for medium-severity security issues from swamped volunteers.

    Being able to find bugs (say by fuzzing

    Fuzzing is great! But Google’s Big Sleep project is GenAI-based. Fuzzing is part of its process, but the inputs and outputs are not significantly distinct from the other GenAI reports that ffmpeg receives.

    Those approaches would be ridiculous bloat, the idea is just supply some kind of wrapper that runs the codec in a chrooted separate process communicating through pipes under ptrace control or however that’s done these days.

    Chroot only works on Linux/Unix and requires root to use, making it unusable in rootless environments. Every single sandboxing tool comes with some form of tradeoff, and it’s not ffmpeg’s responsibility to make those decisions for you or your organization.

    Anyway, sandboxing on Linux is basically broken when it comes to high-value targets like Google. I don’t want to go into detail, but I would recommend reading madaidan’s insecurities (I mentioned gVisor earlier because gVisor is Google’s own solution to flaws in existing Linux sandboxing). Another problem is that the ffmpeg people probably care about performance a lot more than security. They made the tradeoff, and if you want to undo it, it’s not really their job to make that decision for you. It’s not a binary, but more like a sliding scale, and “secure enough for Google” is not the same as “secure enough for the average desktop user”.
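
    To be concrete about that sliding scale: for an average-desktop threat model, even a crude rootless wrapper (separate process, pipes, resource limits) buys you something, while a Google-grade setup would layer on seccomp/namespaces via something like bwrap or gVisor. A sketch under that assumption, nothing more:

```shell
#!/bin/sh
# Hypothetical wrapper: run an untrusted decode in a separate process,
# talking only through pipes, with CPU/memory/wall-clock caps.
# This is NOT real isolation; it only limits the blast radius of a
# runaway or crashing codec.
run_limited() {
    (
        ulimit -t 10          # cap CPU time at 10 seconds
        ulimit -v 524288      # cap address space at ~512 MB (KB units)
        exec timeout 15 "$@"  # hard wall-clock cutoff
    )
}

# Illustrative usage: decode untrusted input via pipes only.
# run_limited ffmpeg -i - -f rawvideo - < untrusted.bin > frames.raw
```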

    I saw earlier you mentioned Google keeping vulnerabilities secret and using them against people or something like that, but it just doesn’t work that way lmao. Google is such a large and high-value organization that they essentially have to treat every employee as a potential threat, so “keeping vulns internal” doesn’t really work. Trying to keep a vulnerability internal will 100% result in it getting leaked and then used against them.

    It’s nuts to suggest continuing to ship something with known vulnerabilities without, at minimum, removing it from the default build and labelling it as having known issues. If you don’t have the resources to fix the bug that’s understandable, but own up to it and tell people to be careful with that module.

    You have no fucking clue how modern software development and deployment works. Getting rid of all CVEs is actually insanely hard, something that only orgs like Google can reasonably do, and even Google regularly falls short. The vast majority of organizations and institutions have given up on eliminating CVEs from the products they use. “Don’t ship software with vulnerabilities” sounds good in a vacuum, but the reality is that most people simply settle for something secure enough for their risk level. I bet if you go through any piece of software on your system right now you can find CVEs in it.

    You don’t need to outrun a hungry bear, you just need to outrun the person next to you. Cybersecurity is about risk management, not risk elimination. You can’t afford risk elimination.


  • Yeah. I’m seeing a lot of it in this thread tbh. People are styling themselves as IT admins or cybersec people rather than just hobbyists. Of course, maybe they do do it professionally as well, but I’m seeing an assumption from some people in this thread that it’s dangerous to self-host even if you don’t expose anything, or they are assuming that self-hosting implies exposing stuff to the internet.

    Tailscale into your machine, then be done with it, and otherwise only access it via the local network or a VPN.

    Now, about actually keeping the services secure, beyond just having them on a private subnet and not really worrying about them. To be explicit, this refers to fully/partially exposed setups (like VPN access for a significant number of people).

    There are two big problems IMO: Default credentials, and a lack of automatic updates.

    Default credentials are pretty easy to handle. Docker Compose YAML files put the credentials right there; just read them and change them. It should be noted that you should still do this even if you are using GUI-based deployment.
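
    For example, instead of keeping whatever password ships in the compose file, generate one. A sketch: `POSTGRES_PASSWORD` is a stand-in for whatever variable your service’s compose file actually reads.

```shell
# Generate a random 24-character credential from the kernel CSPRNG and
# drop it into an .env file (Compose interpolates ${VARS} from .env
# when it sits next to the YAML file).
PW="$(head -c 18 /dev/urandom | base64 | tr -d '\n')"
printf 'POSTGRES_PASSWORD=%s\n' "$PW" > .env
chmod 600 .env   # keep the secret out of other local users' reach
```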

    This is where Docker has really held the community back, in my opinion: it lacks automatic updates. Services like Watchtower exist to automatically update containers, but things like databases or config file schemas don’t get migrated to the next version, which means the next version can break things, and there is no guarantee of stability between two versions.

    This means that most users, after deploying with the docker-compose method recommended by the software, are required to manually log in every so often and run docker compose pull and up to update. Sometimes they forget. Combine this with Shodan/ZoomEye (search engines for internet-connected devices) and you will find plenty of people who forgot, because Docker punches stuff through firewalls as well.
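
    The manual routine is just this, which is exactly why it gets forgotten. Paths are illustrative, and note that this updates images only, not schemas or config formats:

```shell
# Pull newer images and recreate any containers whose image changed.
cd /srv/myapp
docker compose pull --quiet
docker compose up -d
docker image prune -f   # drop the superseded images

# Or, hypothetically, as a nightly crontab entry:
# 0 4 * * * cd /srv/myapp && docker compose pull -q && docker compose up -d
```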

    GUIs don’t really make it easy to follow this process either. Docker GUIs are nice, but now you have users who don’t realize that Docker apps don’t update themselves, and that they probably should be doing it. Same issue with YunoHost (which doesn’t use Docker, which I just learned today. Interesting).

    I really like Kubernetes because it lets me do automatic upgrades (within limits) of services. But this comes at an extreme complexity cost. I have to deploy another piece of software on top of Kubernetes to automatically upgrade the applications. And then another to automatically do some of the database migrations. And no GUI would really free me from this complexity, because you end up needing such an understanding of the system that a pretty interface doesn’t really save you.

    Another commenter said:

    20 years ago we were doing what we could manually, and learning the hard way. The tools have improved and by now do most of the heavy lifting for us. And better tools will come along to make things even easier/better. That’s just the way it works.

    And I agree with them, but I think things kinda stalled with Docker, as its limitations have created barriers to making things easier. The tools that try to make things “easier” on top of Docker basically haven’t done their job, because they haven’t offered auto updates, or reverse proxies, or abstracted away the knowledge required to write YAML files.

    Share your project. Then you’ll hear my thoughts on it. Although without even looking at it, my opinion is that if you have based it on Docker and decided to simply run docker-compose on YAML files under the hood, you’ve kinda already fucked up, because you haven’t actually abstracted away the knowledge needed to use Docker; you’ve just hidden it from the user. But I don’t know what you’re doing.

    Your service should have:

    • A lack of static default credentials. The best way is to autogenerate them.
      • You can also force users to set their own, but this is less secure than machine generated imo
    • Auto updates: I don’t think docker-compose is going to be enough.

    Further afterthoughts:

    Simple in implementation is not the same thing as simple in usage. Simple in implementation also means easy to troubleshoot, since there are fewer moving parts when something goes wrong.

    I think operating tech isn’t really that hard, but I think there is a “fear” of technology, where whenever anyone sees a command line, or even just some prompt they haven’t seen before, they panic and throw a fit.

    EDIT and a few thoughts:

    Adding further thoughts to my second afterthought, I can provide an example: I installed an adblocker (uBlock Origin) for my mom. It blocked a link-shortening site. My mom panicked, calling me over, even though the option to temporarily unblock the site was right there, clear as day.

    I think that GUI projects overestimate the skill of normal users, while underestimating the skill of those who actually use them. I know people who use a GUI for stuff like this because it’s “easier”, but when something under the hood breaks, they can go in and fix it in 5 minutes, whereas an actual beginner could spend two weeks on it with no progress.

    I think a good option is to abstract away configuration with something akin to nix-gui. It’s important to note that this doesn’t actually make things less “complex” or “easier” for users. All the configs and dials they will have to learn and understand are still there. But for some reason, whenever people see “code” they panic and run away. But when it’s a textbox in a form or a switch, they will happily figure everything out. And then when you eventually hit them with the “HAHA, you’ve actually been using this tool that you would have otherwise run away from all along”, they will be chill, because they recognize all the dials to be the same, just presented in a different format.

    Another afterthought: if you are hosting something for multiple users, you should make sure their passwords are secure somehow. Either generate and give them passwords/passphrases, or use something like Authentik with single sign-on where you can enforce strong passwords. Don’t let your users just set any password they want.



  • It might be appropriate for ffmpeg to get rid of such obscure codecs

    This is why compilation flags exist. You can compile software without certain features, and that code is removed, decreasing the attack surface. But it’s not really ffmpeg’s job to tell you which compilation flags you should pick; that is the responsibility of the people integrating and deploying it into their systems (Google).
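
    FFmpeg’s own configure script supports exactly this. The flags below are real configure options, but the specific codec selection is illustrative; a deployer would enable whatever their product actually needs:

```shell
# Build an FFmpeg with everything off, then opt back in to the few
# pieces the deployment needs; code that isn't built can't be an RCE.
./configure --disable-everything \
            --enable-decoder=h264 --enable-decoder=aac \
            --enable-demuxer=mov \
            --enable-protocol=file
make -j"$(nproc)"
```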

    Sandbox them somehow so RCE’s can’t escape from them, even at an efficiency cost

    This is similar to the above. It’s not really ffmpeg’s job to pick a sandboxing software (docker, seccomp, selinux, k8s, borg, gvisor, kata), but instead the responsibility of the people integrating and deploying the software.

    That’s why it’s irritating when these companies whine about stuff that should be handled by the above two practices, asking for immediate fixes via their security programs. Half of our frustration is them asking volunteers to promptly fix CVEs with a score below 6 (while simultaneously being willing to contribute fixes or pay for higher-scoring CVEs under their bug bounty programs). This is a very important thing to note. In further comments, you seem to be misunderstanding the relationship Google and ffmpeg have here: Google’s (and other companies’) security program applies pressure to fix vulnerabilities promptly. This is not the same thing as “here’s a bug, fix it at your leisure”. Dealing with this pressure is tiring and burns maintainers out.

    The other half is when they reveal that their security practices aren’t up to par when they whine about stuff like this and demand immediate fixes. I mean, it says it in the article:

    Thus, as Mark Atwood, an open source policy expert, pointed out on Twitter, he had to keep telling Amazon to not do things that would mess up FFmpeg because, he had to keep explaining to his bosses that “They are not a vendor, there is no NDA, we have no leverage, your VP has refused to help fund them, and they could kill three major product lines tomorrow with an email. So, stop, and listen to me … ”

    Anyway, the CVE being mentioned has been fixed, if you dig into it: https://xcancel.com/FFmpeg/status/1984178359354483058#m

    But it really should have been fixed by Google, since they brought it up, because there is no real guarantee that volunteers will fix it again in the future; burnt-out volunteers will just quit instead. libxml2 decided to just straight up stop doing responsible disclosure because they got tired of people asking them to fix vulnerabilities with free labor, and now files all security issues as ordinary bug reports that get fixed when maintainers have the time.

    The other problem is that the report was AI-generated, and part of the issue here is that ffmpeg (and curl, and a few other projects) have been swamped with false positives. These AIs generate a security report that looks plausible, maybe even with a non-working PoC. This wastes a ton of volunteer time, because they have to spend a lot of time filtering through these bug reports and figuring out what’s real and what is not.

    So of course ffmpeg is not really going to prioritize the 6.0 CVE when they are swamped with all of these potentially real “9.0 UlTrA BaD CrItIcAl” CVEs and have to figure out whether any of them are real before even doing work on them.