• 3 Posts
  • 109 Comments
Joined 9 months ago
Cake day: December 16th, 2024



  • That’s interesting I hadn’t thought about the JSON angle! Do you mean that you can actually use jq on regular command outputs like ls -l?

    No, you need to be using a tool that has JSON output as an option. These are becoming more common, but I think still rare among the GNU coreutils. ls output especially is effectively unparseable; as in, there are tons of resources telling people not to script against it because it’s pretty much guaranteed to break.
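
    A couple of tools that do offer it, for illustration (assuming util-linux’s lsblk, which has a --json flag):

      # lsblk can emit JSON, which jq can then walk like a real data structure
      lsblk --json | jq -r '.blockdevices[].name'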


  • I’ve been using fish (with starship for prompt) for like a year I think, after having had a self-built zsh setup for … I don’t know how long.

    I’m capable of using awk, but only in a very simple way; I generally prefer being able to use jq. IMO both awk and perl are sort of remnants of the age before JSON became the standard text-based structured data format. We used to have to write a lot of dinky little regex-based parsers in Perl to extract data; these days we likely get JSON and can operate on actual data structures (see the sketch at the end of this comment).

    I tried nu very briefly, but I’m just too used to POSIX-ish shells to bother switching to another model. For scripting I’ll use #!/bin/bash with set -euo pipefail, but I very quickly switch to Python if it looks like it’s going to have any sort of serious logic.

    My impression is that there are likely more of us who’d like a less wibbly-wobbly, better shell language for scripting purposes, but that efforts to design such a language very quickly go in the direction of nu and oil and whatnot.
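
    To illustrate the regex-vs-JSON point above, here’s the same extraction done the old way and the new way (a sketch assuming iproute2’s ip, which can emit JSON):

      # the old way: scrape interface names out of human-oriented output with a regex
      ip addr | perl -ne 'print "$1\n" if /^\d+: (\S+):/'

      # the new way: ask for JSON and query the actual data structure
      ip -json addr | jq -r '.[].ifname'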


  • Isn’t that just nitpicking?

    No, because the definitions are phrased very differently. Software doesn’t have to be copyleft to be considered FOSS either, as is the case with tons of BSD and MIT and whatnot code that’s used in proprietary programs: all they have to do is make it clear that they’re using that software (and even that isn’t always required).

    Even with copyleft licenses like the GPL, as long as they never distribute the software to anyone they don’t have to offer anyone the source code either, as with so many backends. The AGPL gives users who interact with the software over a network some more rights.

    Free software is mostly about providing you rights when you encounter the source code, meaning that you’re allowed to modify it and share it. This is as opposed to stuff like “source available” licenses that permit you to read the source code, but not modify or share it.


  • Such a license would be regarded as neither free software nor open source.

    Another alternative could be making GPL-3.0-or-later plus a Contributor License Agreement a more common option, so that it’s possible to tell companies that if they want to use the library in some closed-source application, they need to work out a license deal.

    CLAs are frequently involved in turning software proprietary though, so they aren’t exactly held in the highest esteem in the FOSS community.

    And without a CLA you essentially get the Linux kernel situation: stuck on GPLv2 forever, since they can’t reasonably get everyone to agree to switch to GPLv3, especially since some copyright holders aren’t just unwilling, but unreachable or dead (and in several jurisdictions copyright lasts for decades after death).

    Personally I suspect public funding, similar to science, education and libraries, is a more likely option, though that’ll be an uphill political struggle in a lot of places.


  • Yep. I wonder if that CRA compliance stuff won’t change that. Industries with strict demands on safety should be putting in work and resources to ensure that those demands are actually met, but how the CRA deals with FOSS took a bit of work to not be a complete disaster, and I can’t imagine it’s easy for FOSS projects to work out the details there.

    As in:

    1. The automotive industry absolutely should be CRA compliant,
    2. it’d be nice for everyone if cURL was known to be CRA compliant,
    3. compliance doesn’t appear by magic, someone has to put in work,
    4. companies that should be CRA compliant should help with that work.

    In the case where they don’t want to pitch in, well, something cURL-equivalent but known CRA-compliant won’t just fall off the back of a wagon, which means the companies that need compliance have a problem.

    Then again, apparently the HPE NonStop ecosystem has git available on that platform solely through the spare-time efforts of one dude, which absolutely shows that critical systems are willing to rely on precarious software, so I’m not gonna hold my breath.






  • Well, bash should show up quickly enough. But yeah.

    I’m also no longer much of a bash guy. Back when I was, my scripts were a lot simpler and broke in weird ways a lot more. And every time I picked up a new defensive habit (the sort of thing sketched below), my bash became a little bit uglier, and I thought to myself “maybe I should just do this in Python”.

    But this script would be a lot longer in Python.
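
    For the record, the defensive habits I mean look something like this (a sketch, not any particular script; the names are made up):

      #!/bin/bash
      set -euo pipefail                   # die on errors, unset variables and pipeline failures
      IFS=$'\n\t'                         # don't word-split on plain spaces

      dest="${1:?usage: $0 <dest-dir>}"   # fail loudly if the argument is missing
      tmpdir="$(mktemp -d)"
      trap 'rm -rf "$tmpdir"' EXIT        # clean up the temp dir even when something blows up

      printf 'working in %s, writing to %s\n' "$tmpdir" "$dest"   # quote every expansion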



  • For those who want to give it a go:

    #!/bin/bash
    set -euo pipefail
    
    while read -rd ":" path
    do
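      # each "path" here is one directory from $PATH; the trailing ":" appended at the bottom keeps read from skipping the last entry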
      for bin in "$path"/*
      do
        # don't error out if there's no manpage
        set +e
        man "$(basename "$bin")"
        set -e
      done
    done < <(printf '%s%s' "$PATH" ":")
    

    When you get sick of it, hit ^Z (ctrl-z) and go kill %1. Then you get to start over from the beginning next time!

    Bonus points for starting a tracker so you can count how long it takes to go from “eugh, what’s with that overwrought and excessively defensive bash script” to “fuck, now I’m doing it too”.


  • Humans also frequently need to try a wrong approach first to get the idea for a better approach, whether we’re rested or not. Which is why it’s important to be able to throw away prototypes rather than push an “it seemed like a good idea at the time” to prod.

    But a good sleep, a walk in the park, a shower, etc. all let us think better than if we’re just banging our heads in the same corner all day long. Breaks are important. General health, too.






  • We could probably stand to have some standards for organising repo roots, but I tend to agree that dotfiles aren’t the way to go there. The project root is similar to ~/.config and the like: when you’re there, you shouldn’t be subjected to further hidden levels. Those config files are a significant part of the project.

    State files, however, like all the stuff in .git, lockfiles and the like, are generally¹ fine to hide away. Those are side effects of running other tools, not ordinary editable configuration. Same goes for caches, and both cache and runtime files should likely go in the ordinary XDG dirs (sketched at the end of this comment) rather than be something every project has to set up a gitignore for.

    If anything I’m more frustrated with the C projects that just plop every source file in the root directory.

    ¹ Just don’t make it too easy to sneak unexpected crap in there. We don’t need to make the next Jia Tan’s job easier.
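
    As a rough sketch of what I mean by the XDG dirs (the tool name is hypothetical):

      # a tool following the XDG base directory spec keeps cache and state out of the repo
      cache_dir="${XDG_CACHE_HOME:-$HOME/.cache}/sometool"
      state_dir="${XDG_STATE_HOME:-$HOME/.local/state}/sometool"
      mkdir -p "$cache_dir" "$state_dir"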


  • I’ve only barely dipped my toes into dbus before, and the option to have something else is attractive on its face (not a fan of XML or the late-90s/early-aughts style of OOP), but JSON for a system interface?

    I mean, Kubernetes shows that YAML can work, but in this day and age I’d expect several options for serialisation, and for the default to be binary, not strings.

    String serialisations are primarily for humans IMO, either as readers or writers. As writers we want something with comments (and preferably no “find the missing }” game), so for that most of us would prefer something like TOML if the data is simple enough, and actually YAML for complexity at the level of Kubernetes; JSON manages to be even more of a PITA at that level.

    But machine-to-machine? Protobuf, Cap’n Proto, postcard, even CBOR should all be alternatives to examine.