

Yeah, at this point I’d say it’s safe to assume that


Why can I only see Yanderedev in the picture? 😭


I think so too, but I wanted a confirmation. I tried that website as well, but with it I could only get error messages if I also passed --fail to make it error on HTTP error codes


I couldn’t find an endpoint to test this with, so picking a random nonexistent domain I could only observe the following (commands below):

- with no parameters I get the errors
- with --silent I get none
- with --show-error I get the same output as with no parameters
- with both I get the same output as with no parameters again

This is as far as I can get without reading the source code; I already searched the internet with no luck, because all the other posts I could find assume it’s paired with another parameter
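For reference, the test boils down to something like this (the domain is just a placeholder; .invalid is reserved, so it never resolves):

```sh
# no flags: the error is printed ("curl: (6) Could not resolve host: ...")
curl http://nonexistent.invalid/

# --silent: suppresses the progress meter AND the error message
curl --silent http://nonexistent.invalid/

# --show-error alone: no visible difference from the no-flags run
curl --show-error http://nonexistent.invalid/

# both: progress meter stays off, error messages are re-enabled
curl --silent --show-error http://nonexistent.invalid/
```

Which matches the man page: -S/--show-error only exists to bring back the error messages that -s/--silent swallows.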


So in your experience it doesn’t do anything more if I don’t provide --silent?


How are they even hoping to keep it alive by hosting it on GitHub again?
I’m just speechless at these devs now. There are a million lessons to learn from, and they keep repeating the same mistakes over and over again like they somehow feel untouchable. They are on the right side legally, but that never stopped Microsoft: it doesn’t give a crap about what constitutes an actual DMCA violation and will always err on the side of caution


The core functionality behind git last-modified was written by GitHub over many years (originally called blame-tree in GitHub’s fork of Git), and is what has powered our tree-level blame since 2012. Earlier this year, we shared those patches with engineers at GitLab, who tidied up years of development into a reviewable series of patches which landed in this release.
The free software ethos at its finest, this is enough to make a grown man cry 🥹
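For context, tree-level blame is (roughly) “for every entry in a tree, which commit touched it last”. You can fake it with plain git, one full log walk per path, which as I understand it is exactly the slow part blame-tree avoids (paths and output format here are just for illustration):

```sh
# Naive stand-in for git last-modified: for each top-level entry,
# print the newest commit that touched it (one history walk per path)
git ls-tree --name-only HEAD | while IFS= read -r path; do
    printf '%-30s %s\n' "$path" \
        "$(git log -1 --date=short --format='%h %ad %s' -- "$path")"
done
```

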
That’s the joke XD


I always shiver at the thought that no PHP formatter (that I know of) can wrap lines that are too long… and my codebase has too many 🥲


Misfortune for 500


Does the shirt become purple if I click it?


I can’t recall which right now, but there are some that manage to scrape the entire content by spoofing the Google crawler.
Since websites want to maximise their SEO, they have to serve the crawler the raw content so it gets indexed better
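As far as I remember the trick is just sending Googlebot’s User-Agent string (the URL is a placeholder, and some sites also verify crawler IPs via reverse DNS, so it doesn’t work everywhere):

```sh
# Pretend to be Google's crawler; SEO-hungry sites often serve the
# full article to it instead of the paywalled/teaser version
curl --silent --user-agent \
  "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" \
  "https://example.com/some-article"
```

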
I know what kind of pfp they put on, I JUST CAN’T PROVE IT


What line from article 3 makes you think that? It sounds to me like it’s only talking about data processors inside and outside the EU that handle data of people in the EU


Agreed. I think the author’s feelings towards this are commendable in spirit, but letting a generic phrase stay forever attached to a political movement in any setting is a bit much; even if it’s infamously memorable, it doesn’t belong to the Nazis.
Still, it’s just a name change, so aside from a few lines of code to update it doesn’t badly affect anyone. All power to the author


Haha, I mean if you can get a megasquirt you’re obviously doing something very right, maybe too right


Oh it’s pretty much the same thing but IMO actually a little worse, so if you didn’t have luck with distrobox it’s either a limitation or a misconfiguration somewhere
Edit: what the heck is a MegaSquirt with a dedicated forum on msextra.com?? Lmao


How did you install it?
BTW, for these kinds of things toolbox is usually a good choice
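If it helps, the basic flow is something like this (the dnf line is only a hypothetical example of a dependency):

```sh
# Create a Fedora container that shares $HOME with the host,
# then open a shell inside it
toolbox create
toolbox enter

# inside the toolbox, packages stay off the base system, e.g.:
# sudo dnf install java-17-openjdk   # hypothetical dependency
```

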
Yes, it’s usually more efficient for more niche topics.
Luckily many things still come up in searches, be it Stack Overflow or other kinds of forums. The real problem is the SEO spam articles and the AI-generated stuff (and it’s worse still when they coexist), so it’s becoming harder to discern what is worthwhile and what isn’t. When all else fails I also always try to find my answers by playing around myself