
  • There’s a whole lot of entitlement going on in that thread.

    If the maintainers didn’t want to merge it because they had bigger issues to worry about, that’s that. Whining about it and trying to pressure them with prospects of “becoming obsolete [if you don’t merge this]” isn’t going to make a convincing argument.

    They should either shut the fuck up and learn to RTFM, or maybe consider putting their money where their mouths are by actually paying to support the projects whose direction they so desperately think they have a right to influence.


  • As a developer as well, I agree that they can get fucked. Bloated crap that wastes bandwidth and ruins time-to-first-paint on mobile devices by forcing them to download and initialize a multi-megabyte bundle of npm packages.

    As a user of the internet, however, I need websites to work. I would have disabled JavaScript entirely by now if it weren’t for the fact that doing so renders what feels like half of the entire web unusable.


  • Might be that there’s some way of blocking that behavior if you don’t like it, though, that I’m not seeing.

    Not without either breaking most SPAs (Single-Page Applications) or writing userscripts with site-specific logic.

    The classic way of doing this crap was to make a placeholder page that navigates to the article page. That leaves the placeholder in the history stack, so when the user presses the back button, it just reopens the page that navigates them forward again.
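    A minimal sketch of that pattern (the URL is hypothetical):

    ```typescript
    // Classic interstitial trap (sketch). location.assign() pushes the
    // article as a NEW history entry, so "back" lands on this page, which
    // immediately bounces the user forward again.
    window.location.assign("https://example.com/article");
    // A redirect page that didn't want this behavior would call
    // location.replace() instead, which swaps out the current entry.
    ```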

    The modern way is to use the history API: history.pushState adds a history entry, while a listener for the popstate event checks whether the user pressed the back button. Unfortunately, both of those features have legitimate use cases for enabling navigation within a SPA. Writing an extension to replace them with no-ops would, in the best case, break page history on SPA websites. In the worst case, it would break page routing entirely.
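    Sketched out, the trap is only a few lines (the state payload is arbitrary):

    ```typescript
    // Modern back-button trap (sketch). Push a duplicate entry so the
    // first "back" press stays on this page, then re-arm on every popstate.
    history.pushState({ trap: true }, "", location.href);
    window.addEventListener("popstate", () => {
      // The user pressed back; push another entry to keep them here.
      history.pushState({ trap: true }, "", location.href);
    });
    ```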

    You might be able to get away with conditionally no-oping their functionality based on heuristics such as “only allow pushState if the user interacted with the page in the last 5 seconds,” but it would still end up breaking some websites.
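    For instance, a hypothetical userscript along these lines (names made up), which would still break any SPA that navigates outside that window:

    ```typescript
    // Only allow pushState within 5 seconds of a real user interaction.
    let lastInteraction = 0;
    for (const type of ["pointerdown", "keydown"]) {
      window.addEventListener(type, () => { lastInteraction = Date.now(); }, true);
    }

    const realPushState = history.pushState.bind(history);
    history.pushState = (...args: Parameters<History["pushState"]>) => {
      if (Date.now() - lastInteraction < 5_000) {
        realPushState(...args);
      }
      // Otherwise, silently drop the history entry.
    };
    ```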





  • I’d be surprised if it’s not easy to transpile a Markdown document into the format

    By hand, if you have experience writing roff markup, it is.

    Having a program do it for you… you’re going to get something, but it won’t be correct and you will need to fix most of it.

    A few problems come to mind:

    1. It’s a macro-based typesetting language. As a consequence, there’s a one-to-many association between a representation in Markdown and its possible equivalents in roff. A Markdown paragraph is just a paragraph, but in roff it could be an un-indented paragraph, a paragraph with first-line indentation, a paragraph with hanging indentation, or a paragraph with a left margin.

    2. When rendering a man page, there are multiple implementations of man and multiple implementations of *roff (roff, troff, groff, nroff). The set of macros and features available differs depending on the implementation, so one-size-fits-all solutions end up targeting the lowest common denominator.

    3. Ironically, the one-to-many association goes both ways. With Markdown, you have code fences, quotes, italic text, bold text, and tables. With lowest-common-denominator manpage roff, you have paragraphs and emphasis that will be shown as either bold or inverted. If you’re lucky, you might also get underlines. If Markdown tables are no wider than 80 characters, you could at least preprocess those into plain characters.

    4. Despite being more structured in its typesetting, the contents of a manpage are still mostly unstructured. The individual sections within the page, and their use of indentation and emphasis, are entirely convention, represented in the source by nothing more than typesetting macro primitives.

    It could work out if you generate both the Markdown and the man page from something with more explicit structure. If the plan is to go from a loose Markdown document to a manpage, you’re going to end up writing your Markdown almost exactly like a manpage anyway; the sketch below shows why.
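    To make point 3 concrete, here’s a deliberately naive sketch of that kind of lowest-common-denominator conversion (all names hypothetical). Every paragraph flattens into the same macro, and the indentation intent was never there to recover:

    ```typescript
    // Naive Markdown → manpage roff pass (illustrative sketch only).
    function mdToRoff(md: string): string {
      return md
        .split(/\n{2,}/) // every Markdown paragraph...
        .map((para) =>
          ".PP\n" + // ...collapses into the same plain-paragraph macro
          para
            .replace(/\*\*(.+?)\*\*/g, "\\fB$1\\fR") // bold → roff bold
            .replace(/\*(.+?)\*/g, "\\fI$1\\fR")     // italic → roff italic
        )
        .join("\n");
    }
    // Going the other way is no better: nothing records whether a .PP
    // should have been an .IP, .TP, or .RS/.RE construct in the first place.
    ```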






  • How’s the weather up there, on your high horse?

    Rust wasn’t meant to be the be-all, end-all solution to safety and soundness; it’s meant to be better than the alternatives by confining potential memory-safety issues to explicitly annotated unsafe blocks.

    But, hey. That’s okay. With that kind of gloating attitude, I’m sure your code is 100% safe and vulnerability-free, too. Just remind me never to set foot anywhere near an industrial system or operating system that uses it.






  • The phoronix comment section is a garden of rationality and level-headed thinking in comparison.

    Any time Rust is brought up on Phoronix, half of the comments are bad-faith idiots making strawman and whataboutism arguments amounting to “skill issue; C is 300% safe and nobody needs better,” plus thinly-veiled contrarian antagonism against Rust because it’s popular.

    A comment section worse than that? Impressive.


  • That was something they could actually market to the consumer as a necessary upgrade, though.

    • “Sure, you need a new cable, but component video has cleaner edges and less color bleeding.”
    • “Sure, you need a new cable, but HDMI has better resolution and no fuzziness.”

    Going from HDMI 2.1 to DisplayPort 2.1a doesn’t offer anything other than higher bandwidth, and not even high-end PCs can push resolutions at framerates high enough for that bandwidth to be the limiting factor in games.
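    Back-of-envelope, ignoring blanking intervals and link-layer overhead:

    ```typescript
    // Rough uncompressed bandwidth for 4K @ 120 Hz, 10-bit RGB.
    const [width, height, fps, bitsPerPixel] = [3840, 2160, 120, 30];
    const gbps = (width * height * fps * bitsPerPixel) / 1e9;
    console.log(`${gbps.toFixed(1)} Gbit/s`); // ≈ 29.9, well within HDMI 2.1's 48 Gbit/s
    ```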

    Because of that lack of perceptible benefit, the optics of replacing HDMI on consumer devices meant to be connected to TVs aren’t going to be good. Even if it’s an objectively better standard from a technical perspective, it will just come across to consumers as an unnecessary change meant to push their TVs toward planned obsolescence.

    They’re going to complain about it, the media will pick up the story and try to turn it into a scandal, and then legislators and regulators will step in and make decisions based on a limited understanding of the technical reasons. By that point, one of the console manufacturers will have been pressured into backing down and promising to keep HDMI in their next-gen console, and the others will have followed suit because they don’t want to lose sales over it.

    The only way console manufacturers are going to stay united in kicking HDMI to the curb is if the organization behind HDMI pulls a Unity move and starts charging royalties to the manufacturers for every time a consumer plugs the console into a TV.


  • As long as the manufacturers are competing against each other, that’s never going to happen.

    The “gamer” consumer demographic has some of the whiniest, most entitled vocal minorities. They’re going to endlessly complain about the next generation of consoles needing a special cable or dongle to connect to their TV, one of the manufacturers is going to fold, and then the other is going to walk back the lack of HDMI because they don’t want to lose sales to their competitor.