• 0 Posts
  • 100 Comments
Joined 2 years ago
Cake day: July 5th, 2023

  • NASA funded SpaceX based on hitting milestones in its COTS program. Those milestones were just as available to Boeing and Blue Origin, but those companies had less success meeting them and turning a profit under fixed-price contracts (as opposed to the traditional cost-plus contracts). It’s still NASA-defined standards, only with the risk and uncertainty offloaded onto the private contractors, which was great for SpaceX and terrible for Boeing.

    But ultimately it’s still just contracting.


  • NASA has always been dependent on commercial, for-profit entities as contractors. The Space Shuttle was developed by Rockwell International (which was later acquired by Boeing). The Apollo Program relied heavily on Boeing, Douglas Aircraft (which later merged into McDonnell Douglas and then into Boeing), North American Aviation (which later became Rockwell and was acquired by Boeing), and IBM. Lots of cutting-edge work in that era came out of government contracts throwing money at private corporations.

    That’s the whole military industrial complex Eisenhower was talking about.

    The only difference with today is that space companies have other customers to choose from, not just NASA (or the Air Force/Space Force).




  • Physics don’t change fundamentally between 6 meters and 120 meters

    Yes, it does. The mass-to-strength ratio of structural components changes with scale. So does the thrust-to-mass ratio of a rocket and its fuel. So does heat dissipation (governed by the ratio of surface area to mass).

    And I don’t know shit about fluid dynamics, but I’m skeptical that things scale cleanly, either.

    Scaling upward will encounter challenges not apparent at small sizes. That goes for everything from engineering bridges to buildings to cars to boats to aircraft to spacecraft.
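
    Rough numbers make the point. Here’s a minimal square-cube sketch; the scale factors and the ratios are purely illustrative, not real vehicle numbers:

    ```typescript
    // Back-of-the-envelope square-cube illustration (not a real rocket model):
    // scale every linear dimension by a factor k. Volume (and roughly mass and
    // propellant) grows as k^3, while cross-sectional area (structural load paths)
    // and surface area (heat rejection) grow only as k^2.
    function scaleRatios(k: number): { loadPerArea: number; surfacePerMass: number } {
      const volume = k ** 3; // ~ mass, propellant
      const area = k ** 2;   // ~ structural cross-section, skin area
      return {
        loadPerArea: volume / area,    // grows linearly with k: structures work harder
        surfacePerMass: area / volume, // shrinks as 1/k: less skin per unit mass to shed heat
      };
    }

    for (const k of [1, 5, 20]) {
      console.log(`scale x${k}`, scaleRatios(k));
    }
    ```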





  • It’s a chain of trust; you have to trust the whole chain.

    Including the entire other side of the conversation. E2EE in a group chat still exposes the whole chat if even one participant shares their own key (or the messages themselves) with something insecure. Obviously any participant can copy and paste things, or archive, log, or screenshot them, and all of that can be automated, too.

    Take, for example, iMessage. We have pretty good confidence that Apple can’t read your chats when you have configured it correctly: E2EE, no iCloud archiving of the chats, no backups of the keys. But do you trust that the other side of the conversation has done the exact same thing correctly?

    Or take, for example, the stupid case of senior American military officials accidentally adding a prominent journalist to their war-plans Signal chat. That wasn’t a technical failure of Signal’s encryption, but a mistake by one of the participants, who invited the wrong person; that person then published the chat to the world.





  • They’re actually only about 48% accurate, meaning that they’re more often wrong than right and you are 2% more likely to guess the right answer.

    Wait, what are the Bayesian priors? Are we assuming that the baseline is 50% true and 50% false? And what is its error rate in false positives versus false negatives? All of these matter for determining, after the fact, how much probability to assign to the test being right or wrong.

    Put another way, imagine a stupid device that just says “true” literally every time. If I hook that device up to a person who never lies, then that machine is 100% accurate! If I hook that same device to a person who only lies 5% of the time, it’s still 95% accurate.

    So what do you mean by 48% accurate? That’s not enough information to do anything with.
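
    To make that concrete, here’s a toy Bayes calculation; the 90% detection rate and 40% false-positive rate below are made up purely for illustration:

    ```typescript
    // Toy Bayes update: P(lying | machine flags a lie) depends on the prior rate
    // of lying and on the separate true-positive and false-positive rates.
    // A single "X% accurate" figure doesn't pin any of this down.
    function posteriorLieGivenFlag(priorLie: number, truePositiveRate: number, falsePositiveRate: number): number {
      const pFlag = truePositiveRate * priorLie + falsePositiveRate * (1 - priorLie);
      return (truePositiveRate * priorLie) / pFlag;
    }

    // Same machine, different populations (all rates made up):
    console.log(posteriorLieGivenFlag(0.05, 0.9, 0.4)); // rare liars: a flag is weak evidence (~0.11)
    console.log(posteriorLieGivenFlag(0.50, 0.9, 0.4)); // 50/50 prior: the same flag is much stronger (~0.69)
    ```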


  • Yeah, from what I remember, Web 2.0 meant services that could be interactive within the browser window, without loading a whole new page every time the user submitted information through an HTTP POST. “Ajax” was the hot buzzword among web/tech companies.

    Flickr was mind-blowing in that you could edit photo captions and titles without navigating away from the page. Gmail could refresh the inbox without reloading the sidebar. Google Maps was impressive in that you could drag the map around and zoom within the window while it fetched the graphical elements it needed on demand.
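
    The gist of that pattern looks something like the sketch below; the endpoint and element id are hypothetical, and it uses fetch rather than the XMLHttpRequest calls of that era:

    ```typescript
    // Sketch of the "Ajax" pattern: send the edit in the background and patch
    // the page in place, instead of a full-page reload on every form submit.
    async function saveCaption(photoId: string, caption: string): Promise<void> {
      const response = await fetch(`/api/photos/${photoId}/caption`, { // hypothetical endpoint
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ caption }),
      });
      const updated = await response.json();

      // Update only the caption element; the rest of the page never reloads.
      const el = document.getElementById(`caption-${photoId}`); // hypothetical element id
      if (el) el.textContent = updated.caption;
    }
    ```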

    Or maybe Web 2.0 included the ability to implement state on top of the stateless HTTP protocol. You could log into a page and it would show only the new/unread items for you personally, rather than showing literally every visitor the exact same thing at the exact same URL.

    Social networking became possible with Web 2.0 technologies, but I wouldn’t define Web 2.0 as inherently social. User interaction with a service was the core, and whether the service connected users to each other through its design was kinda beside the point.




  • taking a shot at installing a new OS

    To be clear, I had been on Ubuntu for about 4 years by then, having switched when 6.06 LTS came out. And several years before that, I had installed Windows Me, the XP beta, and the first official XP release on a home-built machine, the first computer that was actually mine, bought with student loan money paid out because my degree program required all students to have their own computer.

    But the freedom to tinker with software was by no means the same as the flexibility to acquire spare hardware. Computers were really expensive in the ’90s and still pretty expensive in the 2000s, especially laptops, at a time when color LCD technology was still pretty new.

    That’s why I assumed you were a different age from me: either old enough to have been tinkering with computers long enough to have spare parts, or young enough to still live with middle-class parents who had computers and Internet at home.


  • That’s never really been true. It’s a cat-and-mouse game.

    If Google actually used its 2015 or 2005 algorithms as written, but on a 2025 index of webpages, that ranking system would be dogshit because the spammers have already figured out how to crowd out the actual quality pages with their own manipulated results.

    Tricking the 2015 engine with 2025 SEO techniques is easy. The problem is that Google hasn’t actually been on the winning side of properly ranking quality for maybe 5-10 years; it quietly outsourced its search ranking to the ranking systems of the big user-driven sites: Pinterest, Quora, Stack Overflow, Reddit, even Twitter to some degree. If a result was responsive and ranked highly on those user-voted sites, it was probably a good result. And Google got away with that methodology just long enough for each of those services to drown in their own SEO spam, so those services are all much worse than they were in 2015. Basing search rankings on those sites no longer produces good results.

    There’s no turning back. We need to adopt new rankings for the new reality, not try to return to a time when we were able to get good results.


    I can’t tell if you were rich, or just not the right age to appreciate that it wasn’t exactly common for a young adult fresh out of college to have spare computers lying around (much less the budget to spare on a $300-500 secondary device just for browsing the internet). If I upgraded computers, I sold the old one used if it was working, or for parts if it wasn’t. I definitely wasn’t packing up secondary computers to bring with me when I moved cities for a new job.

    Yes, I had access to a work computer at the office, but it would’ve been weird to bring in my own computer to work on after hours, while using the Internet from my cubicle for personal stuff.

    I could’ve asked to borrow a roommate’s computer or asked them to look stuff up for me, but that, like going to the office or a library to use the Internet there, would’ve been a lot more friction than I was willing to put up with for a side project at home.

    And so it’s not that I think it’s weird to have a secondary internet-connected device before 2010. It’s that I think it’s weird to not understand that not everyone else did.