Author, philosopher, programmer, entrepreneur, father and husband.

Philosophy of Balance | Substack | Fiction | Homepage

  • 3 Posts
  • 30 Comments
Joined 2 years ago
Cake day: February 26th, 2024


  • I’m not sure I agree with the “no one claimed” part, because I think the proof specifically targets the claim that, due to the “ease of scaling” if simulated realities are a thing, it is more likely than not that we are living in a simulation. And that claim is one of the core premises of simulation theory.

    In any case, I don’t think the reasoning only applies to “full scale” simulations. After all, let’s follow the thought experiment and presume that quantum mechanics is indeed the result of some kind of “lazy evaluation” optimisation within a simulation. Unless you want to argue solipsism in addition to simulation theory, the simulation is still generating perceptions for every single conscious actor within it, and it therefore still needs to implement some kind of “theory of everything” to ensure all perceptions across actors are generated consistently.

    And ultimately, we still end up with the requirement that there is some kind of “higher order” universe whose nature is fundamentally unknowable and beyond our understanding. Presuming that such a universe exists and manages our own seems to me a masked belief in creationism, and therefore God, while trying very hard to avoid such words.

    The irony is that the thought experiment started with “pesky weird behaviours” that we can’t explain. Assuming that our “parent universe” is somehow easier to explain is just wishful thinking, about as rational as wishing a God to be responsible for it all.

    I’ll be straight here: I’m a deist. I do think that, given sufficient thought on these matters, we must ultimately admit there is a deity, a higher power that we cannot understand. We may as well call it God, because even though it’s not a religious idea of God, it is fundamentally beyond our capacity to understand. I just think simulation theory is a bit of a roundabout way to get there, as there are easier ways to reach the same conclusion :)


  • It’s possible, yes, but the nice thing is that we know we are not merely talking about “advanced people with vastly superior technology” here. The proof implies that technology within our own universe could never simulate our own universe, no matter how advanced or superior.

    So if our universe is a “simulation”, at least it wouldn’t be an algorithmic one that fits our understanding. Indeed, we still cannot rule out that our universe exists within another, but such a universe would need to be a higher-order reality with truths that are fundamentally beyond our understanding. Sure, you could still call it a “simulation”, but if it doesn’t fit our understanding of a simulation, it might as well be called “God” or “spirituality”, because the truth is we wouldn’t understand a thing about it, and we might as well acknowledge that.


  • I don’t understand why you’re getting downvoted. While I don’t share your conviction, I do admit it’s certainly a possibility.

    The advantage of doing things that way is that code becomes much more portable. We may finally reach the goal of “write once, run anywhere”, because the AI may write all the platform-specific code.

    It does rest on a big assumption, though: that the AI output is reliable enough. At times people will want to tweak the output, so how are they going to go about that? Maybe if the language is based on Markdown, you could inject snippets of code where necessary, as in the sketch below. But if you have to do that too often, such a language will lose its appeal.

    There’s a lot of unknowns, but I see why it’s a tempting idea.
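
    To make the injection idea concrete, here is a purely hypothetical sketch of what such a Markdown-based source file might look like. The format and the escape-hatch mechanism are invented for illustration (nothing like this exists today), with the handwritten override snippet written in Rust:

    ```markdown
    # Photo importer

    Watch the folder the user selects and copy any new image into the
    library, skipping files the library already contains.

    <!-- hypothetical escape hatch: handwritten code replacing the
         generated implementation where the output needed tweaking -->
    ~~~rust
    // Hypothetical override: call the OS copy facility on Linux
    // directly instead of whatever the generator would have emitted.
    #[cfg(target_os = "linux")]
    fn copy_image(src: &std::path::Path, dst: &std::path::Path) -> std::io::Result<u64> {
        std::fs::copy(src, dst)
    }
    ~~~
    ```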


  • You know, as a full-time Linux user, I think I’d rather have game developers continue to create Windows executables.

    Unlike most software, games have a tendency to be released, then supported for one or two years, and then abandoned. But meanwhile, operating systems and libraries move on.

    If you have a native Linux build of a game from 10 years ago, good luck trying to run it on a modern system. With Windows builds running under Wine or Proton, you actually have a better chance of running games from 10 or even 20 years ago.

    Meanwhile, thanks to Valve’s efforts, Windows builds have an incentive to target Vulkan, and they’re getting tested on Linux. That’s what we should focus on IMO, because those things make games better supported on Linux. Which platform the binary is compiled for is an implementation detail… and Win32 is actually the more stable target.


  • I would argue that because C is so hard to program in, even the claim to machine efficiency is questionable. Yes, if you have infinite time for implementation, then C is among the most efficient languages, but the same applies to C++, Rust and Zig, because with infinite time any artificial hurdle can be cleared by the programmer.

    In practice, however, programmers have limited time. That means they need to use the tools of the language to save themselves time. Languages with higher levels of abstraction make it easier, not harder, to reach high performance, assuming the abstractions don’t impose too much overhead. C++, Rust and Zig all qualify in this regard.

    An example is the situation where you need a hash map or B-tree map to implement efficient lookups; see the sketch below. The languages with higher abstraction give you reusable, high-performance options out of the box. The C programmer will need to either roll their own, which may not be an option if time is limited, or choose a lower-performance alternative.
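
    To illustrate, here is a minimal Rust sketch (names invented) of the kind of lookup that comes for free with a standard library, and that a C programmer would have to hand-roll or pull in a third-party library for:

    ```rust
    use std::collections::HashMap;

    fn main() {
        // A well-tested, high-performance hash table out of the box.
        let mut index: HashMap<u32, &str> = HashMap::new();
        index.insert(1, "alice");
        index.insert(2, "bob");

        // O(1) average-case lookup, with no hand-rolled hashing,
        // probing or resizing logic to get wrong.
        if let Some(name) = index.get(&2) {
            println!("user 2 is {name}");
        }
    }
    ```

    Swapping `HashMap` for `std::collections::BTreeMap` gives ordered keys with the same one-line ergonomics.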


  • Of course, but it needn’t be black and white. You can also diversify, make yourself less reliant on a single platform. And by doing so, enable your audience to follow you elsewhere. Or diversify into different activities altogether. And when it’s no longer half your income on the line, then switch.

    But doing nothing and saying, “but half my income!”? That’s not only a choice, but also complacency.


  • I found the title of that section slightly triggering too, but the argument they lay down actually makes sense. Consistency helps you achieve correctness in large codebases, because it means you don’t have to reinvent what is correct over and over in separate pockets of the codebase. Such pockets also make incremental improvements to the codebase harder and harder, so they do come back to bite you; the sketch below shows the failure mode.

    Your example of vendors doesn’t relate to that, because you don’t control your vendor’s code. But you do control your organisation’s.
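
    To illustrate with a contrived Rust sketch (all names invented): a consistent codebase funnels every call site through one vetted helper, while an inconsistent pocket re-derives the rules and gets them subtly wrong:

    ```rust
    // The single place where the validation rules live.
    mod ids {
        pub struct UserId(pub u64);

        impl UserId {
            pub fn parse(raw: &str) -> Option<UserId> {
                raw.trim().parse().ok().map(UserId)
            }
        }
    }

    // Consistent pocket: reuses the vetted helper.
    fn handler_a(raw: &str) -> Option<ids::UserId> {
        ids::UserId::parse(raw)
    }

    // Inconsistent pocket: re-derives the rules and quietly drops the
    // trim, so the same input is valid in one endpoint but not the other.
    fn handler_b(raw: &str) -> Option<u64> {
        raw.parse().ok()
    }

    fn main() {
        assert!(handler_a(" 42 ").is_some());
        assert!(handler_b(" 42 ").is_none()); // the divergence in action
    }
    ```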