

Captchas are getting out of hand.


Don’t tell that to the kids in Finland 😉
https://santaclausvillage.info/activities/santa-claus-main-post-office/


I don’t understand why you’re getting downvoted. While I don’t share your conviction, I do admit it’s certainly a possibility.
The advantage of doing things that way is that code becomes much more portable. We may finally reach the goal of “write once, run anywhere”, because the AI may write all the platform specific code.
It does make a big assumption that the AI output is reliable enough though. At times people will want to tweak the output, so how are they gonna go about that? Maybe if the language is based on Markdown, you can inject snippets of code where necessary. But if you have to do that too often, such a language will lose its appeal.
There are a lot of unknowns, but I see why it’s a tempting idea.


You know, as a full-time Linux user, I think I’d rather have game developers continue to create Windows executables.
Unlike most software, games have a tendency to be released, then supported for one or two years, and then abandoned. But meanwhile, operating systems and libraries move on.
If you have a native Linux build of a game from 10 years ago, good luck trying to run it on your modern system. With Windows builds, using Wine or Proton, you actually have better chances running games from 10 or even 20 years ago.
Meanwhile, thanks to Valve’s efforts, Windows builds have incentive to target Vulkan, they’re getting tested on Linux. That’s what we should focus on IMO, because those things make games better supported on Linux. Which platform the binary is compiled for is an implementation detail… and Win32 is actually the more stable target.


tsc is (very) slow and there are also no convenient ways to interact with it from Rust.
So it saves a lot of development and CI time to roll our own. The downside is that our inference still isn’t as good as tsc’s, of course, but we’re hopeful the community can help us get very close at least.
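For context, a rough Rust sketch of what “interacting with tsc” tends to mean in practice, namely shelling out to the tsc CLI and parsing its text diagnostics (the `check_with_tsc` helper and the project path are made up for the example, and `tsc` is assumed to be installed and on PATH):

```rust
use std::process::Command;

/// Hypothetical helper: type-check a TypeScript project by running the tsc CLI
/// and collecting its error lines.
fn check_with_tsc(project_dir: &str) -> std::io::Result<Vec<String>> {
    // `--noEmit` type-checks without emitting JS; `--pretty false` keeps the
    // output in the plain `file.ts(line,col): error TSxxxx: message` form.
    let output = Command::new("tsc")
        .args(["--noEmit", "--pretty", "false"])
        .current_dir(project_dir)
        .output()?;

    let stdout = String::from_utf8_lossy(&output.stdout);
    Ok(stdout
        .lines()
        .filter(|line| line.contains(": error TS"))
        .map(str::to_owned)
        .collect())
}

fn main() -> std::io::Result<()> {
    for error in check_with_tsc("./my-ts-project")? {
        eprintln!("{error}");
    }
    Ok(())
}
```

Every check pays for starting a Node.js process and re-checking the whole project, plus string parsing on the Rust side, which is a big part of where the development and CI time goes.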


Heh, I agree with everything you said, but I’m afraid such a framework is impossible to create, let alone implement. It’s impossible to foresee the infinite possibilities for people to screw themselves through bad decisions, so all you’d create is a lot of bureaucracy to still end up in the same place.


It’s that the compiler doesn’t help you with preventing race conditions. This makes some problems so hard to solve in C that C programmers simply stay away from attempting it, because they fear the complexity involved.
It’s a variation of the same theme: Maybe a C programmer could do it too, given infinite time and skill. But in practice it’s often not feasible.
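To make that concrete, a small Rust sketch (just an illustration, not from the discussion): the compiler only accepts shared mutable state across threads once it’s behind a synchronization primitive, which is exactly the kind of help I mean.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared counter: without Arc<Mutex<...>>, handing a `&mut u64` to several
    // threads simply does not compile, so the data race is ruled out up front.
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    // Always 4000; the compiler forced us into a race-free design.
    println!("total = {}", *counter.lock().unwrap());
}
```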


Which one should I pick then, that is both as fast as the std solutions in the other languages and as reusable for arbitrary use cases?
Because it sounds like your initial pick made you lose the machine efficiency argument, and you can’t have it both ways.


I’m not saying you can’t, but it’s a lot more work to use such solutions, to say nothing of their quality compared to std solutions in other languages.
And it’s also just one example. If we bring multi-threading into it, we’re opening another can of worms where C doesn’t particularly shine.


Well, let’s be real: many C programs don’t want to rely on GLib, and licensing (as the other reply mentioned) is only one reason. GLib is not exactly known for high performance, and is significantly slower than the alternatives offered by the other languages I mentioned.


I would argue that because C is so hard to program in, even the claim to machine efficiency is arguable. Yes, if you have infinite time for implementation, then C is among the most efficient, but then the same applies to C++, Rust and Zig too, because with infinite time any artificial hurdle can be cleared by the programmer.
In practice however, programmers have limited time. That means they need to use the tools of the language to save themselves time. Languages with higher levels of abstraction make it easier, not harder, to reach high performance, assuming the abstractions don’t provide too much overhead. C++, Rust and Zig all apply in this domain.
An example is the situation where you need a hash map or B-Tree map to implement efficient lookups. The languages with higher abstraction give you reusable, high-performance options. The C programmer will need to either roll his own, which may not be an option if time is limited, or choose a lower-performance alternative.
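For illustration, a sketch in Rust (one of the languages mentioned): the standard library’s maps are the kind of reusable, high-performance option I mean, so an efficient lookup table is a few lines rather than a hand-rolled data structure.

```rust
use std::collections::{BTreeMap, HashMap};

fn main() {
    // Hash map: O(1) average-case lookups, no hand-written hashing or probing.
    let mut user_by_id: HashMap<u32, &str> = HashMap::new();
    user_by_id.insert(42, "alice");
    user_by_id.insert(7, "bob");
    assert_eq!(user_by_id.get(&42), Some(&"alice"));

    // B-Tree map: ordered keys, O(log n) lookups, and cheap range queries.
    let mut id_by_name: BTreeMap<&str, u32> = BTreeMap::new();
    id_by_name.insert("alice", 42);
    id_by_name.insert("bob", 7);
    for (name, id) in id_by_name.range("a".."b") {
        println!("{name} -> {id}"); // prints: alice -> 42
    }
}
```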


Of course, but it needn’t be black and white. You can also diversify, make yourself less reliant on a single platform. And by doing so, enable your audience to follow you elsewhere. Or diversify into different activities altogether. And when it’s no longer half your income on the line, then switch.
But doing nothing and saying, “but half my income!”? That’s not only a choice, but also complacency.


Great points, except:
“People can’t leave for anything smaller.”
They can and some do. It’s still a choice.


I’m not arguing against that. Merely providing some counterweight to the idea that the author was “flinging shit in the trenches” 😅


I found the title of that section slightly triggering too, but the argument they lay out actually makes sense. Consistency helps you achieve correctness in large codebases, because it means you don’t have to reinvent what is correct over and over in separate pockets of the codebase. Such pockets also make incremental improvements to the codebase harder and harder, so they do come back to bite you.
Your example of vendors doesn’t relate to that, because you don’t control your vendor’s code. But you do control your organisation’s.


There is a serious attempt for that actually: https://www.assemblyscript.org/
It doesn’t offer full compatibility with the regular TypeScript though, despite being very similar.


But he did step in, albeit privately. I actually agree an earlier public statement would have helped, but we don’t know the specifics of what went on behind the scenes.
In any case, I don’t think it’s fair to assign blame for Marcan’s burnout to Linus, as the post above did. Marcan himself mentioned personal reasons too when he announced his departure. I think we should show understanding and patience with both sides, and assigning blame isn’t helping with that.


That now involves fixing Rust drivers, so you’re going to need to know Rust.
I also don’t think the latter follows from the former. You can continue to not know Rust as long as you’re willing to work with those that can. Problems only start if you’re unwilling to collaborate.


I dunno, I have a Framework laptop and had a keyboard issue with it. It still worked, but one of the keys didn’t register well. So they sent me a new keyboard and I sent them back the old one after I’d swapped it. Not a single day was I without my laptop, which seems unlikely with other laptop brands, given the support you get (or don’t) from them. No buyer’s remorse here.