

Having used it, it is. Immich is awesome.
Same. I didn’t use RoR before it was cool.
(Honestly I just didn’t like the syntax of Ruby and there were tons of great alternatives already by the time I was looking)
This sounds to me like we need to move to Germany. It’s not uncommon for people in the US to apply to hundreds, or even thousands, of jobs and get a single-digit number of interviews (or offers, in industries where interviewing is uncommon) out of it, regardless of effort put into the application. Most applications are rejected before a human ever reads them.
The medium (lol) is annoying, but it didn’t ask me to pay. Is the article not free for you?
The article goes into depth about what you should be using. Floats and doubles are not designed for use with base 10 fractions. They’re good at estimating them, but not accurate enough for real financial use.
There’s also not much reason to reinvent the wheel for an already solved problem. Many languages have this data type already built into the language, and the rest usually have it available through a package.
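To make the problem concrete, here’s a small Rust illustration (mine, not from the article): binary floats can’t represent 0.1 or 0.2 exactly, so naive arithmetic drifts, whereas keeping amounts as integer minor units (or reaching for a dedicated decimal library such as rust_decimal) stays exact.

```rust
// Illustration only: why binary floats are a poor fit for base-10 money.
fn main() {
    // 0.1 and 0.2 have no exact binary representation, so the sum drifts.
    let a: f64 = 0.1;
    let b: f64 = 0.2;
    println!("{:.20}", a + b);     // prints 0.30000000000000004441 on a typical f64
    println!("{}", (a + b) == 0.3); // false

    // Exact alternative: keep amounts as integer minor units (cents).
    let price_cents: i64 = 10 + 20;
    println!("{} cents", price_cents); // 30 cents, no rounding error
}
```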
Browsers have supported the javascript: scheme for a long time, so I guess it just abuses that.
I agree, Zig is awesome. But the author missed the entire point of the borrow checker. It exists to make you a better programmer, not just to annoy you. The author then immediately showcased why the borrow checker exists in their own example of why it’s annoying, lol.
In Zig, we would just allocate the list with an allocator, store pointers into it for the tag index, and mutate freely when we need to add or remove notes. No lifetimes, no extra wrappers, no compiler gymnastics, that’s a lot more straightforward.
What happens to the pointers into the list when the list needs to reallocate its backing buffer when an “add” exceeds its capacity?
Rust’s borrow checker isn’t really just a “Rust-ism”. The problems it guards against exist in every low-level language, and often in higher-level ones too. Zig doesn’t let you ignore what Rust is protecting against; it just checks it differently and puts more responsibility on the developer.
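To make the reallocation hazard concrete, here’s a tiny Rust sketch (mine, not from the article): a reference held into a Vec while the Vec is pushed to. If the push reallocates, the old reference would point into freed memory, which is exactly why the commented-out line is rejected.

```rust
fn main() {
    let mut notes = vec![String::from("buy milk")];

    let first = &notes[0];                 // reference into the Vec's backing buffer
    println!("before push: {first}");      // using the borrow here is fine

    notes.push(String::from("call bob"));  // may reallocate and move that buffer

    // Using `first` again *after* the push is rejected by the borrow checker:
    // println!("after push: {first}");
    // error[E0502]: cannot borrow `notes` as mutable because it is also borrowed as immutable

    println!("after push: {}", notes[0]);  // re-indexing after the push is fine
}
```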
But this case is bigger than JavaScript. It’s about whether trademark law works as written, or whether billion-dollar corporations can ignore the rule that trademarks cannot be generic or abandoned. “JavaScript” is obviously both. If Oracle wins anyway, it undermines the integrity of the whole system.
If the law costs $200k to enforce, then the law already doesn’t work as written.
Anyway, good luck Deno! We’re all hoping you win this.
Storing UI assets in a database is unusual because assets aren’t data; they’re part of your UI. This is of course assuming a website - an application may choose to save assets in a local sqlite database or similar for convenience.
It’s the same reason I wouldn’t store static images in a database though - there’s no reason to do so. Databases provide no additional value over just storing the images next to the code, and same with localizations.
User-generated content changes things because that data is now dynamically generated, not static assets for a frontend.
I know I probably sound like an ass but it really is that bad
Nah I work in shitty codebases on a regular basis, and the less I need to touch them, the happier I am.
With regards to other localization changes, it’s not important to localize everything perfectly, but it’s good to be aware of what you can improve and what might cause some users to be less comfortable with the interface. That way you’re informed and can properly justify a sacrifice (like “it’d cost us a lot of time to support RTL interfaces but only 0.1% of users would use them”) rather than be surprised that there even is one being made.
Also, user-generated content explains why these are in a DB, and now it makes a lot more sense to me. Using user-generated translations as-is makes more sense than trying to force Project Fluent (or other similar tools) into it.
Localization is a hard problem, but storing your translations in the DB is a bit unusual unless you’re trying to translate user data or something.
I’d recommend looking into tools like Project Fluent or similar that are designed around translating.
As for the schema you have, if you’re sticking with it, I would change the language column to an IETF language tag or similar. The important part is that it keeps language variants separate. For example, US English and British (or international) English have differences, Brazilian Portuguese and European Portuguese have differences, Mexican Spanish and Spain Spanish have differences, etc.
Using an ID instead of the text content itself as part of the PK should be a no-brainer. Languages evolve over time, and translations change. PKs should not. Your choice of PK = (TextContentId, Language) is the most reasonable to me, though I still think that translations should live as assets to your application instead to better integrate with existing localization tools.
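If it helps, here’s a rough Rust sketch of the shape I’m describing (all names and IDs are made up for illustration): the lookup key is (text content ID, IETF language tag), so regional variants stay separate, and editing a translation only ever replaces the value, never the key.

```rust
// Hypothetical illustration: translations keyed by (content ID, IETF language tag).
use std::collections::HashMap;

type TextContentId = u64;

fn main() {
    let mut translations: HashMap<(TextContentId, &str), String> = HashMap::new();

    // Regional variants of the same string get distinct keys.
    translations.insert((42, "en-US"), "Color".to_string());
    translations.insert((42, "en-GB"), "Colour".to_string());
    translations.insert((42, "pt-BR"), "Cor".to_string());
    translations.insert((42, "pt-PT"), "Cor".to_string());

    // When wording changes later, only the value is replaced; the key stays stable.
    translations.insert((7, "en-US"), "Save".to_string());
    translations.insert((7, "en-US"), "Save changes".to_string());

    println!("{}", translations[&(42, "pt-BR")]);
}
```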
One last thing: people tend to believe that translating is enough to localize. It is not. For example, RTL languages often flip the entire UI direction, not just the text direction. Different cultures also sometimes use different colors and icons.
it is the same code as you produce manually.
LLMs do not create the same code that I would, nor do they produce code at the same level that I would. Additionally, LLMs are not deterministic (normally - there are ways to manually seed some but it’s rare). Determinism has a very specific meaning. Compilers supporting reproducible builds are deterministic. LLMs producing a different output each time are not.
it is a task of a programmer to review it before publishing it.
Tell that to my coworkers. The code I have to review and contribute to is honestly insulting. Having used these tools, I’m better off writing the code myself.
Currently everything on the Internet is assumed to be free.
This isn’t true at all. Content on websites is protected by copyright laws as well.
It’s not open source? The repository doesn’t seem to include the Rust source code.
Since it’s closed source, I have no reason to believe it isn’t malicious. Open source means auditable. Closed source means “trust me bro”.
Edit: nevermind, I think I found the right repo. The package links to another one. Might want to fix that. Also, your profile on GH says it’s still closed source.
Mentioned this to the other commenter, but this doesn’t use the type system to enforce the mutual exclusivity constraint. In Rust, the main way to do that via the type system is through enums.
This doesn’t represent the mutual exclusivity through the type system (which is what the article is all about).
I love clap and I use it a lot, but the only way to represent the exclusivity through the type system in Rust is through an enum.
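For anyone who hasn’t seen the pattern, here’s a minimal sketch of what I mean (the variants are made up for illustration): each mutually exclusive option becomes an enum variant, so an invalid combination can’t even be constructed, and an exhaustive match forces every case to be handled.

```rust
// Illustrative only: mutually exclusive output modes modeled as an enum.
enum OutputFormat {
    Json { pretty: bool },
    Csv { delimiter: char },
    Plain,
}

fn describe(format: &OutputFormat) -> String {
    // Exhaustive match: the compiler guarantees exactly one variant is present.
    match format {
        OutputFormat::Json { pretty } => format!("json (pretty = {pretty})"),
        OutputFormat::Csv { delimiter } => format!("csv (delimiter = {delimiter:?})"),
        OutputFormat::Plain => "plain text".to_string(),
    }
}

fn main() {
    for format in [
        OutputFormat::Json { pretty: true },
        OutputFormat::Csv { delimiter: ';' },
        OutputFormat::Plain,
    ] {
        println!("{}", describe(&format));
    }
}
```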
I like the concept, and it’s great in TS. Unfortunately, it’s not as doable in other languages.
I’m a bit curious if it’s possible to extend clap to do this in Rust though (specifically mutually-exclusive arg groups).
Next.js is a highly opinionated framework. “Our way or the highway” is what should be expected going in. Good luck if your requirements change later on, and I hope your code is transferable to a new framework if needed.
Unfortunately, I have never been able to follow “our way” because my projects are more complex than whatever basic blog setup they document. I always end up just building my own stack around Vite. I’m also not much of a fan of fighting my tools when what I need isn’t something the tool devs already thought of.
I’ve got a simple approach to comments: do whatever makes the most sense to you, your team, and anyone else who is expected to read or maintain the code.
All these hard rules around comments, where they should live, whether they should exist, etc. exist only to be broken by edge cases. Personally I agree with this post in the given example, but eventually an edge case will come up when this no longer works well.
I think far too many people focus on comments, especially related to Clean Code. At the end of the day, what I want to see is code I can understand when I read it.
Whether you use comments at all, where you place them, whether they are full sentences, fragments, lowercase, sentence case, etc makes no difference to me as long as I know what the code does when I see it (assuming sufficient domain knowledge).