• 0 Posts
  • 154 Comments
Joined 2 years ago
Cake day: June 21st, 2023

  • Any website using CSR only can’t have an RCE because the code runs on the client. Any site using RSC, where code runs on both the server and the client, may be vulnerable.

    From what I’ve seen, the exploit is a specially crafted request from a client that effectively lets you execute anything you want (via the Function constructor; rough sketch below). If your server is unpatched and recognizes the request, it may be (and likely is) vulnerable.
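    To illustrate the general mechanism (this is not the actual exploit payload, just why the Function constructor is dangerous with untrusted input):

    ```ts
    // DANGEROUS: the Function constructor compiles an arbitrary string into
    // executable JavaScript, much like eval.
    const attackerControlled = "return process.env"; // imagine this arrived in a request body

    // If server-side deserialization forwards an untrusted string into
    // new Function(...), the attacker gets code execution with the
    // server's privileges.
    const fn = new Function(attackerControlled);
    console.log(fn());
    ```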

    I’m sure we’ll get more details over time and tools to manually check if a site is compromised.



  • I think their point was that CSR-only sites would be unaffected, which should be true. Exploiting it on a static site, for example, couldn’t result in RCE because the injected code would only execute on the client side (and is therefore not remote).

    Now, most people use, or are at least recommended to use, SSR/RSC these days. Many frameworks enable SSR by default. But using raw React with no Next.js, react-router, etc. to build a client-side-only site likely does protect you from this vulnerability (see the sketch below).
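    For reference, a minimal CSR-only React entry point looks something like this (no framework assumed; all rendering happens in the browser, so the server only ever serves static files):

    ```tsx
    import { createRoot } from "react-dom/client";

    function App() {
      return <h1>hello</h1>;
    }

    // No React code runs on a server here, so there is nothing for the
    // RSC deserialization exploit to reach.
    createRoot(document.getElementById("root")!).render(<App />);
    ```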


  • 30 is assuming you write code for all 30 days. In practice, it’s closer to 20 working days, so 75 tests per day. It’s doable on some days for sure (if we include parameterized tests), but I don’t strictly write code every day either.

    Still, I agree with them that you generally want to write a lot of tests, but volume is less important than quality and thoroughness. Using volume alone as a meaningful metric, as the author does, is nonsense.


  • TehPers@beehaw.org to Programming@programming.dev · Cloudflare goes again

    This is more likely the actual incident report:

    A change made to how Cloudflare’s Web Application Firewall parses requests caused Cloudflare’s network to be unavailable for several minutes this morning. This was not an attack; the change was deployed by our team to help mitigate the industry-wide vulnerability disclosed this week in React Server Components. We will share more information as we have it today.

    Edit: If you like reading


  • 1500 tests is a lot, but that doesn’t mean anything if the tests aren’t testing the right things.

    My experience was that it generates tests for the sake of generating them. Some are good. Many are useless. Without a good understanding of what it’s generating, you have no way of knowing which are good and which are useless.

    It ended up being faster for me to just learn the testing libraries and write my own tests. That way I was sure every test served a purpose and tested the right thing.


  • I am interested to see if these tools can be used to tackle tech debt, as often the argument for not addressing tech debt is a lack of time, or if they would just contribute to it, even with thorough instructions and guardrails.

    From my experience working with people who use them heavily, they introduce new ways of accumulating tech debt. Those projects usually end up having essays of feature spec docs, prompts, state files (all in prose of course), etc. Those files are anywhere from hundreds to thousands of lines long, and there’s a lot of them. There’s no way anybody is spending hours reading through enough markdown to fill twenty encyclopedia-sized books just to make sure it’s all up-to-date. At least, I can promise that I won’t be doing it, nor will anyone I know (including those using AI this way).


  • And it often generates a bunch of markdown docs which are plain drivel, luckily most devs just delete those before I see them.

    My favorite is when it generates a tree of the files in a directory in a README and a description for each file. How the fuck is this useful? Files will be added and removed, so there’s now an additional task to update these docs whenever that happens. Nobody will remember to do so because no tool is going to enforce that and it’s stupid anyway.

    Sure, document high level directories. But do you really need that all in the top level README?

    But for real if anyone in management is listening, take it from an old asshole who has done this job since the 80s: AI fucking sucks!

    Nothing to add. Just quoting this section because it needs to be highlighted lol.


  • Why not?

    Are you asking the author or people in general? If the author didn’t answer “why not” for you, then I can.

    Yes, I’ve used Claude. Let’s skip that part.

    If you don’t know how to write or identify defensive code, you can’t know whether the LLM generated defensive code. So in order for an LLM to be trusted to generate defensive code, it needs to do so 100% of the time, or very close to that.

    You seem to be under the impression that Claude does, but you can presumably tell whether code is written with sufficient guards and tests. You know to ask the LLM to evaluate and revise the code. Someone without experience will not know to ask that.

    Speaking now from my experience: after using Claude at work to write tests, I came out of that project with no additional experience writing tests, and had to do a personal project afterward to learn the testing library we used. Had the work project given me enough time to actually do the work, I’d have learned the library then, but that was unfortunately not the case.

    The tests Claude generated were too rigid. They didn’t test important functionality of the software. They tested exact inputs/outputs using localized output values, meaning a localization change was potentially enough to break them (see the sketch below). They tested cases that didn’t need testing, like whether certain dependency calls happened in a specific order (those calls were done in parallel anyway). It wrote some good tests, but also a lot of unneeded ones, and skipped some that were needed.
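    For illustration, a hypothetical Jest-style example of the kind of brittle assertion I mean, next to one that tests behavior (the function and values are made up):

    ```ts
    // Hypothetical function under test: formats a price for display.
    function formatPrice(cents: number, locale: string): string {
      return new Intl.NumberFormat(locale, { style: "currency", currency: "EUR" })
        .format(cents / 100);
    }

    // Brittle: pins the exact localized string. A locale-data update (or even
    // the kind of space character Intl emits before the symbol) breaks this
    // test without any real regression.
    test("formats price (brittle)", () => {
      expect(formatPrice(199, "de-DE")).toBe("1,99 €");
    });

    // Sturdier: asserts the parts that matter, not the byte-for-byte rendering.
    test("formats price (behavioral)", () => {
      const output = formatPrice(199, "de-DE");
      expect(output).toMatch(/€/);
      expect(output.replace(/[^0-9]/g, "")).toBe("199");
    });
    ```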

    As a tool to help someone who already knows what they’re doing, it can be useful. It’s not a good tool for people who don’t know what they’re doing.



  • Mixins are composition! They don’t describe what a type is (a “circle” is a “shape”, etc.) but rather what it can do (a “circle” can have its area calculated, it can be drawn, it can be serialized, etc.). Mixins in Python just happen to be implemented by adding base classes.

    Inheritance itself isn’t really a problem. It usually only becomes one when you have unnecessarily deep hierarchies, where a change in a base class can unintentionally change the behavior of dozens of classes. Similarly, it can add complexity once the hierarchy is deep enough, but only really if you throw too much into the base classes.

    Python’s ABCs are more like interfaces, though, which is why, despite Python using base classes to “inherit” them, a lot of this is really composition (putting a class together from parts) rather than inheriting and overriding implementation details from a parent/grandparent/etc. type (rough sketch below).
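    The comment is about Python, but as a rough sketch of the same idea in TypeScript (capability-style mixins composed onto a class; every name here is invented for illustration):

    ```ts
    type Constructor<T = {}> = new (...args: any[]) => T;

    // Each mixin describes a capability ("can be serialized", "can be drawn"),
    // not an identity ("is a shape").
    function Serializable<TBase extends Constructor>(Base: TBase) {
      return class extends Base {
        serialize(): string {
          return JSON.stringify(this);
        }
      };
    }

    function Drawable<TBase extends Constructor>(Base: TBase) {
      return class extends Base {
        draw(): void {
          console.log(`drawing a ${this.constructor.name}`);
        }
      };
    }

    class Circle {
      constructor(public radius: number) {}
    }

    // Composition: bolt the capabilities onto Circle; no deep hierarchy needed.
    const FullCircle = Drawable(Serializable(Circle));
    const c = new FullCircle(2);
    c.draw();
    console.log(c.serialize()); // {"radius":2}
    ```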




  • I miss the days when it was simpler as well. Back before there were botnets with hundreds of thousands of compromised routers across several countries that could throw terabits per second of data at your server for a sustained period. Back before there were thousands of bots crawling every IP and domain imaginable for exposed, abusable ports and wp-admin endpoints. Back before people started to compete on how many 9s of uptime they supported (before killing all that with LLMs anyway).

    Sadly, we can’t go back to those times. Doing so with a production service would not end well.

    The issue is not npm. Npm is a solution to a problem, even if it isn’t perfect.

    The issue is we live in a different landscape.

    Eclipse was great when I used it in the past, but its features are not exclusive to Eclipse. I can do the same inlining and extraction of code in vscode with code actions. Compile times weren’t seconds for me back then, but they are now, and Vite helps even more (though that’s comparing JS to Java).


  • I agree with the list in general, but there’s still some stuff I disagree with. For example, the very first section: “Work on more than one thing”.

    Like a CPU thread, if you’re responsible for multiple streams of work, you can deal with one stream getting blocked by rolling onto another one.

    This is written from the perspective of the developer, not the stakeholders. Unlike a CPU, you are a single thread: you cannot work on two things at the same time. What this describes is not parallelism but a form of concurrency. As with two tasks running concurrently on one CPU thread, one task is always blocked while the other runs. So while you, the developer, are always working, at least one of your tasks is always stalled.

    Instead of working on two tasks at once, pick up the second task only when the first becomes blocked (the async sketch below makes the analogy concrete).

    I believe this might be what the author was trying to convey, but the title, some wording in the section, and the closing bullet point (“Working on at least two things at a time, so when one gets blocked you can switch to the other”) contradict it and give the impression that you should always be working on two or more things at once.
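    To make the CPU analogy concrete, a toy sketch of cooperative concurrency on a single thread (everything here is invented for illustration):

    ```ts
    // One "developer", two tasks. Control switches only at await points,
    // i.e. only when the current task blocks; nothing runs in parallel.
    const wait = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

    async function taskA() {
      console.log("A: working");
      await wait(100); // A blocks (say, waiting on review); B gets the thread
      console.log("A: unblocked, finishing up");
    }

    async function taskB() {
      console.log("B: picked up only because A blocked");
      await wait(50);
      console.log("B: done");
    }

    // Concurrency, not parallelism: at any instant exactly one task runs.
    Promise.all([taskA(), taskB()]);
    ```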

    use as normal a developer stack as possible.

    This I mostly agree with, though I disagree with the wording. You should use the same tools as the rest of your team when the tool matters. Using different Git interfaces, however, shouldn’t matter, and I’d argue the same holds for editors as long as they all have the features the project needs.

    For application work, some variety in dev environments can even help you find bugs sooner, since developing in different environments exercises those environments naturally. For services, this is less relevant.


  • This is a super interesting approach to JS. Conceptually, it’s really cool. In practice, I don’t think I’d use it (at least not for any project I can think of), because explaining it to others would be difficult and representing complex logic as “commands” sounds a bit painful.

    In a weird way, it reminds me of actor frameworks though. The difference is of course the separation of effects.

    One thing I wish the author had done, though, is add some type hints. I know the article is about JS, but even some JSDoc types would have helped; it was a bit hard at first to know what the input types to these functions were (a rough typed sketch of what I mean below).
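    As a guess at the shape of the pattern (this is not the article’s actual API, just a typed sketch of “effects as data”):

    ```ts
    // Commands are plain data describing effects; the logic stays pure.
    type Command =
      | { kind: "log"; message: string }
      | { kind: "fetchUser"; id: number };

    // Pure: describes what should happen without doing any of it.
    function greetUser(id: number): Command[] {
      return [
        { kind: "fetchUser", id },
        { kind: "log", message: `greeting user ${id}` },
      ];
    }

    // Impure interpreter: the only place effects actually run.
    async function run(commands: Command[]): Promise<void> {
      for (const cmd of commands) {
        switch (cmd.kind) {
          case "log":
            console.log(cmd.message);
            break;
          case "fetchUser":
            await fetch(`https://example.com/users/${cmd.id}`);
            break;
        }
      }
    }

    run(greetUser(42));
    ```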


  • Yep. This was the difference between a silent, recoverable error and a loud failure.

    It seems like they’re planning to remove all potential panics, based on the end of the article. That would be a good idea considering the scale at which the service is used.

    (Also, for anyone who’s not reading the article, the unwrap caused the service to crash, but wasn’t the source of the issues to begin with. It was just what toppled over first.)


  • monitoring how they are used is good to identify if people are actually more productive with it

    Unfortunately, many jobs skipped this step. The marketing on AI tools should be illegal.

    Far too many CEOs are promised that their employees will do more with less, so of course they give their employees more to do and make them use AI, then fire employees because the remaining ones are supposed to be more productive.

    Some are. Many aren’t.

    As with your comparison, the issue is that it’s not the right tool for every job, nor the right tool for everyone. (Whether it’s the right tool for anyone is another question, of course, but some people feel more productive with it at times, so I’ll leave it at that.)

    Anyway, I’m fortunate enough to be in a position where AI is only strongly encouraged, not forced. My friend was not so fortunate. He used it because he had to, despite it being useless to him. Then he, a chunk of his management chain, and half his department were fired, and nobody was hired to replace them.



  • Rust currently isn’t as performant as optimized C code, and I highly doubt that even unsafe rust can beat hand optimized assembly — C can’t, anyways.

    A bit tangential, but to address this: nothing beats the most optimized assembly code. At best, programming languages can only hope to match it.

    Rust does have a macro for inlining assembly into your program (asm!), but it’s horribly unsafe and not super easy to work with.

    Rewriting ffmpeg in Rust is, as you say, not a solution here.