• 0 Posts
  • 151 Comments
Joined 1 year ago
Cake day: July 10th, 2024


  • Zacryon@feddit.org to Asklemmy@lemmy.ml · What's a Tankie?
    13 days ago

    Tankie is a pejorative label generally applied to authoritarian communists, especially those who support or defend acts of repression by such regimes, their allies, or deny the occurrence of the events thereof. More specifically, the term has been applied to those who express support for one-party Marxist–Leninist socialist republics, whether contemporary or historical. It is commonly used by anti-authoritarian leftists, anarchists, libertarian socialists, left communists, social democrats, democratic socialists, and reformists to criticise Leninism, although the term has seen increasing use by liberal and right‐wing factions as well.

    https://en.wikipedia.org/wiki/Tankie


  • Yes, a computer can currently be made to appear spiteful and horny. But in theory (more than in practice currently) a computer could also actually be made spiteful and horny rather than merely appearing so.

    Drawing an analogy: what are our brains if not biochemical computers? Being horny or spiteful is then either emergent from the computer’s structure, or even directly encoded in its architecture.

    A ‘computer’ is first and foremost a theoretical concept that can be realized in different ways. We have already built computers from various substrates, not restricted to analog or digital electronics. We have also made purely mechanical computers, we’ve incorporated biological elements into digital computers, and biological computers are already a thing. See for example: https://en.wikipedia.org/wiki/Biological_computing
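
    To make the substrate independence concrete, here is a toy sketch of my own (not from the linked article): a minimal Turing-style machine in Python. The machine is nothing but a transition table; whether that table is realized in silicon, gears, or DNA is irrelevant to the computation it defines.

    ```python
    # A minimal Turing-style machine: the "computer" is just a rule
    # table; nothing about it dictates silicon, gears, or biochemistry.

    def run(tape, state, rules, head=0):
        """Step the machine until it reaches the 'halt' state."""
        cells = dict(enumerate(tape))        # sparse tape, blank = '_'
        while state != "halt":
            symbol = cells.get(head, "_")
            write, move, state = rules[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells)).strip("_")

    # Rules for incrementing a binary number, head starting at the
    # rightmost bit: flip trailing 1s to 0, the first 0 (or blank) to 1.
    rules = {
        ("inc", "1"): ("0", "L", "inc"),
        ("inc", "0"): ("1", "L", "halt"),
        ("inc", "_"): ("1", "L", "halt"),
    }

    print(run("1011", "inc", rules, head=3))  # prints 1100 (11 + 1 = 12)
    ```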

  • There was a similar study/survey recently, by Microsoft if I remember correctly, where similar results were found. In my experience, LLM-based coding assistants are pretty okay for low-complexity tasks and for creating boilerplate code, especially when the task does not require a deeper understanding of the system architecture.

    But the more complex the task becomes, the harder they start to suck and fail. This is where the time drag begins. They also rather often produce common mistakes or outdated coding approaches instead of following newer standards, and they deviate from the given instructions way too often. And if you do not check the generated code thoroughly, which can happen if the code “looks okay” at first glance, then tracking down the resulting bugs and error sources can become quite cumbersome.

    Debugging is where I have wasted most of my time with AI assistants. While there is some advantage in having a somewhat more capable rubber duck, it is usually not really helpful in fixing stuff. Either the error/bug sources are completely missed (even some beginner mistakes), or it applies band-aid solutions rather than addressing the root cause, or, worst of all, it is very stubborn about the alleged cause of the problem (possibly combined with forgetting earlier debugging findings, resulting in a tedious reasoning and chat loop). I have found myself arguing with the machine more often than I’d like. Hallucinations or unfounded fix hypotheses regularly make this worse.
    However, letting the AI assistant add some low-level debug code to help analyze the problem has often been useful in my experience. But this requires clear and precise instructions; you can’t just hope the assistant will cover all important values and aspects on its own. A sketch of what I mean follows.
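
    As a hypothetical illustration (the function and variable names here are made up, not from any real project): instead of asking to “add some debug output”, I name the exact values, e.g. “log the queue length, the retry counter, and the payload before and after normalize() in process_batch()”, and expect something like:

    ```python
    # Hypothetical sketch of instrumentation produced from a precise
    # instruction; process_batch() and normalize() are invented names.
    import logging

    logging.basicConfig(level=logging.DEBUG, format="%(name)s: %(message)s")
    log = logging.getLogger("batch-debug")

    def normalize(item):
        """Stand-in for the real transformation under suspicion."""
        return item.strip().lower()

    def process_batch(batch, queue, retries):
        # Log exactly the values named in the instruction: queue length,
        # retry counter, and the payload before/after normalize().
        log.debug("queue=%d retries=%d", len(queue), retries)
        for item in batch:
            log.debug("raw payload: %r", item)
            item = normalize(item)
            log.debug("normalized:  %r", item)
            queue.append(item)

    process_batch(["  Foo", "BAR "], queue=[], retries=0)
    ```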

    When I ask the assistant to logically go through some lines of code step by step, possibly using an example, to nudge it towards seeing how its reasoning was wrong, it’s funny to see, e.g. with Claude, how it first says something like “This works as intended!” and a moment later “Wait… this is not right. Let me think about it again.”

    This becomes less funny for very fundamental stuff. There were times when the AI assistant told me that 0.5 is greater than 0.8, for example, which really shows the “autocorrect on steroids” nature of LLMs rather than an active critical thinking process. This is bad, obviously. But it also keeps jobs for humans in various fields of IT safe.
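
    My takeaway from that: any concrete numeric claim from the assistant is cheaper to verify in code than to argue about in chat. A trivial sanity check of my own:

    ```python
    # The assistant once claimed 0.5 > 0.8; one line settles it.
    assert 0.5 < 0.8
    print(max(0.5, 0.8))  # 0.8 — arithmetic belongs in code, not in chat
    ```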

    Typing out everything during the whole conversation is naturally also really slow, especially when writing more than a few sentences to provide context.

    Where I do find AI coding assistants mostly useful is in exploring APIs that I do not know well, or code written by others that is possibly under-documented. (Which is unfortunately really common. Most devs don’t seem to like writing documentation.)
    Generating documentation for such code, or my own, is also pretty good in most cases, but it too tends to contain mistakes or miss important mechanisms.

    Overall, in my experience, AI assistance gives a mild productivity boost for tasks with low complexity and low contextual knowledge requirements. The assistants are useful for exploring code and writing documentation, but I cannot really recommend them for debugging. It is important to learn how to use such AI tools precisely in order to save time instead of wasting it, since as of now they are not really capable of much.


  • The problem is that you usually don’t get much of the company’s profit share, since the ones higher up the hierarchy put most of the profits into their own pockets while effectively exploiting the labor of others. So you could actually benefit from an “everyone gets the same amount” policy.