• talkingpumpkin@lemmy.world
      arrow-up
      7
      arrow-down
      1
      ·
      2 days ago

      Hopefully, in time, people will learn that articles about LLM-generated stuff are as interesting as articles about what autocomplete suggestions vscode gives for specific half-written lines of code

    • codeinabox@programming.devOP
      English
      arrow-up
      5
      arrow-down
      3
      ·
      2 days ago

      Even when I share these articles in the AI community, they get voted down. 🫤 I know these articles aren’t popular, because there is quite a lot of prejudice against AI coding tools. However, I do find them interesting, which is why I share them.

        • codeinabox@programming.devOP
          English
          arrow-up
          2
          ·
          1 day ago

          I’m open to a conversation discussing the pros and cons of large language models. Whilst I use AI coding tools myself, I also consider myself quite a sceptic, and often share articles critical of these tools.

          • ulterno@programming.dev
            English
            arrow-up
            0
            ·
            9 hours ago

            Tools that are closer to logic are better for helping with coding. So an expert system is better than a neural network for making code-helper tools, although its output would be more limited and it wouldn't take human-language input.
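            To make the contrast concrete, here is a minimal sketch of what an expert-system-style code helper looks like (the rules and messages are made up for illustration): every suggestion traces back to an explicit, hand-written rule, so the output is narrow but fully explainable, unlike an LLM's.

```python
import re

# Illustrative rule base (hypothetical): each entry is an explicit,
# auditable pattern -> advice pair, not a learned statistical guess.
RULES = [
    (re.compile(r"==\s*None"), "use 'is None' instead of '== None'"),
    (re.compile(r"except\s*:"), "avoid bare 'except:'; catch specific exceptions"),
]

def suggest(line: str) -> list[str]:
    """Return the advice from every rule whose pattern matches the line."""
    return [msg for pattern, msg in RULES if pattern.search(line)]

print(suggest("if x == None:"))  # -> ["use 'is None' instead of '== None'"]
print(suggest("y = 1"))          # -> []
```

The limitation the comment mentions is visible here: the tool can only ever flag what someone encoded as a rule, and it cannot be asked anything in natural language.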

            Using an LLM for this stuff means telling humans not to put the effort into making logic themselves (hence "reducing their cognitive load"), and instead using something that takes a lot more energy (as in fuel) to produce that logic.

            What we are currently calling AI is a fuzzy system abstracted onto a logical system. And now we are trying to make that abstracted fuzzy system build yet another abstraction on top of itself that does logic. Contrast this with the human brain, which is a fuzzy system made directly out of chemical (and quantum, as some studies would state) processes, simply creating a logical system on top of itself.


            Each level of abstraction has a cost.

            1. If you make an IC with a fixed instruction flow (i.e. it does only a single thing), it won't have to load instructions and will only load data and parameters, which will make it much more efficient at that specific process.
            • In this case, the loading of variable data and parameters will be the slowest part.
            2. Now, you can specify a set of instructions, which are then implemented in hardware. Then, when you load instructions from a variable input (ROM, perhaps), the instruction flow can be changed on the fly, but now the loaded instructions are an abstraction and are actually loaded parameters.
            • In this case, loading instructions becomes as slow as loading parameters, and so you see preloading/prefetching (and further, branch prediction) to make this part faster.
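            The second point, that "loaded instructions are actually loaded parameters", can be sketched with a toy stored-program interpreter (names and opcodes here are invented for illustration): the program itself is just data fetched from memory, exactly like the operands.

```python
# Toy stored-program machine: the "instructions" are ordinary data.
# Fetching an opcode and fetching an operand are the same kind of load.
def run(program, x):
    for op, arg in program:   # instruction fetch is just reading data
        if op == "add":
            x += arg
        elif op == "mul":
            x *= arg
        else:
            raise ValueError(f"unknown opcode: {op}")
    return x

print(run([("add", 2), ("mul", 3)], 1))  # -> 9
```

Swapping in a different program list changes the machine's behaviour with no hardware change, which is the flexibility bought at the cost the comment describes.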

            One nice example of abstractions is interrupts:

            • Earlier you had polling, which meant that the CPU would have to check the corresponding data line every n clocks, determined by the polling rate, and this would have to be written by the software programmer.
            • With interrupts, you now have a separate unit doing the polling (which is much more efficient, because that is the only thing it is made for) and storing any state changes in a buffer, which then, depending upon the type of interrupt, can be taken by the program (which is still polling, but at a much slower rate) and acted upon, or may be handled using interrupt routines.
            • There is a similar thing you do in the case of multithreaded code, where a loop running in thread A needs to be interrupted by thread B: B changes the value of some variable, which A checks. Now, if there is a language that simplifies this interruption process, there will be some runtime doing a similar job for you. This is another level of abstraction, which requires extra effort at runtime.
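            That last pattern, thread A looping until thread B flips a shared variable, looks like this in Python (a minimal sketch; here threading.Event is the runtime-provided abstraction doing the flag-checking for us):

```python
import threading
import time

stop = threading.Event()  # the shared "variable" B will set
counter = 0

def worker():
    """Thread A: keeps working, polling the flag on every iteration."""
    global counter
    while not stop.is_set():
        counter += 1
        time.sleep(0.001)

a = threading.Thread(target=worker)
a.start()
time.sleep(0.05)
stop.set()   # thread B (here, the main thread) interrupts A
a.join()
print(counter)  # some positive number of iterations completed before the stop
```

Note that A is still polling, just as the comment says; the Event object only hides the mechanics behind a runtime abstraction.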

            One of the heaviest examples of abstraction I can think of is what is done by programs that simulate other processors: things like the tools provided by FPGA manufacturers that let you emulate the logic inside the processor, and Mentor tools, which even have simulation starting from the user designing the transistors.
            These are usually not used anywhere other than for testing, debugging, prototyping and the like.
            Virtual machines made for emulation are a bit different from these, but are pretty heavy nonetheless, and one wouldn't consider, say, emulating a Nintendo 3DS on hardware of similar performance for daily use.

          • itkovian@lemmy.world
            arrow-up
            1
            ·
            1 day ago

            LLMs are just models that predict series of tokens (words) probabilistically related to the query. They are not really meant to write entire programs, with or without human intervention. At best, they can generate boilerplate or maybe simple stuff. But they can never replace human programmers.