Test scores across OECD countries peaked around 2012 and have declined since. IQ scores in many developed countries appear to be falling after rising throughout the twentieth century.

Nataliya Kosmyna at MIT's Media Lab began noticing changes around two years ago, when strangers started emailing her to ask whether using ChatGPT could alter their brains. In June she posted a study tracking brain activity in 54 students writing essays. Those using ChatGPT showed significantly less activity in networks tied to cognitive processing and attention than students who wrote without digital help or used only internet search engines. Almost none could recall what they had written immediately after submitting their work. She received more than 4,000 emails afterward, many from teachers who reported students producing passable assignments without understanding the material.

A British survey found that 92% of university students now use AI, and roughly 20% have used it to write all or part of an assignment. Independent research has found that more screen time in schools correlates with worse results. Technology companies have designed products to be frictionless, removing the cognitive challenges brains need in order to learn. AI now allows users to outsource thinking itself.

  • SocialMediaRefugee@lemmy.world
    20 hours ago

    You could say almost anything is a crutch that removes a burden from our minds. Calculators remove the burden of doing basic math, and map apps remove the burden of maintaining a mental map. Both can produce a person who can't independently do basic calculations or navigate, and who doesn't understand the methodology behind the calculations. Whether that's a problem is open to debate.

    Using AI to avoid learning, though, is a problem, because it amounts to fraud: "I claim I understand this, but I don't."

    • batmaniam@lemmy.world
      44 minutes ago

      This is a great conversation, because I'm one of those people who's terrible at arithmetic but quite good at math. As in: I can look at a function, visualize it in 3D space, and see which maxima, minima, and surfaces are dominated by which terms, but don't ask me to tally a meal check. I'd be useless at applying any math without a calculator.

      Similarly, there are a lot of engineers out there who use CAD extensively and would probably not be engineers if they had to do drafting by hand.

      The Oatmeal did a comic that distilled this for me, about why they didn't like AI "art". They made the point that in making a drawing, there are a million little choices reconciling what's in your head with what you can do on the page. Whether they come from the medium, from what you're good at drawing, or from whatever else, it's those choices that give the work "soul". Same thing for writing. Those choices are where learning, development, and style happen, and they're exactly what generative AI takes away.

      That helped crystallize for me the difference between a tool and autocomplete on steroids.

      Edit: to add: your statement "I claim to understand but don't" hits it on the head, and it's similar to why you have to be careful about plagiarism when citing academic review papers. If you write YOUR paper in a way that agrees with the review but discusses the paper the review was referencing, and, even accidentally, skip over the fact that the conclusion you're putting forward comes from the review rather than the paper you're both citing, that's plagiarism. The notion is that you misrepresented their thoughts as your own. That is basically ALL generative AI.