When I was young and starting out with computers, programming, BBSes, and later the early internet, technology was something that expanded my mind, helped me research, learn new skills, meet people, and have interesting conversations. It was something decentralized that put power into the hands of the little guy, who could start his own business venture with his PC or expand his skillset.
Where we are now with AI, the opposite seems to be happening. We are asking AI to do things for us rather than learning how to do things ourselves. We are losing our research skills. Many people are talking to AIs about their problems instead of to other people. And these systems will take away our jobs and centralize all power in the hands of a few billionaire sociopaths with robot armies to carry out whatever nefarious deeds they want.
I hope we somehow make it through this part of history with some semblance of freedom and autonomy intact, but I’m having a hard time seeing how.


Every notable invention associated with language (and communication in general) has elicited similar reactions. And I don’t think Plato is wholly wrong here. With each level of abstraction from the oral tradition, the social landscape of meaning is further externalized. That doesn’t mean the personal landscape of meaning must be. AI only does the thinking for you if that’s what you use it for. But I do fear that that’s exactly what it will largely be used for. These technologies have been coming fast since radio, and it doesn’t seem like society has time to adapt to one before the next arrives.
There’s a relevant Nature article that touches on some/most of this.
I see these thought-terminating cliches everywhere, and nowhere do their posters pause for a moment to consider the specifics of the actual technology involved. The people forewarning about this stuff were correct about, for instance, social media, but who cares, because Plato wasn’t a fan of writing, we rode horses before we drove cars, the term Luddite exists, etc., etc.
I talked about the way in which Plato’s concerns were valid and expressed similar fears about misuse. The linked article is about how to approach the specific technology.
You didn’t say his concerns were valid; you said you thought he was not “wholly wrong”. Regardless, Plato being a crank about writing proves only that cranks existed before writing. It does nothing to help you interrogate the problems mentioned, or even to set you down the path toward interrogating them (which is why I categorized it as a thought-terminating cliche).
The article you referenced is basically a long-form version of your post, and it has a perceivable bias toward the viewpoint that every newly introduced technology can or will inevitably result in “progress” for humanity as a whole, regardless of the methods of implementation or the incentives in the technology itself.
Trumpeting this perspective implies, perhaps unknowingly, that this is an instance of skub (https://pbfcomics.com/comics/skub/): an agnostic technology or inanimate object that “two sides” are getting emotionally charged about. It is not. LLMs (and their “agentic” offspring) are programmed, both deliberately and unwittingly, to be biased. There are real concerns about this particular set of technologies that posting a quote from an ancient tome does not dismiss.
I mean, it sounds like you’re mirroring the paper’s sentiments too. A big part of Clark’s point is that interactions between humans and generative AI need to take into account the biases of the human and the AI.
And just as I am not calling Plato a crank, neither is Clark. That’s not the point of using the quote.
I don’t think anyone is claiming that new technology necessarily leads to progress that is good for humanity. It requires a great deal of honest effort for society to learn how to use a new technology wisely, every time.
Maybe you are not intending it, but your usage of the quote comes across as the same thought-terminating cliche, basically the one summarized in the partial Bible citation “there is nothing new under the sun”.
You’re not saying Plato was a crank, but I am. He definitely had some wisdom to impart (especially given his time and place in history), but his remarks about writing are ridiculous and crank-like (and made even more ridiculous by the fact that we only know what they are because someone wrote them down).
The paper waffles around a bit as to whether or not the result will be “good” overall, and it tries to be as adept at fence-sitting as Dwight Schrute from The Office (https://getyarn.io/yarn-clip/6b3c335d-fd65-4db0-aa70-01c70f312b5a), but its position was very apparent even from a short skim of the article, as well as from the way you keep referencing it here.
I’d argue that a critical eye toward a specific new technology does not require someone to go back through time immemorial and compare its critics to the naysayers of the invention of the wheel.
Since you seem to have an affinity for Greek philosophers:
“It is the mark of an educated mind not to believe everything you read on the Internet.” - Aristotle
If you put [brackets] around the word before your (parened link), it’ll make it an actual link.
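For example, reusing the skub link from earlier in the thread (and assuming this forum follows standard Markdown link syntax, which the bracket tip suggests), writing

```
[skub](https://pbfcomics.com/comics/skub/)
```

renders as a clickable link labeled “skub” instead of a bare URL in parentheses.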
Eh, I prefer people to know where they’re going before clicking without having to hover first.