Every day, someone in a position of power tries ChatGPT for the first time and goes “Holy shit! The computer is actually talking to me! This is the biggest thing since the invention of the telegraph!”
Then they start writing memos and press releases without spending the additional hour with it that it takes the rest of us to realize “oh, it’s actually just full of shit.”
ELIZA effect in full swing.
The surest sign that someone’s job needs to be deleted is if they feel their job can be done by AI. If your work can be done by an LLM, you’re simply not doing work that’s worth doing.
I disagree. If anyone genuinely thinks that their own job can be done by an LLM, either:
A) that job should be streamlined out of existence
B) they have a fundamental misunderstanding of what an LLM is capable of achieving. For example, their job involves meaningless drudgery, and they don’t know enough about LLMs to realise that LLMs don’t even have the capacity for thought necessary to consistently follow simple heuristics without threatening nuclear annihilation.
Imagine being retirement age and being in a position of power to issue big press releases like this. I bet it feels really great.
I’ll trust an actual Computer Scientist who knows how Machine Learning works before I trust an economist on the subject of AI.
Dr. Mike Pound from Computerphile had a great interview with Cyber Security Youtuber David Bombal on this topic and it was nice to have someone break down the hype and reality around AI so concretely. Here is said interview.
EDIT: spelling, wording.
Mike Pound is amazing
There’ll be no shortage of job offers in the trenches with how things are going
Yeah nah. The tech is pretty impressive, but it can’t replace many more entry-level jobs than pre-AI technology already could. Treat it like a tool and find use cases that make sense. I’d like to see small, efficient, specialized local models to help with basic or repetitive work.
It’s so bizarre working in software development for a non-tech company.
Management is like “can you use it to automate X?” And my answer is almost always “No. It will do an unreliable job of that. But if you want X automated just TELL me that’s what you want and I can seriously automate it for you in a day or two by just writing a tool”
Nope.
It blows my mind how much Management doesn’t give two fucking shits about the RESULT. They ONLY want to be able to tell shareholders that something was accomplished USING AI.
Oh, and for what it’s worth… since I’ve been at this company, I’ve had the same question asked of:
- The Blockchain
- The Metaverse
Getting questions which are solutions in search of a problem has been a harbinger of a hype train heading for a derailment
I feel you. Can’t wait for AI to go the way of the blockchain :)
My work deals with parts that get damaged in shipping pretty often. Every shipper has a different way they want things formatted and different asinine conventions for descriptions, so describing damage without it turning into an email chain is a pain. So our IS team trained a model to take in 3-4 images of part damage plus which shipper it’s from, and it generates everything for us to review and send. Saves a bunch of busywork for exactly that.
That’s a good use case! Provided it doesn’t hallucinate something, but you can always have automated validation steps.
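An automated validation step like the one mentioned above can be a plain deterministic check on the model’s output before a human reviews it. A minimal sketch in Python, where the field names, the shipper list, and the mismatch checks are all assumptions for illustration, not the actual system described in the comment:

```python
# Hypothetical sketch: sanity-check a model-generated damage report
# before it reaches human review. All names here are illustrative.

KNOWN_SHIPPERS = {"FedEx", "UPS", "DHL"}  # assumed set of supported shippers
REQUIRED_FIELDS = ("shipper", "part_number", "description")

def validate_report(report: dict, expected_part_number: str) -> list[str]:
    """Return a list of problems found; an empty list means the report passes."""
    problems = []
    # Catch missing or empty fields the model failed to fill in.
    for field in REQUIRED_FIELDS:
        if not report.get(field):
            problems.append(f"missing field: {field}")
    # Catch a hallucinated shipper name.
    if report.get("shipper") not in KNOWN_SHIPPERS:
        problems.append(f"unknown shipper: {report.get('shipper')!r}")
    # Catch the model describing a different part than the one submitted.
    if report.get("part_number") != expected_part_number:
        problems.append("part number does not match the submitted part")
    return problems
```

The point is that the LLM only drafts the text; cheap, boring code decides whether the draft is even eligible for review.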