Caveat - China Daily is owned and operated by the Chinese government/CCP. But the article is interesting in itself, and its official endorsement is interesting, too.
I’m still surprised at the rate at which LLMs make simple mistakes. I was recently using ChatGPT to research biographical details of James Joyce’s life, and it gave me several basic facts (places he lived and was educated) at variance with what is clearly stated in the Wikipedia article about him.
I wonder whether the US & EU will bifurcate on AI adoption for government and administration, with the EU opting for open-source models?
US models don’t seem interested in complying with EU law like the AI Act or GDPR.
If so, 5 or 10 years down the line this could lead to very fundamental differences in how the two territories are governed. There may be all sorts of unexpected effects arising from this.
The person making this claim, Miles Brundage, has a distinguished background in AI policy research, including being head of Policy Research at OpenAI from 2018 to 2024. Which is all the more reason to ask skeptical questions about claims like this.
Which economists agree with this claim? (Where are the citations/sources to back it?)
How will it come about politically? (Some countries are so polarised that they seem like they’d prefer a civil war to anything as left-wing as UBI.)
What would inflation be like if everyone had $10K UBI? (Would eggs be $1,000 a dozen?)
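A rough back-of-envelope sketch on that last point (my own figures, not from the article: roughly 260 million US adults and roughly $29 trillion of US GDP in 2024) shows the scale involved:

```python
# Back-of-envelope only; the population and GDP figures below are rough assumptions.
adults = 260e6              # approximate number of US adults
ubi_per_month = 10_000      # the $10K/month figure being claimed

annual_cost = adults * ubi_per_month * 12
us_gdp = 29e12              # US GDP, roughly, in 2024

print(f"Annual cost: ${annual_cost / 1e12:.1f} trillion")   # ~$31 trillion
print(f"Share of current GDP: {annual_cost / us_gdp:.0%}")  # ~108%
```

In other words, the transfer alone would exceed today’s entire GDP, so the claim only makes sense, if at all, under the enormous AI-enabled growth being assumed.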
All the same, I’m glad he’s at least brave enough to seriously face what most won’t. It’s just such a shame that, because economists won’t face this, we’re left with source-light discussion that doesn’t rise much above anecdotes and opinions.
Former OpenAI researcher says a $10,000 monthly UBI will be ‘feasible’ with AI-enabled growth
its eligibility criteria to those with Italian parents or grandparents.
Those are the existing criteria for Irish passports. I’d guess the number of Americans with at least one grandparent born in Ireland or Italy must run to tens of millions.
In terms of advancing software, it’s extremely inefficient.
It amazes me how their BS on ‘innovation’ has infected broader culture and politics.
Look how little fundamental innovation there is in health, education and housing. All getting more expensive and out of reach.
Their hope seems to be to invent something proprietary and hypey that gets them bought up, not to actually build something functional.
They all seem to be chasing the dream of being unicorns (for the uninitiated reading this: billion-dollar startups hoping to become the next Google/Meta, not magical horses).
Do American VCs even bother with start-ups that want to be small or medium-sized firms and have a solid case for making a few hundred million dollars every year?
Yes I did, and corrected it.
Oddly, the number of new industrial robots installed in 2024 dropped from the year before for each of the EU, Japan and the US, too. Manufacturing with robots means cheaper goods, and the EU, Japan & the US are already feeling the crunch. They don’t seem to have any answer to the flood of good-quality, cheap electric vehicles that have made China the world’s biggest car maker. These pressures are only going to get worse and worse.
2024 New Industrial Robots
290,000 - China
86,000 - EU
43,000 - Japan
34,000 - US
Chinese factories keep up robot roll-out despite global decline
I’m glad this helps people with paralysis, but I can’t help seeing the sci-fi dystopian side of tech like this.
What if some people are forced to have their inner thoughts decoded against their will? It sounds like just the thing some authoritarian thought police would use to root out their enemies.
Does that sound far-fetched? I’m sure if it were suggested as an upgrade to existing lie-detecting polygraph tests, lots of people would approve. Slippery slope.
If you think of it as a pet alternative, it’s not so expensive. Food & vet bills for cats & dogs can easily be $1,000 per year.
Here are a few; there are many more.
AI deception: A survey of examples, risks, and potential solutions
Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
Compromising Honesty and Harmlessness in Language Models via Deception Attacks
The Traitors: Deception and Trust in Multi‑Agent Language Model Simulations
Detecting Malicious AI Agents Through Simulated Interactions
The idea being pushed forth by YOUR link is that there is a concerted effort by an “AI” to push something subliminal.
Your assertion is contradicted by real world facts. There is lots of research showing AI engaging in deceptive and manipulative behavior.
Now it has another method to do that. As the article points out, we don’t know why it’s doing this. But that’s not the point. The point is that it can, without us knowing.
Subliminal refers to stimuli that are presented below the threshold of conscious perception, meaning they are not consciously recognized but can still influence the mind or behavior.
It’s not subliminal to the AI, but then again, AI isn’t analogous to a human brain. But it is correct to say it’s subliminal to the humans building and designing the AI.
Interestingly, in game theory, when everyone can lie and go undetected, the outcomes are almost always bad for everyone, ranging from inefficiency to collapse.
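Here’s a minimal sketch of that intuition (my own illustration, not taken from any of the papers above; the payoffs, the `detect_prob` parameter and the trust-erosion rule are all assumptions): a repeated buyer–seller game where honest trades benefit both sides, lies benefit only the liar, and the buyer can either punish identified liars or, when lies are undetectable, only lose general trust in everyone.

```python
import random

def simulate(detect_prob: float, rounds: int = 500, n_sellers: int = 20,
             dishonest_frac: float = 0.5, seed: int = 0) -> float:
    """Average joint payoff per round for a given probability of catching a lie."""
    rng = random.Random(seed)
    dishonest = set(rng.sample(range(n_sellers), int(n_sellers * dishonest_frac)))
    blacklist = set()       # sellers the buyer has caught lying
    trust = 1.0             # buyer's general willingness to trade at all
    total_payoff = 0.0
    for _ in range(rounds):
        candidates = [s for s in range(n_sellers) if s not in blacklist]
        if not candidates or rng.random() > trust:
            continue                              # no trade: the market has (partly) collapsed
        seller = rng.choice(candidates)
        if seller in dishonest:
            total_payoff += 2.0 - 1.0             # liar gains 2, buyer loses 1: joint payoff 1
            if rng.random() < detect_prob:
                blacklist.add(seller)             # caught: only this seller is punished
            else:
                trust = max(0.0, trust - 0.05)    # burned anonymously: trust in everyone erodes
        else:
            total_payoff += 1.0 + 1.0             # honest trade: both gain 1, joint payoff 2
    return total_payoff / rounds

if __name__ == "__main__":
    for p in (0.9, 0.5, 0.0):
        print(f"detection prob {p:.1f} -> avg joint payoff {simulate(p):.2f}")
```

With high detection the liars get weeded out and trade keeps its full value; with no detection the buyer eventually stops trading with anyone, which is the collapse outcome.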
I think you can find ethically good, bad and gray uses for AI.
The top commenter here mentions YouTube content creators using it. Most of them are on YT to make money. So it’s a rational, smart choice to let AI do your writing if it makes you more efficient and means you can earn more.
Sounds more like YouTube “content producers” are likely using AI to generate the words they read aloud.
I’ve noticed this too, and it sounds like an example of what Marshall McLuhan was talking about when he said “The Medium is the Message”. The form of a medium (e.g., TV, print, digital) has a more profound effect on society than the actual content it carries.
Stupider people with weaker senses of self are more likely to use chatgpt.
No. AI use correlates with being younger and more educated.
It would have been more accurate to say well-paying jobs for all of them.
Won’t there be insurance for this?
If companies like FedEx can bear the cost of liabilities for huge numbers of human drivers, doesn’t that suggest the burden will be far less for robo-vehicle companies?