Confuses them?
I think it’s quite possible to become confused if you’re used to software that, bugs aside, behaves almost completely predictably, and then get a feature marketed as “intelligence” that suddenly gives you unpredictable and sometimes incorrect results. I’d definitely be confused if the reliable tools I do my work with suddenly developed a mind of their own.
Well, that certainly would confuse you, yes.
Have you recently vibe-edited a Microsoft Copilot Excel sheet on your AI PC (but actually in the cloud)?
🤮
The majority of computer users aren’t particularly computer savvy.
I suppose. It just seemed like putting the blame on the consumers rather than on greedy, short-sighted executives.
It’s not; it is, after all, entirely true. Where the executives actually deserve blame is in not understanding that the consumer is ignorant beyond reasoning and dumb as rocks.
They are trying to push an unreliable product onto idiots, while anyone with two brain cells knows not to trust it.
That makes it a twice-failed concept from the start.
A good product should be usable by even a fool and malleable enough to be used by anyone.
It’s just a softer thing to say than ‘a lot of people hate AI and it’s alienating potential customers’. They can’t come out and say that out loud: they don’t want to piss off Microsoft too much, and they aren’t going to try to do NPU-free systems (it’s not really possible). They aren’t going to do anything to ‘fight back’ against the AI that people hate (they can’t), so their best explanation for pulling back from a toxic branding strategy is ‘people just don’t care’ rather than ‘people hate this thing that we are going to keep feeding them’.
But if they need to rationalize the perspective: an “AI” PC does nothing to change the common user’s experience with the AI things they already know. It doesn’t change ChatGPT or Opus or anything similar; that stuff is entirely online. So for the common user, all an ‘AI’ PC means is a few Windows gimmicks that people either don’t care about or actively complained about (Recall providing yet another way for sensitive data to get compromised).
In terms of “AI” as a brand value, the people most bullish about AI are executives who like the idea of firing a bunch of people, and who, incidentally, would then want to buy fewer PCs as a result. So even where you can find AI-enthusiastic people, they still don’t want AI PCs.
For most people, their AI experience has been:
- News stories about companies laying off thousands of people, or planning to, because of AI: AI is the enemy.
- News stories about some of those same companies having to rehire those people because the AI fell over: AI is crap.
- Their feeds being flooded with AI slop and deepfakes: AI is annoying.
- Their Google searches now putting an AI result up top that, at best, is about the same as clicking the top non-sponsored link, except that it frequently botches the information entirely: AI is kind of pointless.
Those who have actually had a positive AI experience already know it has nothing to do with whether the PC is ‘AI’ or not. So it’s just a brand liability, not a brand value.