Geekbench is the most useless single and multicore bench in existence. If I wanted to know how fast my computer opens a browser window it would be great though.
Why do you say that? GB seems like a relatively balanced benchmark.
I also like Cinebench, but I find it’s better for specific use cases. I have my own DIY CPU gaming benchmark via running an old single-thread game (Cities in Motion 1) with free look and a custom map size that stretches the limits of the engine, but that’s a personal thing.
Geekbench is mostly a mobile CPU bench, barely stresses cache and doesn’t scale well with nT. It tells you how good your CPU is at opening firefox or Safari.
For multi-core I would agree, but for single-thread it seems like a viable benchmark. ST is critical for things like day-to-day application use, gaming and so on…
If Geekbench were a good representation of 1T, just using gaming as an example, the 9950X3D or 9800X3D would be absolutely top dog. Geekbench measures 1T execution, which already strongly benefits Apple SoCs in the score because of the on-package RAM. Furthermore, it barely scratches the limits of the L2 and L3 caches of modern processors, so, like I mentioned before, it’s pretty useless if you want to measure anything beyond how fast Firefox starts from a clean user session.
Geekbench…that’s enough for me to stop reading. Garbage benchmarking. I wish people would quit using it already.
The original title “Apple M5 chip smashes Snapdragon X2 Elite in early single-thread benchmarks — single core scores rival Intel’s Core Ultra 9 285K and beat AMD’s 9950X3D, teasing multi-core potential of future variants” is misleading.
GB6 ST results:
- Apple M5: 4,263 (MacBook)
- Snapdragon X2 Elite: 4,080 (the result is likely misleading, as Qualcomm likes to post early results that can never be replicated in real world products, see the first X Elite results)
That being said, 4,263 versus 4,080 is a mere 4.3% uplift, within the margin of error. I don’t know how other people approach benchmarks, but I consider anything below 5% to be irrelevant. You want at least a high-single-digit uplift, or more realistically a double-digit uplift, to notice a difference.
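For what it’s worth, the exact percentage depends on which score you divide by; a quick sanity check using the two scores from the list above:

```python
# GB6 ST scores from the list above.
m5, x2 = 4263, 4080

# Uplift relative to the X2 Elite's score (the usual way to state an uplift).
uplift_over_x2 = (m5 - x2) / x2 * 100

# Same gap divided by the M5's (higher) score instead.
share_of_m5 = (m5 - x2) / m5 * 100

print(f"{uplift_over_x2:.1f}%")  # 4.5%
print(f"{share_of_m5:.1f}%")     # 4.3%
```

Either way it lands in the 4–5% range, so the “within the margin of error” point stands.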
Tom’s Hardware recently released a premium subscription. That’s fair; I think the best option is to pay directly for news sources. But if you want people to pay you directly, you must avoid these sorts of scam-like, sensationalist headlines and show a measure of respect for your paying audience.
Yeah, “smashes” by 4.3% is funny as well.
It’s also only the base M5.
If you’re an Intel guy, you likely know i3, i5, i7, and i9. The Apple equivalents are [base], Pro, Max, and Ultra. (You would think Max would be the top one, but the easy way to remember it is to look at the brand, it’s Apple so it doesn’t have to make sense. Therefore, Ultra is above Max.)
I think it’s more like Pro is two base chips, Max is two Pro chips (four base chips), and Ultra is two Max chips (eight base chips), or something like that. They scale up the core counts, though they might do more than just that. I’m not sure, but I know they’ve said in the past that Ultra is just two Maxes mashed together, or at least they said that about one generation (or maybe they said it was like two Maxes mashed together).
So if the base M5 is just 4.3% better than the top dog in Windows/Android computing, wait and see what the Pro, Max, and Ultra do.
I’ve heard it said that M5 Ultra might touch [Nvidia GeForce] 5090 performance, and that’s just nuts to me. You’ll also spend around $5k or more on it, and I don’t think you need quite that much to build a Windows PC around a 5090. So comparing a Mac chip to a gaming GPU is kinda laughable. Yeah we just got Cyberpunk (I’m a Mac user — I have an M2 Pro Mac and a base M2 MacBook Air) and that’s kind of a big deal, it plays, kinda looks like ass a bit, I’d rather play on Xbox, and I’m more excited about Blue Prince… but no one buys a Mac for gaming.
So let’s be real. These numbers aren’t for gaming. They’re for AI. And Apple Intelligence is still in beta, still coming soon… and yet, when they doubled the speed of the M5’s storage over the M4, they said it was to load LLMs faster. But they still aren’t taking Apple Intelligence seriously, and they’re hemorrhaging talent to the competition. But no one really buys a Mac for AI, either. They’re building computers with beefy GPUs for that. People buy Macs for art, for creative production… or just because they don’t want to be Windows users (and for whatever reason don’t want to delve into Linux).
Not an expert on Apple’s CPUs, but we are looking at single-thread results here, and I believe single-thread performance doesn’t really scale across Apple’s computer CPU lineup; the Pro/Max/Ultra tiers mostly add cores rather than faster ones.
AFAIK, Geekbench scores are extremely CPU-specific and are not really relevant for GPU compute performance. We would need a different set of benchmarks for that.
That would be wild if a SoC approaches 5090 performance. In this Blender benchmark here, it shows an M3 Ultra with 80 cores being similar to a 5070 Ti, though you’re going to pay several times the price for the M3 machine. At this rate, it’s quite possible that SoCs will make discrete GPUs the less practical choice for most GPU-intensive workloads in the not-too-distant future, though the opposite is true today, despite the silly power requirements of top-end NVIDIA GPUs. I think NVIDIA is especially digging themselves a hole with the VRAM nonsense, and we will all rejoice when we can run GPU workloads with 64 GiB of shared, cheap RAM. It would certainly be ideal if other competitors could develop equally powerful chips, though, since being stuck in Apple’s walled garden is a fairly undesirable tradeoff.
Nvidia isn’t standing still; they made the SoC for the DGX Spark, so they’re definitely going to be ready if the market shifts that way. I heard there might be laptops using that SoC too.
To be clear, you’re only stuck in Apple’s walled garden in any meaningful sense on a couple Apple devices. The Apple Watch, for example, doesn’t work with anything but an iPhone. It might do some basic stuff with Android OEMs, but it won’t do as much. AirPods work on Android, but not all features. iPhones can’t officially sideload (there are ways). Mac is wide open. Apple TV (the box) doesn’t care what phone you have, though it has tighter speaker integration with HomePods.
I feel like those who say “walled garden” are talking about something very specific, or they aren’t very savvy. I’ve never felt limited by Apple’s “walls”. I use their tech because it’s the best for what I need, for the most part. I also have an Android phone (and it’s 5 years older than my iPhone), and there are a couple things it does better. Like the keyboard. While I do acknowledge that the “walled garden” means Apple tech has an advantage because it talks to other Apple tech (and nobody else really has this kind of ecosystem), it doesn’t do much to stop me from going outside of it. For example, my wife has an Android phone (and only the one) and she has just as good an experience on the Macs and Apple TV. She does also have an iPad, but she’s not locked into the ecosystem, and she’s far less tech savvy than I am. She uses a Mac because that’s what I bought. She doesn’t care as long as it runs Firefox.
I have 2 Linux boxes for different gaming purposes but macOS/Apple on everything else. It’s the ecosystem. There just isn’t any native support for Android in Linux (yet). Things stopping me from switching:
- performance per watt for laptops
- native texting app (similar to iMessage that uses cellular numbers) on Linux (RCS desktop client)
- scrcpy needs to be less obtuse
- Linux theming needs to be more resilient to OS upgrades (this may be wrong, but it’s been a bit difficult to see how themes won’t break between major KDE/GNOME updates)
- phone notifications on desktop over Bluetooth LE (I’m not always on a wifi network)
- continuity between form factors
- voice assistant support (I’d LOVE Gemini for desktop)
- BetterTouchTool
- PastePal
Out of everything, the last three are the most important for me because I depend on those apps every day. Linux is just so damn close, and KDE Connect is just a few features away from being perfect.
Can’t you text on Android from a computer via Google’s site? I seem to remember reading about that. I’ve never tried it, though — and it would be through a web browser (and likely requiring a Google account/login, though the latter could be said about iMessage/Apple account).
Yeah, Google Messages works great from the web. I just really want a native client for RCS messages; granted, Google was supposed to open up RCS for others to implement but never did. I could totally live with a web app, but it’s still a pain point nonetheless.
What I’m thinking is that since this is Linux, you’ve got more options: some kind of dedicated browser, or like a web app. So it could be done. I always have iMessage open on my desktop; I keep a loose grid of apps open and never see my wallpaper. I wonder if you could do that with a web app or a web view, like no toolbars, just the page in a window.
Oh for sure, I could always use a web wrapper app for Google Messages and keep it in a workspace like I do with iMessage. There are a bunch of them out there that turn websites into desktop apps.