Intel is getting bought by Nvidia. It is inevitable I think. It just makes sense.
I don't see where you think it makes sense. Intel's foundries can't produce the process nodes Nvidia would need, so owning them wouldn't be of any use. NVIDIA already has its own CPU/DPU/GPU architectures, and all the x86 IP is no help to them when they build on RISC-V or Arm. So would you please tell me what NVIDIA would see in acquiring Intel?
Think in terms of the military. x86 is dead in its present form. The GPU is a hack. The real future is a single product that can handle all compute tasks. Taking Intel would make the US government happy, and it gives Nvidia a real future beyond the present GPU boom.
You’re thinking about the present. I’m thinking about 10 years from now, where the real design edge will be. Nvidia has had a lot of luck and good leadership. Intel has had the exact opposite and is stuck in the past.
You think selling Intel to anyone would make them happy? And I was thinking about the future. There isn't going to be an all-in-one chip that does everything. There could be FPGAs with instanced layouts, but even the fastest of those would still cost huge money (and I don't know if Intel has really been pushing that as hard as it could). And as I already said, Nvidia isn't just in the GPU market; they have DPUs and CPUs, and none of that is anything Intel is working on, so it wouldn't be a great thing to buy into.
And no, Intel is the product of letting bean counters run the show while they milked the business. I wouldn't say NVIDIA was lucky at all; they pushed and invested to get where they are. I don't even buy their cards, but the gaming sector is barely a blip on their revenue now. They have the networking, CPUs, NPUs, and DPUs; it's almost a full-rack NVIDIA solution at this point. And if I take the military angle, the military is working on models, which is exactly the AI boom you want to look past. I'm not sure what you think the military buys that is Intel-based; outside of networking and servers (which they said they were moving to a 50/50 solution on), there really isn't a government market that's Intel. TI maybe, but not Intel.
The only fundamental issue with combining a CPU and tensor units is the L2-to-L1 cache bus width. It cannot be widened while maintaining the speed. That is not a real issue in the grand scheme of things; it is only an issue within the total design cycle. Don't get sucked into the little world of marketing nonsense surrounding specific fab nodes and whatever spin the sales fools are peddling. Real hardware takes 10 years from initial concept to first market availability. Nvidia was lucky because their plans happened to align with the AI boom. They could make a few minor packaging tweaks to tailor the designs already in the pipeline to the present market, but they had no prescient genius about how AI would explode over the last two years. Such a premise assumes they began the 40 series in 2012 already knowing about the AI boom, nearly 4 years before OpenAI was founded.
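To put the L2-to-L1 width point in rough numbers: a tensor unit consuming two 16×16 FP16 tiles per cycle needs far more operand bandwidth than a typical CPU L1 load path delivers. Every figure below is an illustrative assumption, not a real part's spec:

```python
# Back-of-envelope operand-bandwidth estimate (all numbers assumed):
bytes_per_fp16 = 2
operands_per_cycle = 2 * 16 * 16                     # two 16x16 input tiles each cycle
needed_bytes = operands_per_cycle * bytes_per_fp16   # 1024 bytes/cycle of operands
l1_load_bytes = 64                                   # assumed L1 load-path width per cycle
shortfall = needed_bytes // l1_load_bytes
print(shortfall)  # the tensor unit wants 16x the assumed load-path width
```

Under these toy assumptions, the tensor unit demands an order of magnitude more bytes per cycle than the load path supplies, which is exactly the kind of bus-width mismatch described above.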
The FPGA does not work for AI. It does not scale the way you assume, and the power required is untenable. You can find information about well-funded Intel/Altera AI researchers who traversed this path before the constraints were discovered. You need a simpler architecture with a lower transistor count. This is like the issue with static RAM versus DRAM: static is functionally superior in nearly every way, but it simply can't scale due to power and space requirements.
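The SRAM/DRAM comparison comes down to cell structure: a standard SRAM cell is 6 transistors, while a DRAM cell is 1 transistor plus 1 capacitor, which is why DRAM wins on density despite SRAM's speed. A back-of-envelope sketch:

```python
# Standard memory cell structures: 6T SRAM vs. 1T1C DRAM.
sram_transistors_per_bit = 6   # classic 6-transistor SRAM cell
dram_transistors_per_bit = 1   # 1 transistor (plus 1 capacitor) per DRAM bit
bits_per_gib = 8 * 1024**3

# Transistor budget for 1 GiB of each (DRAM's capacitors not counted):
sram_total = sram_transistors_per_bit * bits_per_gib
dram_total = dram_transistors_per_bit * bits_per_gib
print(sram_total)  # 51,539,607,552 (~51.5 billion transistors)
print(dram_total)  # 8,589,934,592 (~8.6 billion transistors)
```

A 6x transistor cost per bit is the "space requirement" in concrete terms, before even counting SRAM's static power draw.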
With tensors, all that is needed is throughput, and that is a solvable problem. Single-thread speed in CPUs is a sales gimmick and nothing more. Your brain is a far more powerful biological computer, and it operates on 3 main clocks, the fastest of which is only around 100 Hz. Parallelism can be used to create an even faster and richer user experience than the present one. This is the future. The dual-processor paradigm was tried before, in the 286-386 era, and it failed because data centers rejected it in favor of slightly better hardware that was nearly good enough. The same is true today: any hardware good enough to do both workloads will be adopted by data centers and therefore by the market. That is where the real design edge is made, and all consumer products derive from it.
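The 100 Hz brain point is really a throughput-versus-clock argument: total ops/sec is units × clock, so enough slow lanes match one fast core on throughput-bound work. A toy model, with purely illustrative numbers:

```python
# Toy throughput model (numbers are illustrative assumptions, not specs):
# ops/sec = lanes * clock, so many slow lanes can equal one fast core.
fast_core_hz = 5_000_000_000   # one hypothetical 5 GHz core, 1 op/cycle
slow_lane_hz = 100             # brain-like ~100 Hz "clock" per lane
lanes_needed = fast_core_hz // slow_lane_hz
print(lanes_needed)  # 50,000,000 lanes at 100 Hz match one 5 GHz core in raw ops/sec
```

The catch, of course, is that this only holds for workloads that actually parallelize, which is why it favors tensor-style throughput work rather than latency-bound single-thread code.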
None of Nvidia’s products will be relevant 8 years from now. They are a temporary hack. This is why they must use their enormous capital to buy a future beyond the GPU, and they will.