The way AI performance is measured now is all over the place
Intel, for example, is saying its next-gen mobile system, called Lunar Lake, will be 100 TOPS... way more than AMD's 25, right? The way Intel calculates that 100 is they take the NPU performance, add the CPU cores' performance, then add the GPU cores' performance, and say hey guys, it's 100 TOPS when you take the entire system's components into account.
And then other companies, like AMD, just give you the NPU number; the 25 excludes the CPU core and GPU core AI performance. The other problem is there can be variance in how TOPS is measured - for example, some use INT4, some use INT8, etc. - so there is no standard for what "1 TOPS of AI performance" actually means.
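To make the two accounting tricks concrete, here's a rough sketch with made-up per-block numbers (not real spec-sheet figures for any product): one function sums every block that can do AI math, the other reports only the NPU, and the precision scaling shows why INT4 quotes look roughly double INT8 quotes on the same silicon.

```python
# Hypothetical TOPS figures per block - illustrative only, not vendor specs.
def platform_tops(npu, cpu, gpu):
    """'Whole system' counting: add up every block capable of AI math."""
    return npu + cpu + gpu

def npu_only_tops(npu, cpu, gpu):
    """NPU-only counting: report just the dedicated accelerator."""
    return npu

chip = {"npu": 45, "cpu": 5, "gpu": 50}  # made-up INT8 TOPS per block

print(platform_tops(**chip))   # 100 - the headline "system" number
print(npu_only_tops(**chip))   # 45  - the NPU-only number

# Precision matters too: INT4 operands are half the width of INT8, so the
# same datapath can typically be quoted at roughly double the TOPS in INT4.
int8_tops = 45
int4_tops = int8_tops * 2
print(int4_tops)               # 90 - same NPU, bigger-looking number
```

Same hypothetical chip, three honest-sounding numbers (100, 45, 90) depending on what you count and at what precision, which is exactly why the headline figures aren't comparable across vendors.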
This is typical Intel though. I don't know what my TOPS figure would be if you include GPU and CPU neural emulation along with the NPU - I have 400 TOPS to drive Copilot, at 250 watts... and that's the point. An NPU driving Copilot is supposed to do it at a couple of watts. An NPU is a dedicated, limited-instruction architecture specifically designed to accelerate a neural net at very low power; that's all it does, and it's not capable of doing anything else.
Anyone can lump together any bit of hardware capable of AI - a five-year-old CPU is capable of it - and call that your TOPS throughput, but it's bastardising the whole point of AI-capable chips driving AI-capable operating systems. When people do that for marketing reasons, boasting bigger numbers that are complete BS and only sound good to the uninformed, is when it all starts to get silly. And trust Intel to take something that makes perfect sense and make it silly just to look bigger.
AMD measures the NPU only, as do Nvidia, Apple, and Qualcomm... it's the only thing actually used for OS-level AI.