Microsoft launched Copilot+ PCs on June 18th, 2024. These are the first Windows machines with a mandatory neural processing unit requirement: at least 40 TOPS. You need a Qualcomm Snapdragon X Elite or an Intel or AMD equivalent with a dedicated NPU. Without the NPU, you cannot run the AI features that differentiate a Copilot+ PC from a regular Windows 11 machine.
What NPUs actually do
A neural processing unit is purpose-built for the matrix multiplication that neural networks are made of. A CPU can do the same computation but uses far more power and takes much longer. A GPU handles the parallelism well but draws significant power. An NPU is optimised specifically for inference (running an already trained model to produce an output) at low power consumption. That is what makes always-on AI features viable on a laptop battery.
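To see why inference hardware is built around multiply-accumulate units, here is a toy sketch in plain Python. The layer sizes are illustrative, not tied to any real model; the point is that even one dense layer reduces to a pile of multiply-accumulates, which is exactly the pattern an NPU accelerates.

```python
# Toy illustration: a dense (fully connected) layer is just a
# matrix-vector multiply. Counting the multiply-accumulate (MAC)
# operations shows the pattern NPUs dedicate silicon to.

def dense_layer(weights, inputs):
    """Compute a dense layer's outputs and count the MACs required."""
    macs = 0
    outputs = []
    for row in weights:                 # one weight row per output neuron
        acc = 0.0
        for w, x in zip(row, inputs):
            acc += w * x                # one multiply-accumulate
            macs += 1
        outputs.append(acc)
    return outputs, macs

# A 4-input, 3-output layer: 3 * 4 = 12 MACs for a single forward pass.
weights = [[0.1, 0.2, 0.3, 0.4],
           [0.5, 0.6, 0.7, 0.8],
           [0.9, 1.0, 1.1, 1.2]]
outputs, macs = dense_layer(weights, [1.0, 2.0, 3.0, 4.0])
print(macs)  # 12
```

A real language model chains thousands of far larger layers, so a single generated token costs billions of MACs; that is the workload a 40 TOPS NPU is sized for.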
The 40 TOPS requirement means the NPU can perform 40 trillion operations per second, typically measured at INT8 precision. For context, Phi-3 Mini, Microsoft's 3.8 billion parameter language model, can run in real time on 40 TOPS hardware. That is enough to power copilots, live captions, image generation, and smart features without a cloud call.
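A rough back-of-envelope makes the claim concrete. Assume roughly 2 operations per parameter per generated token (a common rule of thumb for transformer decoding) and an assumed 20% effective utilization of the peak figure; both numbers are illustrative, not measured.

```python
# Back-of-envelope: token throughput for a 3.8B-parameter model on
# 40 TOPS hardware. The 2-ops-per-parameter rule of thumb and the 20%
# utilization figure are assumptions for illustration only.

PARAMS = 3.8e9           # Phi-3 Mini parameter count
PEAK_OPS = 40e12         # 40 TOPS = 40 trillion ops/second (peak)
UTILIZATION = 0.20       # assumed sustained fraction of peak

ops_per_token = 2 * PARAMS                # ~7.6e9 ops per generated token
effective_ops = PEAK_OPS * UTILIZATION    # 8e12 sustained ops/second
tokens_per_second = effective_ops / ops_per_token
print(round(tokens_per_second))  # 1053
```

In practice decoding is often memory-bandwidth bound rather than compute bound, so real throughput is far lower, but the arithmetic shows the compute budget comfortably covers real-time generation for a model of this size.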
The Recall controversy
Microsoft's marquee Copilot+ feature, Recall, was pulled from launch after security researchers demonstrated that the local screenshot database it builds was stored unencrypted and was readable by any process running as the current user. Microsoft delayed the feature to rework its security architecture. Recall is now scheduled for later this year as an opt-in feature with encryption and biometric protection.
The controversy obscured the real story. Recall is one feature. The rest of the Copilot+ capabilities are shipping and working: live captions with real-time translation, Cocreator in Paint for AI image generation, and enhanced Windows Search with semantic understanding. The underlying hardware platform is sound.
The bigger bet
Microsoft is making the same bet as Apple with Apple Intelligence and Google with Tensor chips: that the next era of computing features runs locally, not in the cloud. Cloud AI is expensive at scale and introduces latency and privacy considerations that on-device models avoid. Getting NPUs into the mainstream PC fleet now means that by the time the software ecosystem matures, the hardware is already there.
For enterprise developers and IT administrators, Copilot+ PCs also introduce a new hardware tier to manage and a new set of Group Policy controls for AI features. That is worth understanding before the hardware refresh cycle arrives.
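As one example of that new policy surface, Microsoft documented a policy for turning off Recall's snapshot collection (DisableAIDataAnalysis, the "Turn off saving snapshots for Windows" setting). The fragment below is a sketch of the registry-backed form of that policy; verify the path and value against current Microsoft documentation before deploying, since the policy surface for these features is still evolving.

```shell
rem Disable Recall snapshot saving via the documented
rem DisableAIDataAnalysis policy. Run from an elevated prompt on the
rem Copilot+ device; confirm against current Microsoft docs first.
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsAI" /v DisableAIDataAnalysis /t REG_DWORD /d 1 /f
```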