Farewell to the 'Performance Black Box': Windows 11 Task Manager Officially Supports NPU Monitoring

marsbit · Published 2026-03-31 · Last updated 2026-03-31

Abstract

Microsoft has introduced a significant update to the Task Manager in a recent Windows 11 development build, adding real-time monitoring for the Neural Processing Unit (NPU). This enhancement closes a key gap in hardware performance visibility as NPUs become central to AI PCs. In the latest preview build (26300.8142), users can track NPU utilization from the Processes, Users, and Details tabs, along with each application's dedicated and shared memory usage. The Performance page also displays activity for neural engines integrated into GPUs, giving comprehensive oversight of all AI-related operations. This upgrade brings the NPU into the mainstream hardware performance evaluation system, providing developers and power users with the data needed to optimize the efficiency and power consumption of AI applications. The feature is currently available to Windows Insiders, with a broader rollout expected later, signaling deeper integration of AI hardware into the Windows ecosystem.

Microsoft recently introduced a major upgrade to the Task Manager in a Windows 11 development build update. This update finally completes the last piece of the hardware monitoring puzzle by officially adding real-time monitoring capabilities for the Neural Processing Unit (NPU).

As the AI PC concept goes mainstream, the NPU has become the core hardware for handling artificial intelligence tasks. Previously, users had no way to visually check the NPU's operating status; this upgrade makes the load on this dedicated chip transparent and visible.

In the latest preview build, 26300.8142, the Task Manager's Processes, Users, and Details pages all gain an NPU column. Users can view real-time NPU utilization and see exactly how much dedicated and shared memory each application consumes.
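For readers who want to probe these metrics programmatically rather than through the Task Manager UI, the sketch below shows one way to look for NPU-related entries in Windows performance counters. This is an assumption-laden illustration, not a documented Microsoft interface: the article does not specify any counter names, so whether (and under what object name) NPU utilization is exposed through the counter system is a guess; only the built-in `typeperf -q` CLI, which lists available counter paths, is a known quantity.

```python
# Sketch: discover NPU-related performance counters on Windows.
# Assumption: the NPU driver exposes counter paths whose names contain
# "NPU" -- the actual counter object names are not documented in the article.
import subprocess


def filter_counters(lines, keyword="NPU"):
    """Return counter paths containing the keyword, case-insensitively."""
    kw = keyword.lower()
    return [line.strip() for line in lines if kw in line.lower()]


def list_npu_counters():
    """Query all counter paths via the built-in `typeperf -q` CLI and
    keep only NPU-related ones. Windows-only."""
    out = subprocess.run(["typeperf", "-q"], capture_output=True,
                         text=True, check=False).stdout
    return filter_counters(out.splitlines())
```

On a Windows Insider build with the appropriate drivers, calling `list_npu_counters()` would reveal whether any NPU counter objects are registered; an empty result would simply mean the data is surfaced through another mechanism.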

Furthermore, Microsoft has refined the monitoring dimensions: the Performance page can now fully display the activity of the neural engine built into the graphics card. This means all AI-related activity in the system is comprehensively captured by the Task Manager.

This evolution of the Task Manager marks the NPU's official entry into the mainstream hardware performance evaluation system. For developers and tech-savvy users, it is more than extra data: it lays the foundation for precisely tuning the energy consumption and efficiency of AI applications.

Currently, this feature is primarily available to Windows Insiders in preview builds, and a wider rollout will take some time. It is foreseeable that as NPU monitoring becomes widespread, future Windows releases will be rebuilt more deeply around the AI hardware ecosystem.

Related Questions

Q: What major update did Microsoft recently introduce to the Task Manager in Windows 11?

A: Microsoft introduced real-time monitoring for the Neural Processing Unit (NPU) in the Task Manager.

Q: Why is the NPU monitoring feature significant for AI PC users?

A: It allows users to visually check the workload and status of the NPU, which is crucial for handling AI tasks, making the specialized chip's operations transparent.

Q: Which Windows 11 preview build includes the NPU monitoring capability?

A: The feature is available in preview build 26300.8142.

Q: What specific details can users monitor regarding NPU usage in the Task Manager?

A: Users can view real-time NPU utilization and see how programs consume dedicated and shared memory.

Q: Who currently has access to the NPU monitoring feature in Windows 11?

A: The feature is currently available to Windows Insiders in preview builds, with a wider rollout expected later.

Related Articles

From Theft to Re-entry: How Was $292 Million "Laundered"?

A sophisticated crypto laundering operation was executed following the $292 million hack of Kelp DAO on April 18. The attack, attributed to the North Korean Lazarus group, began with anonymous infrastructure preparation using Tornado Cash to fund wallets untraceably. The hacker exploited a vulnerability in Kelp’s cross-chain bridge, stealing 116,500 rsETH. To avoid crashing the market, the attacker used Aave and Compound as laundering tools—depositing the stolen rsETH as collateral to borrow $190 million in clean, liquid ETH. This move triggered a bank run on Aave, causing an $8 billion drop in TVL. After consolidating funds, the attacker fragmented them across hundreds of wallets to evade detection. A major breakpoint was THORChain, where over $460 million in volume—30 times its usual activity—was processed in 24 hours, converting ETH into Bitcoin. This shift to Bitcoin’s UTXO model exponentially increased tracing complexity by shattering funds into countless untraceable fragments. The final destination was Tron-based USDT, the primary channel for illicit crypto flows. From there, funds were cashed out via OTC brokers in China and Southeast Asia, using unlicensed underground banks and UnionPay networks outside Western sanctions scope. Ultimately, the laundered money supports North Korea’s weapons programs, which rely heavily on crypto hacking for foreign currency. The incident underscores structural challenges in DeFi: its openness, composability, and lack of central control make such laundering not just possible, but inherently difficult to prevent.

marsbit · 1 hour ago


Google and Amazon Simultaneously Invest Heavily in a Competitor: The Most Absurd Business Logic of the AI Era Is Becoming Reality

In a span of four days, Amazon announced an additional $25 billion investment, and Google pledged up to $40 billion—both direct competitors pouring over $65 billion into the same AI startup, Anthropic. Rather than a typical venture capital move, this signals the latest escalation in the cloud wars. The core of the deal is not equity but compute pre-orders: Anthropic must spend the majority of these funds on AWS and Google Cloud services and chips, effectively locking in massive future compute consumption. This reflects a shift in cloud market dynamics—enterprises now choose cloud providers based on which hosts the best AI models, not just price or stability. With OpenAI deeply tied to Microsoft, Anthropic’s Claude has become the only viable strategic asset for Google and Amazon to remain competitive. Anthropic’s annualized revenue has surged to $30 billion, and it is expanding into verticals like biotech, positioning itself as a cross-industry AI infrastructure layer. However, this funding comes with constraints: Anthropic’s independence is challenged as it balances two rival investors, its safety-first narrative faces pressure from regulatory scrutiny, and its path to IPO introduces new financial pressures. Globally, this accelerates a "tri-polar" closed-loop structure in AI infrastructure, with Microsoft-OpenAI, Google-Anthropic, and Amazon-Anthropic forming exclusive model-cloud alliances. In contrast, China’s landscape differs—investments like Alibaba and Tencent backing open-source model firm DeepSeek reflect a more decoupled approach, though closed-source models from major cloud providers still dominate. The $65 billion bet is ultimately about securing a seat at the table in an AI-defined future—where missing the model layer means losing the cloud war.

marsbit · 7 hours ago

