
Intel unveiled its highly anticipated Intel Core Ultra processors, code-named Meteor Lake, at Intel Innovation 2023 in San Jose, California. The release introduces Intel's first integrated neural processing unit (NPU), designed to accelerate AI capabilities on PCs while preserving data privacy. Intel has confirmed that the Core Ultra processors will hit the market on December 14.
The forthcoming Core Ultra processors are poised to usher in a new era of AI-powered personal computing. These chips are engineered to provide low-latency AI computing capabilities, regardless of connectivity, offering enhanced data privacy features that are increasingly vital in today's digital landscape.
One of the most significant innovations is the integration of an NPU directly into the client silicon. The NPU is tailored to enable entirely new and transformative PC experiences. It is particularly well-suited to workloads previously run on the CPU, whether for quality or efficiency reasons, and to tasks that would traditionally be offloaded to the cloud for lack of efficient client-side compute.
Intel's Core Ultra represents a pivotal moment in the evolution of client processors, with its unique client chiplet design enabled by Foveros packaging technology. Beyond the introduction of the NPU, these processors incorporate substantial advancements in power-efficient performance, courtesy of Intel 4 process technology. Additionally, the Core Ultra processors boast discrete-level graphics performance thanks to the inclusion of onboard Intel Arc graphics.
What is an NPU?
An NPU, or Neural Processing Unit, is a specialised hardware component designed to accelerate and execute artificial intelligence (AI) and machine learning (ML) tasks efficiently. It is distinct from general-purpose processors like central processing units (CPUs) and graphics processing units (GPUs) because it is purpose-built for handling neural network computations.
NPUs are optimised for the specific types of mathematical calculations involved in neural networks, such as matrix multiplications and convolutions. These operations are fundamental to deep learning algorithms, which are commonly used in tasks like image and speech recognition, natural language processing, and recommendation systems.
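To make that concrete, the snippet below sketches, in plain NumPy, the two operations mentioned above: a matrix multiplication and a small 2-D convolution. The shapes and values are illustrative only; an NPU's advantage lies in running large volumes of exactly this kind of arithmetic in parallel at low power.

```python
import numpy as np

# Illustrative sizes only; real neural-network layers are far larger.
rng = np.random.default_rng(0)

# 1. Matrix multiplication: the core of fully connected layers.
#    A (batch x features) times W (features x neurons) -> activations.
A = rng.standard_normal((4, 128))      # a tiny batch of 4 inputs
W = rng.standard_normal((128, 64))     # weights of one dense layer
activations = A @ W                    # shape (4, 64)

# 2. 2-D convolution: the core of image-recognition layers.
#    Slide a 3x3 kernel over an 8x8 single-channel "image".
image = rng.standard_normal((8, 8))
kernel = rng.standard_normal((3, 3))
out = np.zeros((6, 6))                 # "valid" convolution output
for i in range(6):
    for j in range(6):
        out[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)

print(activations.shape, out.shape)    # (4, 64) (6, 6)
```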
The key characteristics and advantages of NPUs include:
1. Efficiency: NPUs are highly efficient at executing AI workloads, often consuming less power compared to CPUs or GPUs for similar tasks. This efficiency is particularly valuable in battery-powered devices like smartphones and laptops.
2. Low Latency: NPUs are designed for low-latency processing, making them suitable for real-time and time-sensitive AI applications, such as autonomous driving and robotics.
3. Specialisation: Unlike general-purpose processors, NPUs are specialised for AI tasks. This specialisation allows them to deliver better performance and energy efficiency for neural network computations.
4. Parallelism: NPUs are equipped with multiple processing cores or units that can handle parallel computations, a critical requirement for neural network training and inference.
5. Inference Acceleration: NPUs are commonly used for AI inference, where pre-trained models make predictions based on input data. They accelerate these inference tasks, enabling faster and more responsive AI applications.
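As an example of what inference acceleration looks like in practice, here is a minimal, hypothetical sketch using Intel's OpenVINO toolkit, which exposes the Meteor Lake NPU as an inference device. The model path, the input shape and the availability of an "NPU" device plugin on a given machine are assumptions; on systems without an NPU, the same code falls back to the CPU.

```python
import numpy as np
import openvino as ov

# Assumed setup: a recent OpenVINO release with the NPU plugin installed,
# and a pre-converted model at the hypothetical path below.
MODEL_PATH = "image_classifier.xml"    # placeholder model file

core = ov.Core()
model = core.read_model(MODEL_PATH)

# Pick the NPU if the driver exposes it, otherwise fall back to the CPU.
device = "NPU" if "NPU" in core.available_devices else "CPU"
compiled = core.compile_model(model, device_name=device)

# Dummy input matching an assumed 1x3x224x224 image tensor.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled(dummy)               # run one inference request
print(device, list(result.values())[0].shape)
```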
NPUs are often integrated into various types of devices, ranging from smartphones and smart home appliances to data centre servers and edge computing devices. They play a crucial role in democratising AI by making it more accessible and efficient across a wide range of applications.