*Gross capital expenditures refers to GAAP additions to property, plant, and equipment. Net capital spending, a non-GAAP financial measure, is defined as additions to property, plant, and equipment, net of proceeds from capital-related government incentives and partner contributions. See below for more information on and reconciliations of Intel's non-GAAP financial measures.

**Not meaningful
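As a rough illustration of the net capital spending definition above, the relationship can be sketched as gross additions less incentive and partner proceeds. All figures below are hypothetical placeholders, not Intel's reported amounts:

```python
# Illustrative sketch of the net capital spending definition above.
# All figures are hypothetical placeholders, not Intel's reported amounts.
gross_capital_expenditures = 11.0    # GAAP additions to property, plant, and equipment ($B, assumed)
government_incentive_proceeds = 1.5  # proceeds from capital-related government incentives ($B, assumed)
partner_contributions = 1.0          # proceeds from partner contributions ($B, assumed)

# Net capital spending (non-GAAP) nets incentive and partner proceeds against gross additions.
net_capital_spending = gross_capital_expenditures - government_incentive_proceeds - partner_contributions
print(f"Net capital spending: ${net_capital_spending:.1f}B")  # -> $8.5B under these assumptions
```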
Business Unit Summary
Intel previously announced the implementation of an internal foundry operating model, which took effect in the first quarter of 2024 and created a foundry relationship between its Intel Products business (collectively CCG, DCAI and NEX) and its Intel Foundry business (including Foundry Technology Development, Foundry Manufacturing and Supply Chain, and Foundry Services (formerly IFS)). The foundry operating model is a key component of the company's strategy and is designed to reshape operational dynamics and drive greater transparency, accountability, and focus on costs and efficiency. The company also previously announced its intent to operate Altera® as a standalone business beginning in the first quarter of 2024. Altera was previously included in DCAI's segment results. As a result of these changes, the company modified its segment reporting in the first quarter of 2024 to align to this new operating model. All prior-period segment data has been retrospectively adjusted to reflect the way the company internally receives information and manages and monitors its operating segment performance starting in fiscal year 2024. There are no changes to Intel’s consolidated financial statements for any prior periods.
| Business Unit Revenue and Trends | Q2 2024 | vs. Q2 2023 |
| --- | --- | --- |
| Intel Products: | | |
| Client Computing Group (CCG) | $7.4 billion | up 9% |
| Data Center and AI (DCAI) | $3.0 billion | down 3% |
| Network and Edge (NEX) | $1.3 billion | down 1% |
| Total Intel Products revenue | $11.8 billion | up 4% |
| Intel Foundry | $4.3 billion | up 4% |
| All other: | | |
| Altera | $361 million | down 57% |
| Mobileye | $440 million | down 3% |
| Other | $167 million | up 43% |
| Total all other revenue | $968 million | down 32% |
| Intersegment eliminations | $(4.3) billion | |
| Total net revenue | $12.8 billion | down 1% |
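Total net revenue in the table above is the sum of Intel Products, Intel Foundry, and all other revenue, less intersegment eliminations. A minimal roll-up check using the rounded figures reported above (interpretation of the eliminations line follows from the internal foundry model described in the summary):

```python
# Roll-up of the segment figures in the table above (rounded, in $ billions).
intel_products = 11.8             # Total Intel Products revenue
intel_foundry = 4.3               # Intel Foundry revenue
all_other = 0.968                 # Total all other revenue (Altera, Mobileye, Other)
intersegment_eliminations = -4.3  # eliminates revenue between segments, largely Intel Foundry sales to Intel Products

total_net_revenue = intel_products + intel_foundry + all_other + intersegment_eliminations
print(f"Total net revenue: ~${total_net_revenue:.1f}B")  # ~$12.8 billion after rounding
```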
Intel Products Highlights
- CCG: Intel continues to define and drive the AI PC category, shipping more than 15 million AI PCs since December 2023, far more than all of Intel's competitors combined, and on track to ship more than 40 million AI PCs by year-end. Lunar Lake, the company’s next-generation AI CPU, achieved production release in July 2024, ahead of schedule, with shipments starting in the third quarter. Lunar Lake will power over 80 new Copilot+ PCs across more than 20 OEMs.
- DCAI: More than 130 million Intel® Xeon® processors power data centers around the world today, and at Computex Intel introduced its next-generation Intel® Xeon® 6 processor with Efficient-cores (E-cores), code-named Sierra Forest, marking the company’s first Intel 3 server product architected for high-density, scale-out workloads. Intel expects Intel® Xeon® 6 processors with Performance-cores (P-cores), code-named Granite Rapids, to begin shipping in the third quarter of 2024. The Intel® Gaudi® 3 AI accelerator is also on track to launch in the third quarter and is expected to deliver roughly two times the performance per dollar on both inference and training versus the leading competitor.
- NEX: Intel announced an array of AI-optimized scale-out Ethernet solutions, including the Intel AI network interface card and foundry chiplets that will launch next year. New infrastructure processing unit (IPU) adaptors for the enterprise are now broadly available and supported by Dell Technologies, Red Hat and others. IPUs will play an increasingly important role in Intel’s accelerator portfolio, which the company expects will help drive AI data center growth and profitability in 2025 and beyond. Additionally, Intel and others announced the creation of the Ultra Accelerator Link, a new industry standard dedicated to advancing high-speed, low-latency communication for scale-up AI systems in data centers.