Michel Khoury

Heterogeneous Integration: the backbone of next-gen AI chip design


Artificial intelligence is reshaping the world, and the chips powering it are getting a serious upgrade thanks to heterogeneous integration. This cutting-edge approach combines different semiconductor materials and chiplets, such as CPUs, GPUs, and memory, into a single high-performance package tailored to AI’s demands. Unlike traditional chip designs, heterogeneous integration is breaking barriers, boosting efficiency, and paving the way for smarter, faster AI systems. With companies like Intel, NVIDIA, AMD, TSMC, and others leading the charge, let’s unpack why this tech is the backbone of next-gen AI chips, explore its market momentum, and revisit some pivotal moments that got us here.

For years, chipmakers relied on monolithic silicon designs, cramming everything onto one die. But as Moore’s Law slows, that approach struggles to keep up with AI’s need for speed, power, and specialization. Heterogeneous integration flips the script by mixing and matching chiplets—think of it like assembling a dream team of components. Each chiplet, whether it’s Intel’s compute-focused tile, NVIDIA’s GPU powerhouse, or Micron’s high-bandwidth memory, is optimized for its role and connected via advanced interconnects like TSMC’s 3D stacking or Intel’s EMIB (Embedded Multi-die Interconnect Bridge). This modularity boosts performance by 30-50% over traditional designs while cutting power use and costs. It also lets companies mix process nodes—say, a leading-edge 5nm node for compute logic and a more mature node for I/O—maximizing efficiency. As NVIDIA’s CEO Jensen Huang put it at GTC 2024, “Heterogeneous computing is the only way to scale AI beyond today’s limits.”
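A big part of the cost argument is yield: several small dies are cheaper to manufacture than one huge one. The sketch below illustrates this with a simple Poisson defect-density model; the die area and defect density are illustrative assumptions, not figures from any of the vendors above.

```python
import math

def poisson_yield(area_cm2: float, defect_density_per_cm2: float) -> float:
    """Fraction of good dies under a simple Poisson defect model."""
    return math.exp(-area_cm2 * defect_density_per_cm2)

D0 = 0.2  # assumed defects per cm^2 (illustrative)

# One large monolithic die vs. four chiplets covering the same total area.
monolithic_area = 8.0              # cm^2, near the reticle limit (illustrative)
chiplet_area = monolithic_area / 4

y_mono = poisson_yield(monolithic_area, D0)
y_chiplet = poisson_yield(chiplet_area, D0)

print(f"Monolithic die yield: {y_mono:.1%}")    # ~20%
print(f"Single-chiplet yield: {y_chiplet:.1%}")  # ~67%
# Because chiplets can be tested as known-good dies before assembly,
# the effective yield of the assembled package stays far above the
# monolithic case, which is where much of the cost saving comes from.
```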

AI workloads, from training massive language models to running real-time inference, demand chips that juggle compute, memory, and I/O seamlessly. Heterogeneous integration delivers. Intel’s Ponte Vecchio, built from 47 chiplets, blends compute tiles and HBM2e memory for exascale AI performance in data centers. NVIDIA’s Grace CPU Superchip uses the NVLink-C2C die-to-die interconnect to pair two 72-core Arm processors, each backed by LPDDR5X memory, slashing latency for AI training. AMD’s Instinct MI300X accelerator stacks CDNA 3 GPU chiplets over I/O dies alongside 192GB of HBM3 (its MI300A sibling swaps in Zen 4 CPU cores), offering 2.4x the AI throughput of its predecessors. TSMC’s CoWoS (Chip-on-Wafer-on-Substrate) packaging stitches these complex designs together, while GlobalFoundries and Samsung push 2.5D and 3D stacking to handle the heat and bandwidth. These chips power everything from generative AI to autonomous systems, with die-to-die interconnect bandwidths hitting 1 TB/s—far beyond what board-level links between separate packages can deliver.
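Those bandwidth figures follow directly from width-times-data-rate arithmetic. Here is a quick back-of-the-envelope check using nominal HBM3 interface parameters; the stack count is an assumption chosen to resemble a large AI accelerator, not the spec of any particular product.

```python
# Back-of-the-envelope bandwidth for an HBM3-based package (nominal values).
bus_width_bits = 1024   # bits per HBM3 stack interface
data_rate_gbps = 6.4    # Gb/s per pin (nominal HBM3)
stacks = 8              # stacks in a large AI package (assumed)

per_stack_GBps = bus_width_bits * data_rate_gbps / 8  # ~819 GB/s per stack
total_GBps = per_stack_GBps * stacks                   # ~6.6 TB/s aggregate

print(f"Per-stack bandwidth: {per_stack_GBps:.0f} GB/s")
print(f"Aggregate memory bandwidth: {total_GBps / 1000:.1f} TB/s")
```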

Heterogeneous integration is big business, and it’s growing fast. Yole Group pegs the advanced packaging market, which includes heterogeneous solutions, at $44 billion in 2023, with a projected climb to $68 billion by 2028, driven by AI and data center demand. The chiplet market alone is expected to hit $20 billion by 2027, with a 30% CAGR. Intel’s $1 billion investment in its Advanced Packaging Hub, TSMC’s $30 billion in 3D IC capacity, and AMD’s $4 billion pivot to chiplet-based EPYC CPUs signal the industry’s all-in approach. NVIDIA’s partnerships with TSMC for Blackwell GPUs and GlobalFoundries’ role in multi-chip modules further fuel the boom. Yole’s Stefan Chitoraga notes, “AI’s complexity is pushing heterogeneous integration from niche to necessity, with major players doubling down.”
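For context, these growth rates are simple compound-growth arithmetic; the short check below recomputes them from the figures quoted in this paragraph.

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two points."""
    return (end_value / start_value) ** (1 / years) - 1

# Advanced packaging market: $44B (2023) -> $68B (2028)
print(f"Advanced packaging CAGR: {cagr(44, 68, 5):.1%}")  # ~9.1% per year

# Chiplet market growing at ~30% CAGR toward $20B in 2027
implied_2023 = 20 / (1.30 ** 4)
print(f"Implied 2023 chiplet market: ${implied_2023:.1f}B")  # ~$7B
```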

The roots of heterogeneous integration trace back to the 1980s, when multi-chip modules (MCMs) first combined discrete chips. But the real spark came in 2011, when Xilinx (now part of AMD) unveiled its Virtex-7 FPGA, using 2.5D integration to link four dies on a silicon interposer—a first for commercial chips. This breakthrough slashed costs and boosted performance, catching the industry’s eye. Another game-changer was Intel’s introduction of EMIB in the mid-2010s, which enabled compact, high-speed die-to-die connections without bulky interposers. By 2018, TSMC’s CoWoS platform powered NVIDIA’s Volta GPUs, proving that 2.5D packaging could handle AI’s scale. These moments laid the groundwork for today’s chiplet-driven AI chips, turning a bold idea into reality.

Heterogeneous integration isn’t perfect. Thermal management is a headache—stacking dies can create hotspots, requiring advanced cooling like microfluidic channels or diamond substrates. Interconnect reliability, especially at sub-10μm pitches, demands precision, and design tools lag behind, slowing adoption. Yet, solutions are emerging. Intel’s Foveros 3D stacking now supports hybrid bonding, cutting power by 20%. NVIDIA and TSMC are working with EDA vendors on chiplet-aware design flows, while industry standards like UCIe (Universal Chiplet Interconnect Express), backed by AMD, Intel, TSMC, and others, aim to unify the ecosystem. Looking forward, heterogeneous integration will drive AI chips toward zettascale computing by 2030, blending quantum accelerators, neuromorphic cores, and optical I/O. With GlobalFoundries scaling silicon bridges and Samsung pushing fan-out packaging, the future is stacked—literally.
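To see why stacked hotspots worry designers, a first-order estimate treats every layer between the junction and the ambient air as a thermal resistance in series. All power and resistance values below are assumptions for illustration, not measurements of any product mentioned above.

```python
# First-order thermal model: T_junction = T_ambient + P * sum of thermal resistances.
# All values are illustrative assumptions (power in W, resistances in K/W).
power_w = 150.0      # heat dissipated by a buried compute die
r_die_stack = 0.05   # conduction through the die stacked above it
r_tim = 0.08         # thermal interface material
r_heatsink = 0.15    # heatsink / cold plate to ambient
t_ambient_c = 35.0

r_total = r_die_stack + r_tim + r_heatsink
t_junction_c = t_ambient_c + power_w * r_total
print(f"Estimated junction temperature: {t_junction_c:.0f} C")  # ~77 C

# Stacking another hot die adds its power to the same conduction path,
# which is why 3D stacks push toward microfluidic and other advanced cooling.
```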


References:

  1. McKinsey Electronics. (2025). From Silicon Wafers to AI-Optimized Chips: T

  2. Yole Group, “Advanced Packaging Market Report,” 2024.

  3. Intel, “Advanced Packaging Hub Announcement,” 2023.

  4. NVIDIA, “Grace CPU Superchip Architecture,” 2024.

  5. AMD, “Instinct MI300X Technical Brief,” 2024.

  6. TSMC, “3D IC and CoWoS Update,” 2024.

  7. Intel, “Ponte Vecchio: Architecture and Performance,” 2023.

  8. GlobalFoundries, “Advanced Packaging Solutions for AI,” 2024.

  9. Samsung Electronics, “2.5D/3D Packaging Roadmap,” 2023.

  10. Xilinx/AMD, “Virtex-7 FPGA: A 2.5D Pioneer,” 2011.

  11. SemiEngineering, “Heterogeneous Integration: Challenges and Opportunities,” 2024.

Michel Khoury

Gallium Nitride: powering AI data centers with precision


As artificial intelligence drives data centers to new heights, the demand for efficient power delivery is skyrocketing. Gallium nitride (GaN) power devices, particularly GaN High Electron Mobility Transistors (HEMTs), are rising to the challenge, offering clear advantages over traditional silicon architectures. Companies like Infineon and Navitas are at the forefront, leveraging GaN’s strengths to fuel AI’s growth. Here’s a look at why GaN HEMTs are revolutionizing data centers, their booming market per Yole Group, and a few historical milestones.

Silicon MOSFETs have long powered electronics, but their performance is plateauing under AI’s energy demands. GaN HEMTs, built on an AlGaN/GaN heterostructure, form a two-dimensional electron gas (2DEG) at the interface with electron mobility around 2000 cm²/V·s—several times the effective channel mobility of a silicon MOSFET. This high mobility enables GaN HEMTs to switch at frequencies exceeding 1 MHz with minimal losses, compared to silicon’s typical 100 kHz limit in power converters. Lower on-resistance (often <50 mΩ vs. silicon’s >200 mΩ at comparable voltage ratings) and reduced gate charge further cut energy waste, shrinking power supplies and cooling needs. As Navitas Semiconductor’s CEO Gene Sheridan recently noted, “Most data centers can’t handle next-gen AI GPUs like NVIDIA’s Blackwell without GaN’s efficiency.”
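The efficiency case boils down to conduction plus switching losses. The sketch below compares the two device classes using the on-resistance and frequency figures quoted above plus a few illustrative assumptions (load current, bus voltage, switching times); it is a rough model, not vendor data.

```python
# Rough per-device loss comparison using the figures above plus assumptions.
def losses(i_rms, r_on, v_bus, i_sw, t_sw, f_sw):
    """Conduction loss I^2*R plus a simple 0.5*V*I*t_sw*f switching-loss estimate."""
    p_cond = i_rms ** 2 * r_on
    p_sw = 0.5 * v_bus * i_sw * t_sw * f_sw
    return p_cond + p_sw

I_RMS, V_BUS, I_SW = 10.0, 48.0, 10.0  # amps / volts (illustrative)

# Silicon MOSFET: higher R_on, slower edges, held to ~100 kHz
p_si = losses(I_RMS, 0.200, V_BUS, I_SW, t_sw=100e-9, f_sw=100e3)

# GaN HEMT: lower R_on, ~10x faster edges, run at 1 MHz
p_gan = losses(I_RMS, 0.050, V_BUS, I_SW, t_sw=10e-9, f_sw=1e6)

print(f"Si  loss: {p_si:.1f} W")   # ~22.4 W
print(f"GaN loss: {p_gan:.1f} W")  # ~7.4 W
# Even at 10x the switching frequency (which shrinks the magnetics),
# the GaN device dissipates a fraction of the silicon part's losses.
```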

AI data centers rely on high-performance GPUs and servers that demand robust power systems. Infineon’s CoolGaN™ HEMTs excel in 48V DC-DC converters and server power factor correction, achieving efficiencies above 98%. Navitas’ GaNFast™ ICs, integrating GaN HEMTs, enable compact 8.5 kW power supplies for AI servers, reducing size while boosting reliability. With data centers consuming 3% of global electricity, GaN’s low-loss switching helps slash operational costs and environmental impact.
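A rough calculation shows what a few points of conversion efficiency are worth at rack scale; the rack power, baseline efficiency, and electricity price below are illustrative assumptions, not figures from Infineon or Navitas.

```python
# Annual energy saved by moving a rack's power conversion from 94% to 98% efficiency.
it_load_kw = 120.0      # IT load of one AI rack (assumed)
hours_per_year = 8760
price_per_kwh = 0.10    # USD per kWh (assumed)

def input_energy_kwh(load_kw, efficiency):
    """Energy drawn from the grid to deliver the load through a converter for a year."""
    return load_kw / efficiency * hours_per_year

saved_kwh = input_energy_kwh(it_load_kw, 0.94) - input_energy_kwh(it_load_kw, 0.98)
print(f"Energy saved per rack-year: {saved_kwh:,.0f} kWh")
print(f"Cost saved per rack-year:  ${saved_kwh * price_per_kwh:,.0f}")
# Roughly 45,600 kWh and $4,600 per rack per year, before counting the
# reduced cooling load that comes with lower conversion losses.
```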

The GaN market is soaring, fueled by AI’s expansion. Yole Group reports that power GaN revenues reached $1 billion in 2023, up 41% year over year, and are projected to hit $2 billion by 2027, with a 46% annual growth rate through 2029. Data centers drive much of this demand. Infineon’s $830 million acquisition of GaN Systems and Navitas’ collaborations, like with Great Wall for AI power architectures, highlight GaN’s momentum. Yole’s Taha Ayari forecasts GaN dominating high-power applications, with further market gains ahead.

GaN’s journey began in the 1990s with blue LEDs, work that earned the 2014 Nobel Prize in Physics. Its power applications took off later. In 2010, Efficient Power Conversion (EPC) introduced its first enhancement-mode GaN transistors, which soon proved their potential in compact LiDAR systems. A defining moment came with Google and IEEE’s “Little Box Challenge” (2014–2016), where a GaN-based 2 kW inverter the size of a laptop delivered several times the power density of conventional silicon designs, setting the stage for GaN’s data center dominance.

GaN HEMTs’ high mobility, fast switching, and efficiency make them perfect for AI’s power-hungry workloads. While silicon struggles with higher losses and bulkier designs, GaN thrives. Infineon’s 300 mm GaN wafer technology aims to lower costs, and Navitas’ integrated HEMT designs simplify adoption. As AI pushes boundaries, GaN will drive data centers toward greater performance and sustainability.

References

  1. Middleton, C. (2023). Gallium nitride and silicon carbide to be essential for enabling scale and potential of AI. Semiconductor Today.

  2. Navitas Semiconductor. (2021). Gallium Nitride semiconductors: The Next Generation of Power. Navitas.

  3. Infineon Technologies. (2021). GaN transistors (GaN HEMTs) - CoolGaN™ Transistors. Infineon.

  4. Grand View Research. (2023). Gallium Nitride Semiconductor Devices Market Report, 2030. Grand View Research.

  5. CSIS. (2024). Gallium Nitride: A Strategic Opportunity for the Semiconductor Industry. CSIS.

  6. MDPI. (2024). Gallium Nitride Power Devices in Power Electronics Applications: State of Art and Perspectives. MDPI.

  7. EPC. (2023). Where is GaN Going? Gallium Nitride Market, Applications & Future. EPC.

  8. Altium Resources. (2024). Growth Prospects for GaN and SiC Semiconductors. Octopart.

  9. Navitas Semiconductor. (2024). Navitas Showcases Breakthroughs in GaN and SiC Technologies for AI Data Centers. Navitas.

  10. Power Electronics News. (2023). GaN and SiC: The Future of Power Electronics. Power Electronics News.

  11. Semiengineering. (2019). Wide Band Gap—The Revolution In Power Semiconductors. Semiengineering.

  12. Cadence. (2024). Gallium Nitride vs. Silicon. Advanced PCB Design Blog

  13. Yole Group. (2024). Power GaN: Harnessing New Horizons. Yole Group.

  14. Navitas Semiconductor. (2025). World’s First 8.5 kW GaN PSU. Navitas.

  15. Infineon Technologies. (2024). 300 mm GaN Wafer Technology. Infineon.

  16. Semiconductor Today. (2019). GaN Con - Power GaN. Semiconductor Today.
