Abstract
The AI (Artificial Intelligence) wave is spreading from semiconductors to passive components, led by MLCCs.
AI servers use 10–15 times as many MLCCs as general-purpose servers. They are driving not only quantitative growth but also expanding demand for ultra-high-capacitance and high-voltage MLCCs that require advanced technologies.
This technological change and its future direction are analyzed through three frames: Computing, Power, and Network.
1. Computing board: semiconductor integration and the growth of high-capacitance MLCCs
GPUs and CPUs, which serve as the brain of AI, consume thousands of amperes of current at a low voltage of around 0.8V. The total MLCC capacitance is increased to ensure a stable power supply under rapid GPU load dynamics.
- Technology Shift:
MLCCs placed near the GPUs and CPUs on high-performance computing boards serve as decoupling capacitors that mitigate rapid current changes. As chip performance advances, the available mounting area shrinks while the required capacitance grows. "Ultra-small, high-capacitance" technology is therefore the epicenter: achieving over 47㎌ in the 0402-inch size or 100㎌ in the 0603-inch size.
- Direction of Growth:
The capacitance of MLCCs surface-mounted (SMT) near the GPU balls will increase at an accelerating pace. Embedded MLCCs and land-side MLCCs, mounted inside the semiconductor package or directly underneath it, will advance to drastically reduce loop inductance while increasing capacitance density.
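As a rough illustration of why total decoupling capacitance must keep growing, the required capacitance can be sketched from the hold-up relation C ≥ ΔI·Δt/ΔV. All numbers below (load step, hold-up time, allowed droop) are illustrative assumptions, not GPU specifications:

```python
import math

# Rough decoupling-capacitance sizing: C >= dI * dt / dV.
# All values are illustrative assumptions, not real GPU specifications.
delta_i = 500.0          # load-current step in amperes (assumed)
delta_t = 1e-6           # time the MLCC bank must hold the rail, seconds (assumed)
v_rail = 0.8             # GPU core rail voltage (from the article)
delta_v = 0.03 * v_rail  # allow 3% droop on the rail (assumed)

c_req = delta_i * delta_t / delta_v   # required capacitance in farads
n_0402 = math.ceil(c_req * 1e6 / 47)  # count of hypothetical 47 uF 0402 parts

print(f"required capacitance: {c_req * 1e6:.0f} uF")
print(f"47 uF 0402 parts needed: {n_0402}")
```

Even with these modest assumptions the bank runs to hundreds of parts, which is why both per-part capacitance and mounting density matter.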
[Figure: AI & Servers]
2. Power Supply & VPD (Vertical Power Delivery): the evolution to 48V server power and the implementation of VPD for GPU core supply
Power efficiency is the most critical factor in determining the operating costs of AI data centers.
For stable power delivery in 120kW-class racks, high-efficiency PSUs and high-specification passive components are crucial. In the past, AC was stepped down directly to 12V/48V, but future 120kW racks will rectify AC to 800V high-voltage direct current to minimize transmission losses and distribute it within the rack. Demand for 100V MLCCs will grow for a stable 48V supply, and demand for large-size 1kV–2kV MLCCs is expected to grow as well.
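The motivation for the 800V step can be seen from conduction loss, which scales as P_loss = I²R with I = P_rack/V_bus, so raising the distribution voltage cuts loss quadratically. The path resistance below is an illustrative assumption:

```python
# Conduction loss for a 120 kW rack at different bus voltages.
# P_loss = I^2 * R with I = P_rack / V_bus.
# The 1 milliohm path resistance is an illustrative assumption.
P_RACK = 120_000.0  # 120 kW rack (from the article)
R_PATH = 0.001      # assumed busbar/cable resistance in ohms

losses = {v: (P_RACK / v) ** 2 * R_PATH for v in (48.0, 800.0)}
for v_bus, loss in losses.items():
    print(f"{v_bus:>5.0f} V bus: {P_RACK / v_bus:>6.0f} A, conduction loss {loss:,.1f} W")
```

Going from 48V to 800V reduces the current by a factor of ~16.7 and the conduction loss by a factor of ~278 for the same delivered power.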
GPU load-current dynamics require large core supply currents. VPD (Vertical Power Delivery) is a power-module approach that shortens the power path as much as possible while increasing power density. Power density per unit area is tied to capacitance density, with X7T 0402-inch 22㎌ and X6S 2.5V 47㎌ MLCCs under active evaluation.
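Why shortening the power path matters can be sketched from the inductive droop relation ΔV = L·di/dt. The inductance values and current slew rate below are illustrative assumptions comparing a lateral board path with a vertical (VPD) path:

```python
# Transient voltage droop from power-path inductance: dV = L * di/dt.
# Inductances and slew rate are illustrative assumptions, not measurements.
DI_DT = 1e9          # current slew rate: 1000 A/us (assumed)
L_LATERAL = 1.0e-9   # lateral board delivery path, ~1 nH (assumed)
L_VPD = 0.1e-9       # vertical delivery path under the GPU, ~0.1 nH (assumed)

droop_lateral = L_LATERAL * DI_DT  # volts
droop_vpd = L_VPD * DI_DT          # volts
print(f"lateral path droop: {droop_lateral:.2f} V")
print(f"VPD path droop:     {droop_vpd:.2f} V")
```

On a 0.8V core rail, a droop on the order of the lateral-path figure is catastrophic, while the vertical path keeps the transient an order of magnitude smaller.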
[Figure: 800V System]
[Figure: Embedded MLCC]
3. The Technological Evolution of AI Networks
As AI models grow larger, the speed of data synchronization between GPUs becomes increasingly important. Network trays that connect racks are evolving from simple "data pathways" into key infrastructure that resolves overall system bottlenecks.
The currently dominant 800G networks are entering the 1.6T (terabit) era, and CPO (Co-Packaged Optics) technology is being adopted to enable it. Existing pluggable optical modules consume a great deal of power at high transmission speeds and suffer significant signal loss along the way. To address this, CPO, which mounts optical engines in the same package as the switch ASIC, is establishing itself as the standard for network trays.
With network switch chipsets consuming over 500W of power, power supply and cooling have become critical for network trays, and demand is growing for MLCCs with higher temperature ratings (X5R → X6S, X6S → X7T).
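The rating codes above follow the EIA RS-198 Class 2 scheme, in which each character encodes a temperature limit or a capacitance-change bound. A minimal decoder as a sketch (the tables cover only the codes relevant here):

```python
# Decoder for EIA RS-198 Class 2 MLCC temperature-characteristic codes
# (e.g. X5R, X6S, X7T). Tables cover only the codes used in this article.
MIN_TEMP = {"X": -55}                               # first letter: lower limit, deg C
MAX_TEMP = {"5": 85, "6": 105, "7": 125, "8": 150}  # digit: upper limit, deg C
CAP_SHIFT = {"R": "+/-15%", "S": "+/-22%", "T": "+22/-33%"}  # max capacitance change

def decode(code: str) -> dict:
    """Return the temperature range and capacitance-change bound for a code."""
    return {
        "min_temp_c": MIN_TEMP[code[0]],
        "max_temp_c": MAX_TEMP[code[1]],
        "cap_change": CAP_SHIFT[code[2]],
    }

for c in ("X5R", "X6S", "X7T"):
    print(c, decode(c))
```

The X5R → X6S → X7T progression raises the upper operating temperature from 85°C to 105°C to 125°C, which is what the hotter switch trays demand.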
Please click here for sample requests or any other inquiries.
USA (semai.newsletter@samsung.com)
EU (semcoeurope@samsung.com)
SEA (sempl.automlcc@samsung.com)