How to bridge the energy/bandwidth wall in DRAM-centric AI architectures
Norbert Wehn (RPTU Kaiserslautern, D)
Abstract
Emerging applications such as Deep Neural Networks are data-driven and memory-intensive. Hence, there is a recent shift from compute-centric architectures to memory-centric architectures. Processing-in-Memory (PIM) is a promising new memory-centric compute paradigm. Recently, we have been witnessing a surge in DRAM-PIM publications from both academia and industry. The underlying DRAM-PIM architectures range from modifying the highly optimized DRAM sub-arrays to enable computation, to the straightforward integration of computation units in the DRAM I/O region. In this talk, we give an overview of DRAM-PIM architectures, highlight challenges, present novel architectures, and compare DRAM-PIM with PIM architectures based on emerging FeFET devices.
Curriculum Vitae
Norbert Wehn holds the chair for Microelectronic System Design in the Department of Electrical Engineering and Information Technology at the University of Kaiserslautern. He has more than 450 publications in various fields of microelectronic system design and holds several patents. His special research interests are VLSI architectures for mobile communication, forward error correction techniques, low-power techniques, advanced SoC and memory architectures, post-quantum cryptography, reliability challenges in SoCs, machine learning, IoT, and smart learning environments.