Dresden 2020 – scientific program
The DPG Spring Meeting in Dresden had to be cancelled.
SYNC 1.5: Invited Talk
Monday, March 16, 2020, 11:45–12:15, HSZ 01
Beyond von Neumann systems: Computational memory for efficient AI — •Irem Boybat — IBM Research - Zurich, Switzerland
Highly data-intensive AI applications call for innovations in computing architectures, as the performance of conventional computing systems with separate processing and memory units is limited by data access and transfer. For example, training deep neural network models with millions of tunable weights takes days or even weeks on relatively powerful heterogeneous systems and consumes hundreds of kilowatts of power. In-memory computing is a promising avenue for accelerating AI workloads because computations take place within the memory itself, eliminating the need to move data around. A new class of emerging memory devices - the so-called memristive devices - is gaining significant interest for in-memory acceleration owing to their scalability, non-volatility and fast access times. Arrays of memristive devices can perform computationally expensive operations in place by exploiting their physical attributes. Deep learning training and inference can be realized with high area and energy efficiency using cascaded arrays of memristive devices. Moreover, memristive devices can also serve as the neuronal and synaptic compute primitives for the next generation of neural networks, such as spiking neural networks.
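The key in-place operation the abstract alludes to is a matrix-vector multiplication carried out by the physics of a memristive crossbar: each device stores a conductance (the weight), applied row voltages encode the input, and the column currents sum the products via Ohm's and Kirchhoff's laws. The following is a minimal numerical sketch of that idea, not code from the talk; the function name, array sizes, and the Gaussian model of device variability are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_mvm(G, V, noise_std=0.0):
    """Sketch of a crossbar matrix-vector multiply.

    G: (rows x cols) device conductances, i.e. the stored weights.
    V: (rows,) input voltages applied to the rows.
    Returns the column currents I[j] = sum_i V[i] * G[i, j].
    noise_std: optional Gaussian perturbation of G, a crude stand-in
    for device-to-device variability and conductance drift.
    """
    if noise_std > 0.0:
        G = G + rng.normal(0.0, noise_std, size=G.shape)
    return V @ G  # currents sum along each column (Kirchhoff's current law)

# A 4x3 crossbar: 4 input rows, 3 output columns.
G = rng.uniform(0.0, 1.0, size=(4, 3))   # hypothetical device conductances
V = rng.uniform(-1.0, 1.0, size=4)       # hypothetical input voltages

ideal = crossbar_mvm(G, V)
noisy = crossbar_mvm(G, V, noise_std=0.05)
print("ideal currents:", ideal)
print("max deviation under noise:", np.abs(ideal - noisy).max())
```

The appeal is that the multiply-accumulate happens in constant time in the analog domain, with no weight movement between memory and processor; the noise parameter hints at why training and inference schemes for such hardware must tolerate imprecise weights.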