
Analog Memory-Based Accelerators for Deep Learning

Sidney Tsai, IBM Almaden Research Center

Abstract: 
Crossbar arrays of resistive non-volatile memories (NVMs) offer a novel solution for deep learning tasks that are typically implemented on GPUs [1]. The highly parallel structure of these architectures enables fast and energy-efficient multiply-accumulate computations, which are the workhorse of most deep learning algorithms. More specifically, we are developing analog hardware platforms to accelerate large Fully Connected (FC) Deep Neural Networks (DNNs), where training is performed using the backpropagation algorithm.
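
As a rough illustration (not taken from the talk), the following NumPy sketch shows why multiply-accumulate dominates an FC layer: every output is a sum of input-weight products, and a crossbar evaluates all of them in one parallel analog step. The 784 x 250 layer size is an arbitrary, MNIST-like assumption.

    import numpy as np

    # Illustrative only: an FC layer's forward pass is a vector-matrix
    # multiply, i.e. a grid of multiply-accumulate (MAC) operations.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(784, 250))   # synaptic weight matrix
    x = rng.normal(size=784)          # input activation vector

    # What a digital processor iterates over ...
    y_loop = np.zeros(250)
    for j in range(250):
        for i in range(784):
            y_loop[j] += x[i] * W[i, j]

    # ... and what a resistive crossbar computes in one parallel step.
    y = x @ W
    assert np.allclose(y, y_loop)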
 
In this presentation, we will focus on hardware acceleration of large FC DNNs using phase change memory (PCM) devices. PCM device conductance can be modulated between the fully amorphous, low-conductance state and the fully crystalline, high-conductance state by applying voltage pulses that gradually increase the crystalline volume. This characteristic is crucial for memory-based AI hardware acceleration because synaptic weights can then be encoded in an analog fashion and updated gradually during training. Vector-matrix multiplication can then be performed by applying voltage pulses at one end of a memory crossbar array and accumulating charge at the other end. By designing the analog memory unit cell with one pair of PCM devices encoding the more significant portion of each weight and another pair of memory devices encoding the less significant portion, we achieved classification accuracies equivalent to a full software implementation on the MNIST handwritten digit recognition dataset. The improved accuracy results from a larger dynamic range, more accurate closed-loop tuning of the more significant weights, and better linearity and variation mitigation in the less significant weight updates. We will discuss what this new design means for analog memory device requirements and how it generalizes to other deep learning problems.
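
As a hedged sketch of the unit-cell scheme above: the significance factor F, the conductance ranges, and the function names below are assumptions for illustration, not values or interfaces from the talk. Each weight is represented by a more significant differential conductance pair (Gp, Gm) and a less significant pair (gp, gm), and the vector-matrix product follows from Ohm's law with per-column current summation.

    import numpy as np

    # F = 3.0 and the [0, 1] conductance ranges are illustrative
    # assumptions, not parameters reported in the talk.
    F = 3.0

    def effective_weight(Gp, Gm, gp, gm):
        # Differential encoding: each pair contributes a positive and a
        # negative conductance; the more significant pair is scaled by F.
        return F * (Gp - Gm) + (gp - gm)

    def crossbar_vmm(V, Gp, Gm, gp, gm):
        # Voltage pulses on the rows, Ohm's law per device, Kirchhoff
        # current summation per column: one analog vector-matrix multiply.
        return V @ effective_weight(Gp, Gm, gp, gm)

    rng = np.random.default_rng(1)
    shape = (784, 250)                   # arbitrary layer size
    Gp, Gm, gp, gm = rng.uniform(0.0, 1.0, size=(4, *shape))
    V = rng.normal(size=784)             # input voltage pulses
    I = crossbar_vmm(V, Gp, Gm, gp, gm)  # accumulated column currents
    print(I.shape)                       # -> (250,)

Because the less significant pair contributes only small corrections to each weight, nonlinearity and device-to-device variation in its updates perturb the effective weight far less, which is the intuition behind the accuracy gains described above.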
 
Biography:
HsinYu (Sidney) Tsai received her Ph.D. from the Electrical Engineering and Computer Science department at the Massachusetts Institute of Technology in 2011. Her main research activities in Prof. Henry I. Smith’s group focused on super-resolution optical lithography and imaging.
Since joining IBM as a research staff member in 2011, Sidney has held several roles in lithography research for advanced technologies, including leading a project on sub-30nm-pitch fin patterning using directed self-assembly and managing a group that supports operations of a 200mm research prototyping line. More recently, Sidney has worked on applying Phase Change Memory (PCM)-based devices to Deep Neural Network acceleration, studying aspects ranging from device characteristics and peripheral circuit design to network simulation and system architecture. The work is a collaboration among several IBM Research sites, including the IBM Almaden, TJ Watson, Zurich, and Tokyo research labs.