According to aspects of the present invention, a capacitor can be used in an analog storage cell for analog storage of weights in hardware-accelerated neural networks. The capacitor of the analog storage cell can store a charge that represents the value of a weight. Current sources can be used to update the weight value stored on the capacitor. However, symmetric weight updates are difficult to achieve without large current sources, which reduce scalability. Current-source-based designs can also suffer from device-to-device variation, which further degrades performance.
Therefore, a second capacitor is included that updates the weight without a large current source by transferring charge toward and away from the first capacitor. Because the same second capacitor provides both positive and negative weight updates, the analog storage cell can be reliably manufactured with lower device-to-device variation. As a result, the analog storage cell has improved scalability and reliability while also providing symmetric weight updates. Thus, the performance of the analog storage cell is improved.
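The charge-sharing behavior described above can be sketched numerically. The following Python snippet is a minimal illustrative model, not an implementation from the present disclosure: the capacitance values, supply voltage, and the ideal charge-sharing formula are all assumptions chosen for illustration. A positive update precharges the small transfer capacitor to the supply rail before connecting it to the storage capacitor; a negative update discharges it to ground first. Because the same transfer capacitor serves both polarities, the update step sizes match each other when the stored voltage sits at the midpoint of the supply range.

```python
# Illustrative (hypothetical) model of charge-sharing weight updates using a
# single transfer capacitor. Component values are assumed, not from the source.

C_STORE = 100e-15  # storage capacitor (farads) -- assumed value
C_XFER = 1e-15     # transfer capacitor (farads) -- assumed value
VDD = 1.0          # supply rail (volts) -- assumed value

def update(v_store, positive):
    """Return the storage-cap voltage after one charge-sharing update.

    Connecting two ideal capacitors equalizes their voltages by
    charge conservation:
        v = (C1*v1 + C2*v2) / (C1 + C2)
    """
    # Precharge the transfer cap to VDD for a positive update,
    # or discharge it to ground for a negative update.
    v_xfer = VDD if positive else 0.0
    return (C_STORE * v_store + C_XFER * v_xfer) / (C_STORE + C_XFER)

v = 0.5  # stored weight voltage at the midpoint of the supply range
v_up = update(v, positive=True)
v_dn = update(v, positive=False)
# At v = VDD/2, the positive and negative step magnitudes are equal,
# since the same transfer capacitor sets both step sizes.
print(v_up - v, v - v_dn)
```

In this idealized model the step size is C_XFER/(C_STORE + C_XFER) times the voltage difference between the two capacitors, so a small transfer capacitor yields the fine update granularity desired for weight training.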
Exemplary applications/uses to which the present invention can be applied include, but are not limited to: resistive processing units for analog storage and training of weights for use in matrix operations performed by systems for neural networks, including deep neural networks such as, e.g., convolutional neural networks, recurrent neural networks, or other neural networks.
It is to be understood that the present invention will be described in terms of a given illustrative architecture; however, other architectures, structures, substrate materials and process features and steps may be varied within the scope of the present invention.