To train a neural network that includes the analog storage cell 200, the weight stored by the first capacitor 202 is updated according to an error in the output values. To update the weight, the charge of the first capacitor 202 is modulated according to a desired change in the weight value. For example, an error that dictates a higher weight can be implemented by increasing the charge stored in the first capacitor 202. Similarly, an error indicating that a lower weight is appropriate can be implemented by decreasing the charge of the first capacitor 202. Accordingly, the update circuit 300 can include components for producing a voltage difference that drives current toward or away from the first capacitor 202. Thus, the update circuit 300 can include, e.g., a current source, one or more equipotential drains similar to the equipotential drain 204, and logic for redirecting current to produce the voltage differential.
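The charge-modulation behavior described above can be illustrated with a minimal numerical sketch. The component values, pulse duration, and function names below are illustrative assumptions, not values taken from the disclosed embodiment; the sketch models only the basic relationship ΔV = I·Δt/C for a current source charging or discharging the first capacitor 202.

```python
# Hypothetical sketch: the weight is stored as a capacitor voltage and
# updated by a constant current source. All values are illustrative.

C1 = 1e-12     # first-capacitor value in farads (assumed)
I_SRC = 1e-9   # update current-source magnitude in amperes (assumed)
DT = 1e-6      # duration of one update pulse in seconds (assumed)

def update_weight(v_cap, error):
    """Modulate the stored charge: a positive error charges the capacitor
    (raising the weight); a negative error discharges it (lowering it)."""
    direction = 1.0 if error > 0 else -1.0  # logic redirecting the current
    dv = direction * I_SRC * DT / C1        # dV = I * dt / C
    return v_cap + dv

v = 0.5                      # initial stored weight, in volts
v = update_weight(v, +1.0)   # error dictating a higher weight
v = update_weight(v, -1.0)   # error indicating a lower weight is appropriate
```

Each pulse moves the stored voltage by I·Δt/C, so the granularity of the weight update is set by the current magnitude and pulse width.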
According to aspects of the present invention, the update circuit 300 can include components that provide symmetry between positive weight updates and negative weight updates. For example, the update circuit 300 can include a second capacitor to transfer charge to or from the first capacitor 202. Additional logic, including, e.g., transistors such as FETs, and gate logic circuits, such as, e.g., stochastic pulse circuits, can be implemented to direct charge from the second capacitor to the first capacitor 202 for a positive weight update, or from the first capacitor 202 to the second capacitor for a negative weight update. Thus, the second capacitor can be used to improve the symmetry of the weight updates, improving the performance and reducing the size of the analog storage cell 200.
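The symmetric two-capacitor transfer can likewise be sketched numerically. The capacitor values and the per-pulse charge packet below are illustrative assumptions; the point of the sketch is that, because the same charge packet is steered in either direction, a positive update and a negative update have equal magnitude and the total charge across the two capacitors is conserved.

```python
# Hypothetical sketch of symmetric charge transfer between two capacitors.
# Charges are tracked in coulombs; all values are illustrative assumptions.

DQ = 1e-15  # charge packet moved per update pulse (assumed)

def positive_update(q1, q2):
    """Gate logic steers a packet from the second capacitor to the first,
    raising the weight stored on the first capacitor."""
    return q1 + DQ, q2 - DQ

def negative_update(q1, q2):
    """Gate logic steers a packet from the first capacitor to the second,
    lowering the weight stored on the first capacitor."""
    return q1 - DQ, q2 + DQ

q1, q2 = 0.5e-12, 0.5e-12              # initial charges on the two capacitors
q1_up, q2_up = positive_update(q1, q2) # one positive weight update
q1_dn, q2_dn = negative_update(q1_up, q2_up)  # one negative weight update

# A positive update followed by a negative update restores the original
# weight, and the charge removed from one capacitor appears on the other.
```

This symmetry is the benefit the second capacitor provides: both update directions use the same transfer path, so neither direction is systematically larger than the other.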