The present invention may speed up the computations performed by hardware accelerators. To that end, the present invention involves a speculation of precision to deliver results faster when the precision of the partial product allows it, with no compromise in accuracy. Speculation of precision means that a computation unit reads only a number of least significant bits (LSBs) of the input data, thereby speculating that the ignored most significant bits (MSBs) are 0. The present invention may thus provide parallel processing units with different precision capabilities. The selected computation unit may operate on a lower number of bits, which translates into a smaller memory footprint, more efficient arithmetic units, lower latency, and higher memory bandwidth. The benefit of using a reduced-precision format may lie in the efficiency of the multiply-and-accumulate operations, e.g., in deep learning inference or training. The hardware accelerator may thus enable a competitive inference system with a fast and efficient matrix multiplier.
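The speculation described above can be sketched in software. The following is a minimal, hypothetical model (the function names and the bit widths are illustrative, not part of the disclosed hardware): a reduced-precision unit reads only the LSBs of each operand, and the speculation holds exactly when every ignored MSB of the operand is 0.

```python
def msbs_are_zero(x: int, lsb_count: int) -> bool:
    # Speculation holds for an operand when all bits above the
    # lsb_count least significant bits are 0, i.e. the operand
    # fits entirely in the bits the reduced-precision unit reads.
    return x >> lsb_count == 0

def low_precision_product(a: int, b: int, lsb_count: int) -> int:
    # A reduced-precision unit reads only lsb_count LSBs of each
    # input; the ignored MSBs are speculated to be 0.
    mask = (1 << lsb_count) - 1
    return (a & mask) * (b & mask)
```

When the speculation holds for both operands, the reduced-precision partial product equals the exact product, so the faster result can be delivered with no compromise in accuracy; otherwise it differs and must be discarded.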
According to one embodiment, the input data is received simultaneously at the set of computation units and at the controller. This embodiment causes the computation units to (speculatively) start execution at the same time. In parallel, the controller can decide, or select, which of the computation units can deliver the result fastest when the precision of the partial product allows it, with no compromise in accuracy.
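The controller's selection can be sketched as follows. This is a simplified software model under assumed unit widths of 8, 16, and 32 bits (the widths and the function name are illustrative): while all units start on the same inputs, the controller picks the narrowest unit whose speculation holds, since that unit can deliver its partial product fastest.

```python
def controller_select(a: int, b: int, unit_widths=(8, 16, 32)) -> int:
    # All computation units (speculatively) start at the same time.
    # The controller selects, in parallel, the narrowest unit whose
    # speculation holds for both operands: every bit above the unit's
    # width is 0, so its result matches the full-precision product.
    for width in sorted(unit_widths):
        if a >> width == 0 and b >> width == 0:
            return width
    # No reduced-precision speculation holds; use the widest unit.
    return max(unit_widths)
```

For example, operands that fit in 8 bits are routed to the 8-bit unit, while an operand with a nonzero bit above position 8 forces at least the 16-bit unit.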