The hardware acceleration system 100 may be configured to implement an application such as inference of a trained neural network; for example, the hardware acceleration system 100 may be an FPGA-based neural network accelerator system.
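As a concrete illustration, the following C sketch shows the kind of computation such an inference accelerator might implement: a single fully connected layer followed by a ReLU activation. The layer sizes, weight values, and bias values are purely illustrative assumptions and are not part of the system described above.

```c
/* Minimal sketch of one fully connected layer with ReLU -- the kind
 * of kernel an FPGA inference accelerator might implement.
 * All sizes and parameter values below are illustrative assumptions. */
#include <stdio.h>

#define IN_DIM  4
#define OUT_DIM 2

/* Hypothetical trained parameters (in a real system these would be
 * loaded from the host into accelerator memory). */
static const float weights[OUT_DIM][IN_DIM] = {
    { 0.5f, -0.2f, 0.1f,  0.3f },
    {-0.4f,  0.6f, 0.2f, -0.1f },
};
static const float bias[OUT_DIM] = { 0.1f, -0.05f };

/* Compute out = ReLU(weights * in + bias). */
static void fc_relu(const float in[IN_DIM], float out[OUT_DIM]) {
    for (int o = 0; o < OUT_DIM; o++) {
        float acc = bias[o];
        for (int i = 0; i < IN_DIM; i++)
            acc += weights[o][i] * in[i];
        out[o] = acc > 0.0f ? acc : 0.0f;  /* ReLU activation */
    }
}

int main(void) {
    const float input[IN_DIM] = { 1.0f, 0.5f, -1.0f, 2.0f };
    float output[OUT_DIM];
    fc_relu(input, output);
    for (int o = 0; o < OUT_DIM; o++)
        printf("out[%d] = %f\n", o, output[o]);
    return 0;
}
```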
The host computer 101 and the hardware accelerator 102 are adapted to communicate data, e.g., over a connection such as a PCIe bus or an Ethernet link. In another example, the hardware accelerator 102 may be part of the host computer 101; the hardware accelerator 102 and the host processor 103 may share the same package or the same die. In this case, the communication link between the hardware accelerator 102 and the host processor 103 may be any of the commonly used in-package or on-chip communication buses, such as AXI or Wishbone.

The hardware accelerator 102 may read input data from a global memory and perform the computation. The input data may be received via a network interface as a stream of network phits (the network interface may stream fixed-size data in and out). The outputs may be written back to the global memory and may be sent as a stream of network phits via the network interface.
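To illustrate the streaming interface, the following C sketch chops an arbitrary-length input buffer into fixed-size phits, zero-padding the final partial phit. The 8-byte phit width and the send_phit() transport hook are hypothetical assumptions for illustration only; a real network interface would define its own phit width and transfer mechanism.

```c
/* Minimal sketch: streaming a buffer to the accelerator as fixed-size
 * phits. PHIT_BYTES and send_phit() are hypothetical assumptions. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PHIT_BYTES 8  /* assumed fixed phit width */

/* Hypothetical transport hook; here it just prints the phit in hex. */
static void send_phit(const uint8_t phit[PHIT_BYTES]) {
    for (int i = 0; i < PHIT_BYTES; i++)
        printf("%02x", (unsigned)phit[i]);
    printf("\n");
}

/* Chop an arbitrary-length buffer into fixed-size phits,
 * zero-padding the last phit if the length is not a multiple
 * of PHIT_BYTES. */
static void stream_buffer(const uint8_t *data, size_t len) {
    uint8_t phit[PHIT_BYTES];
    for (size_t off = 0; off < len; off += PHIT_BYTES) {
        size_t chunk = (len - off < PHIT_BYTES) ? len - off : PHIT_BYTES;
        memset(phit, 0, PHIT_BYTES);   /* pad final partial phit */
        memcpy(phit, data + off, chunk);
        send_phit(phit);
    }
}

int main(void) {
    const uint8_t input[] = "example inference input";
    stream_buffer(input, sizeof input);
    return 0;
}
```

The receiving side would perform the inverse operation, accumulating incoming phits into the global memory buffer that the accelerator reads its input data from.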