A local queue 132 may be maintained for each port of the host bus adapter 114, where the local queue 132 maintains the I/O requests that are waiting to be processed by the port. A global queue 134 of I/O requests that are waiting to be processed by any of the ports of the host bus adapter 114 is also maintained.
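As a minimal illustrative sketch (not part of the described embodiments), the per-port local queues 132 and the global queue 134 may be modeled in Python as follows; the class name, the method names, and the drain order (local queue before global queue) are assumptions introduced here for illustration only.

    from collections import deque

    class HostBusAdapter:
        def __init__(self, port_ids):
            # One local queue per port: I/O requests waiting to be processed by that port.
            self.local_queues = {port: deque() for port in port_ids}
            # Global queue: I/O requests that may be processed by any port.
            self.global_queue = deque()

        def enqueue_local(self, port, io_request):
            self.local_queues[port].append(io_request)

        def enqueue_global(self, io_request):
            self.global_queue.append(io_request)

        def next_request(self, port):
            # Drain order (local queue before global queue) is assumed for illustration.
            if self.local_queues[port]:
                return self.local_queues[port].popleft()
            if self.global_queue:
                return self.global_queue.popleft()
            return None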
Therefore, FIG. 1 illustrates certain embodiments in which a machine learning module 106 is used by a load balancing application 104 to determine the optimal allocation of TCBs 122, 124, 126, 128 to ports 116, 118 of the storage controller 102.
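A hypothetical sketch of such an allocation is shown below; the Port fields, the score_port interface standing in for the machine learning module 106, and the feature choice (queue depth and utilization) are illustrative assumptions, not the mechanism of the embodiments.

    from dataclasses import dataclass, field

    @dataclass
    class Port:
        port_id: int
        queue_depth: int = 0
        utilization: float = 0.0
        tcbs: list = field(default_factory=list)

    def allocate_tcb(tcb, ports, score_port):
        # score_port is assumed to wrap the machine learning module: it maps
        # per-port features to a suitability score (higher is better).
        best_port = max(ports, key=lambda p: score_port([p.queue_depth, p.utilization]))
        best_port.tcbs.append(tcb)
        return best_port

    # Example usage with a stand-in scoring function in place of the machine learning module.
    ports = [Port(116, queue_depth=3, utilization=0.7), Port(118, queue_depth=1, utilization=0.2)]
    chosen = allocate_tcb("TCB 122", ports, lambda features: -sum(features))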
FIG. 2 illustrates a block diagram 200 that shows additional elements in the storage controller 102 for which the optimal allocation of resources is determined by the machine learning module 106, in accordance with certain embodiments.
The storage controller 102 is coupled to a plurality of hosts 202, 204 (corresponding to the hosts 108 shown in FIG. 1) and a plurality of storage devices 110, 112. The storage controller 102 has two servers 206, 208, which are referred to as central processor complexes (CPCs). A CPC is also known as a processor complex or an internal server. Both servers 206, 208 share the system workload of the storage controller 102. The servers 206, 208 are redundant, and either server can fail over to the other server if a failure occurs, or for scheduled maintenance or upgrade tasks.
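The failover behavior of the redundant servers may be pictured with the following minimal sketch; the Server class, the dispatch routine, and the healthy flag are hypothetical names introduced for illustration and do not reflect the actual failover mechanism of the embodiments.

    class Server:
        """Stand-in for a central processor complex (CPC)."""
        def __init__(self, name):
            self.name = name
            self.healthy = True

        def process(self, request):
            return f"{self.name} processed {request}"

    def dispatch(request, primary, peer):
        # Fail over to the peer server when the primary is unavailable
        # (e.g., a failure, scheduled maintenance, or an upgrade task).
        target = primary if primary.healthy else peer
        return target.process(request)

    server_206, server_208 = Server("server 206"), Server("server 208")
    server_206.healthy = False                              # simulate a failure or maintenance window
    print(dispatch("I/O request", server_206, server_208))  # handled by server 208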