Referring back to FIG. 1, the computing environment 100 also includes a server pool 139. The server pool 139 comprises at least two servers 130A-D, although any number of servers greater than two may be included. The server pool 139 may be dedicated to one load balancer 120, or may be shared among multiple load balancers 120. As described above with reference to FIG. 4, the servers 130A-D of the server pool 139 receive TCP requests routed to them from clients 110 through the load balancer 120. If a server 130A is at capacity, the server 130A responds to a subsequent TCP request by sending a rejection notification 404 back to the load balancer 120 that sent the TCP request. Thus, the systems and methods of some embodiments described herein enable TCP requests from multiple load balancers 120 to be distributed among a shared server pool 139 in a manner that reduces or optimizes response time, without any of the multiple load balancers 120 having to maintain information regarding the workloads of the servers in the server pool 139.
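The rejection-based routing described above can be sketched in simplified form as follows. This is an illustrative sketch only, not an implementation from the specification: the class names (`Server`, `LoadBalancer`), the capacity model, and the round-robin selection strategy are all assumptions introduced here for clarity. The essential point it demonstrates is that the load balancer keeps no workload state; it simply forwards a request to another server when it receives a rejection notification.

```python
import itertools

# Hypothetical status values standing in for a TCP response and the
# rejection notification 404 described in the specification.
REJECT = "rejection_notification"
ACCEPT = "accepted"

class Server:
    """A pool server that rejects requests once it reaches capacity."""
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.active = 0  # requests currently being handled

    def handle(self, request):
        # A server at capacity answers with a rejection notification
        # rather than queuing the request. (For simplicity, this sketch
        # never releases capacity when a request completes.)
        if self.active >= self.capacity:
            return REJECT
        self.active += 1
        return ACCEPT

class LoadBalancer:
    """Routes requests round-robin; keeps no per-server workload state."""
    def __init__(self, pool):
        self.pool = pool
        self._cycle = itertools.cycle(pool)

    def route(self, request):
        # Try each server at most once; on rejection, forward the request
        # to the next server instead of consulting stored workload data.
        for _ in range(len(self.pool)):
            server = next(self._cycle)
            if server.handle(request) == ACCEPT:
                return server.name
        return None  # every server in the pool is at capacity

pool = [Server("130A", capacity=1), Server("130B", capacity=2)]
lb = LoadBalancer(pool)
placements = [lb.route(f"req{i}") for i in range(4)]
print(placements)  # ['130A', '130B', '130B', None]
```

Because rejection is signaled explicitly by the server, multiple independent load balancers could share the same pool without coordinating or exchanging workload information, which is the property the paragraph above emphasizes.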