Another question is how the AP can learn about EVM degradation at the receiver, since this is not something it can directly observe. Note, however, that the receiver has a direct view of the level of interference suppression. The current channel estimation protocol in, e.g., IEEE 802.11ac allows each receiver to estimate the channel from each transmitted stream to its receive antennas, including the channels corresponding to streams intended for other users. If (2) holds, the estimated channels for these “other” streams should be zero. If the precoding is no longer perfectly orthogonal to the actual channel, the receiver can explicitly estimate the values of HiQj, i.e., the residual interference seen by user i from the stream precoded for user j. If each receiver reports these values to the transmitter, the AP can determine the right time to sound based on an accurate picture of how the precoding is performing. Note that rate adaptation alone may not provide such a picture, since even the performance under “ideal” precoding varies over time; simply observing a degradation in performance is not necessarily an indication that the precoding is no longer adequate.

The feedback from the receiver could take a number of different forms. We might consider full channel information for HiQj. This information could be coarsely quantized, since the values are expected to be small. It could be provided on only a subset of the tones, since we expect similar behavior on adjacent tones. Or it could simply consist of a binary indication that some threshold has been exceeded. Each of these would give the AP useful guidance in deciding when to refresh its channel information. This avoids sounding either too often or too rarely, thereby minimizing protocol overhead and improving throughput.
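The mechanism above can be sketched numerically. The toy model below (all parameters are illustrative assumptions, not values from this document) builds a zero-forcing precoder from a sounded channel, measures the residual cross-stream leakage HiQj at the receivers, and derives the simplest form of feedback, a binary indication that a leakage threshold has been exceeded, once the true channel has aged away from the sounded one:

```python
import numpy as np

rng = np.random.default_rng(0)

def zf_precoder(H):
    # Zero-forcing precoder: right pseudo-inverse of the channel,
    # with each column normalized to unit transmit power.
    Q = np.linalg.pinv(H)
    return Q / np.linalg.norm(Q, axis=0, keepdims=True)

def residual_interference(H_true, Q):
    # Leakage power |HiQj|^2 for i != j: interference at user i
    # from the stream precoded for user j.
    P = np.abs(H_true @ Q) ** 2
    np.fill_diagonal(P, 0.0)
    return P

# Hypothetical setup: 4-antenna AP serving 2 single-antenna users.
n_users, n_tx = 2, 4
H_sounded = (rng.standard_normal((n_users, n_tx))
             + 1j * rng.standard_normal((n_users, n_tx))) / np.sqrt(2)
Q = zf_precoder(H_sounded)

# Right after sounding, the precoder matches the channel and
# the cross-stream leakage is essentially zero.
leak_fresh = residual_interference(H_sounded, Q)

# Later the channel has aged; model this as the sounded channel
# plus a perturbation (0.3 is an arbitrary aging level).
H_aged = H_sounded + 0.3 * (rng.standard_normal((n_users, n_tx))
                            + 1j * rng.standard_normal((n_users, n_tx))) / np.sqrt(2)
leak_aged = residual_interference(H_aged, Q)

# Binary feedback: each receiver reports only whether its worst
# leakage crosses a threshold (value chosen for illustration).
THRESHOLD = 1e-3
needs_resound = bool(leak_aged.max() > THRESHOLD)
```

In this sketch the AP would trigger a new sounding exchange only when some receiver asserts `needs_resound`, rather than on a fixed timer; the quantized or per-tone-subset feedback variants would replace the single bit with a coarse report of `leak_aged`.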