In scenarios with multiple sound sources, such as when a group of speakers is engaged in conversation around a table, the location of the active sound source relative to the microphone array changes over time. In addition, more than one speaker may talk at a given time, producing a significant amount of simultaneous speech arriving from different directions relative to the array. Furthermore, more than one sound source may lie in the same general direction relative to the array, in which case the sources cannot be discriminated using direction-of-arrival techniques alone, such as microphone array beamforming. In such a complex environment, effective acquisition of target sound sources requires beamforming simultaneously in multiple directions in the reception space around the microphone array in order to execute the aforementioned data-adaptive technique. This in turn requires fast, accurate sound source localization and robust beamforming techniques to mitigate the deleterious effects listed above. Even with an ideal implementation, these techniques cannot discriminate between sound sources that lie in the same direction relative to the array, and real-world implementations still fall far short of the ideal.
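To make the multi-direction beamforming concrete, the following is a minimal sketch of forming several delay-and-sum beams from one multichannel frame. The function names, the array geometry, and the integer-sample delay approximation are illustrative assumptions, not taken from the original text; a practical system would use fractional-delay filters or an adaptive (data-dependent) beamformer.

```python
import numpy as np

def delay_and_sum(frames, mic_positions, direction, fs, c=343.0):
    """Time-align and average microphone signals for one look direction.

    frames:        (num_mics, num_samples) synchronized mic signals.
    mic_positions: (num_mics, 3) mic coordinates in meters.
    direction:     unit vector pointing from the array toward the source.
    fs:            sampling rate in Hz; c: speed of sound in m/s.
    """
    num_mics, num_samples = frames.shape
    # Per-mic delay in samples for a far-field plane wave from `direction`:
    # mics closer to the source receive the wavefront earlier and must be
    # delayed more so that all channels line up before averaging.
    delays = (mic_positions @ np.asarray(direction, float)) / c * fs
    delays -= delays.min()  # shift so all delays are non-negative
    out = np.zeros(num_samples)
    for m in range(num_mics):
        # Integer-sample delay only; a real system would interpolate.
        d = int(round(delays[m]))
        out[d:] += frames[m, :num_samples - d]
    return out / num_mics

def multi_beam(frames, mic_positions, directions, fs):
    # One beam per hypothesized speaker direction, all computed from the
    # same frame, i.e. simultaneous beamforming in multiple directions.
    return [delay_and_sum(frames, mic_positions, d, fs) for d in directions]

if __name__ == "__main__":
    fs = 16000
    # Hypothetical 4-mic linear array along the x-axis, 5 cm spacing.
    mics = np.array([[0.05 * i, 0.0, 0.0] for i in range(4)])
    frames = np.random.randn(4, fs)  # stand-in for one second of audio
    # Beams toward 0, 45, and 90 degrees in the horizontal plane.
    dirs = [np.array([np.cos(a), np.sin(a), 0.0])
            for a in np.radians([0.0, 45.0, 90.0])]
    beams = multi_beam(frames, mics, dirs, fs)
```

Note that two beams steered toward the same direction apply identical delays and therefore produce identical outputs, which mirrors the limitation described above: sources lying in the same direction relative to the array cannot be separated by steering alone.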