It should be noted that the assignment of sub-channels to features may differ between sensors. For example, one feature of the observed subject may be well detected by a first sensor but poorly detected by a second sensor. Accordingly, the quality and importance of that feature may differ between the two sensors. The first sensor may therefore assign a robust sub-channel for transmission of that feature, whereas the second sensor may assign a less robust sub-channel for transmission of the same feature. Each sensor may transmit a respective control message or header to the receiver to inform the receiver of the placement of the feature on the different sub-channels.
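A minimal sketch of this per-sensor assignment, in Python, is given below. It assumes a simple ranking scheme in which each sensor maps its best-detected (most important) features to its most robust sub-channels and then reports the resulting mapping to the receiver in a control header; the feature names, quality scores, and header layout are illustrative assumptions, not details taken from the description above.

    # Hypothetical sketch: two sensors map the same features onto sub-channels of
    # differing robustness according to how well each sensor detects each feature,
    # then build a small control header describing the mapping for the receiver.
    from dataclasses import dataclass

    @dataclass
    class SubChannel:
        index: int
        robustness: float  # higher = more robust (e.g. lower code rate, more power)

    def assign_sub_channels(feature_quality: dict[str, float],
                            sub_channels: list[SubChannel]) -> dict[str, int]:
        """Map the best-detected features to the most robust sub-channels."""
        ranked_features = sorted(feature_quality, key=feature_quality.get, reverse=True)
        ranked_channels = sorted(sub_channels, key=lambda c: c.robustness, reverse=True)
        return {f: ch.index for f, ch in zip(ranked_features, ranked_channels)}

    def build_control_header(assignment: dict[str, int]) -> list[tuple[str, int]]:
        """Control message sent to the receiver: (feature, sub-channel index) pairs."""
        return sorted(assignment.items(), key=lambda kv: kv[1])

    channels = [SubChannel(0, robustness=0.9),
                SubChannel(1, robustness=0.5),
                SubChannel(2, robustness=0.2)]

    # Sensor 1 detects the "edge" feature well; sensor 2 detects it poorly.
    sensor1 = assign_sub_channels({"edge": 0.95, "texture": 0.6, "motion": 0.3}, channels)
    sensor2 = assign_sub_channels({"edge": 0.20, "texture": 0.7, "motion": 0.9}, channels)

    print(build_control_header(sensor1))  # "edge" lands on the most robust sub-channel
    print(build_control_header(sensor2))  # "edge" lands on a less robust sub-channel

Because the two sensors report different mappings, the receiver relies on each sensor's control header, rather than on a fixed rule, to locate a given feature on the sub-channels.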
The above description discloses a machine-learning based approach for designing an encoder DNN and decoder DNN that accounts for the effects of the channel and does not require knowledge of the raw information. The encoder and decoder are both probabilistic, meaning that they encode/decode probability distributions rather than any particular sample of the raw information. The information representation scheme and transmission scheme are selected based on features extracted from the raw information, where the features represent probability distributions. For example, the features may represent Gaussian distributions (or Bernoulli distributions). The transmitted features may be quantized expectation values representing the distributions, and the transmission schemes used for the respective features may be L1 configurations whose corresponding noise variance values match the variance values of those features.
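The following Python sketch illustrates this selection step under stated assumptions: each feature is modeled as a Gaussian whose mean (expectation value) is quantized for transmission, and the configuration chosen for a feature is the one whose associated noise variance is closest to that feature's variance. The term "L1 configuration" is assumed here to denote a physical-layer configuration; the configuration table, quantizer step, and the trivial stand-in for the encoder DNN are illustrative only.

    # Minimal sketch, assuming Gaussian features: transmit a quantized mean per
    # feature and pick the L1 configuration whose noise variance best matches
    # the feature's variance. All numeric values are hypothetical.
    import numpy as np

    # Hypothetical table of L1 configurations: config id -> noise variance it is designed for.
    L1_CONFIGS = {0: 0.01, 1: 0.05, 2: 0.25, 3: 1.0}

    def quantize(x: float, step: float = 0.1) -> float:
        """Uniform quantization of an expectation value before transmission."""
        return round(x / step) * step

    def select_l1_config(feature_variance: float) -> int:
        """Choose the config whose designed noise variance best matches the feature variance."""
        return min(L1_CONFIGS, key=lambda cfg: abs(L1_CONFIGS[cfg] - feature_variance))

    def encode(raw: np.ndarray) -> list[dict]:
        """Probabilistic encoder stand-in: each feature is a Gaussian (mean, variance)."""
        # In the described approach, the means and variances would be produced by an
        # encoder DNN; here they are derived trivially from the raw samples for illustration.
        means = raw.mean(axis=1)
        variances = raw.var(axis=1)
        return [
            {"expectation": quantize(float(mu)), "l1_config": select_l1_config(float(var))}
            for mu, var in zip(means, variances)
        ]

    raw_information = np.random.default_rng(0).normal(size=(4, 128))  # 4 features' worth of samples
    for i, feat in enumerate(encode(raw_information)):
        print(f"feature {i}: send {feat['expectation']:+.2f} on L1 config {feat['l1_config']}")

In this sketch only the quantized expectation values and the chosen configuration identifiers are transmitted, which reflects the idea that the encoder conveys distributions (via their parameters) rather than any particular sample of the raw information.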