
Methods and apparatuses for feature-driven machine-to-machine communications

Patent Number
US10785681B1
Publication Date
2020-09-22
申請(qǐng)人
Yiqun Ge; Wuxian Shi; Wen Tong; Qifan Zhang (Shenzhen, CN)
發(fā)明人
Yiqun Ge; Wuxian Shi; Wen Tong; Qifan Zhang
IPC Classifications
H04L29/06; H04W28/06; G06K9/62; H04W4/70; G06N3/04; G06N3/08; H04W84/04
技術(shù)領(lǐng)域
DNN, BS, encoder, decoder, sensor, feature
Region: Ottawa

Abstract

Methods and apparatuses for feature-driven machine-to-machine communications are described. At a feature encoder, features are extracted from sensed raw information, compressing the raw information by a compression ratio. The feature encoder implements a probabilistic encoder to generate the features, each feature providing information about a respective probability distribution, each of which represents one or more aspects of the observed subject. The probabilistic encoder is designed to provide a compression ratio that satisfies a predetermined physical channel capacity limit for a transmission channel. The features are transmitted over the transmission channel.
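
To make the abstract concrete, the following minimal Python sketch (not taken from the patent; the sizes, random stand-in weights, and the 64:1 ratio are all illustrative assumptions) shows a probabilistic encoder mapping a raw sensor frame to per-feature Gaussian parameters, with the feature dimension fixed so the compressed representation stays within an assumed channel capacity budget:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sizes: a raw sensor frame of 1024 samples compressed to
    # 16 features, a 64:1 compression ratio assumed here to satisfy the
    # channel capacity budget of the transmission channel.
    RAW_DIM, FEATURE_DIM = 1024, 16

    # Random stand-ins for the learned weights of a trained encoder DNN.
    W_mu = rng.normal(scale=RAW_DIM ** -0.5, size=(FEATURE_DIM, RAW_DIM))
    W_logvar = rng.normal(scale=RAW_DIM ** -0.5, size=(FEATURE_DIM, RAW_DIM))

    def probabilistic_encode(x):
        """Map raw samples x to per-feature Gaussian parameters (mu, var);
        each feature describes a distribution over one aspect of the
        subject rather than reproducing the raw samples themselves."""
        mu = W_mu @ x
        var = np.exp(W_logvar @ x)  # log-variance head keeps var positive
        return mu, var

    x = rng.normal(size=RAW_DIM)       # sensed raw information
    mu, var = probabilistic_encode(x)  # the features placed on the channel
    print(mu.shape, var.shape)         # (16,) (16,)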

Description

A probability distribution defines a tolerable range of samples. A slight change in the observed subject may cause a change in the raw information observed by a sensor, but may still fall within the probability distribution. For example, the probability distribution may be common information shared between an encoder and a decoder. If samples x1, x2 and x3 fall within the probability distribution defined by the common information, the encoder may determine that there is no change to the probability distribution and thus no feature needs to be encoded and transmitted. On the other hand, if samples x4 and x5 fall outside of the probability distribution, the encoder encodes these samples for transmission. The encoded features may be an update of the distribution (e.g., a new expectation value and new variance, calculated based on the samples x4 and x5) and the decoder may use this information to update the probability distribution.
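
A minimal sketch of this selective-update behavior, assuming a one-dimensional Gaussian as the common information and a hypothetical outlier threshold k (neither the class name nor the threshold value comes from the patent):

    import numpy as np

    class CommonInfoEncoder:
        """Holds the shared Gaussian (common information) and emits an
        encoded feature only when samples fall outside the distribution."""

        def __init__(self, mean: float, var: float, k: float = 3.0):
            self.mean, self.var, self.k = mean, var, k

        def observe(self, samples: np.ndarray):
            """Return None when all samples fit the shared distribution
            (no feature encoded), else a (mean, var) update to transmit."""
            dev = np.abs(samples - self.mean) / np.sqrt(self.var)
            outliers = samples[dev > self.k]  # e.g. x4, x5 in the text
            if outliers.size == 0:            # e.g. x1, x2, x3
                return None
            # New expectation and variance calculated from the outlying samples.
            new_mean = float(outliers.mean())
            new_var = float(outliers.var()) or self.var  # guard against var == 0
            self.mean, self.var = new_mean, new_var
            return new_mean, new_var

    enc = CommonInfoEncoder(mean=0.0, var=1.0)
    print(enc.observe(np.array([0.1, -0.4, 0.8])))  # None: nothing transmitted
    print(enc.observe(np.array([5.2, 6.1])))        # approx (5.65, 0.2025) sent

The decoder, holding the same common information, would apply the identical update upon receiving the (mean, variance) pair, keeping both sides synchronized without any raw samples being transmitted.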

Using common information in this manner may enable transmission of information that is more robust (e.g., against a noisy and hostile channel) than transmitting every sample. The Shannon capacity limit theory assumes that any two data blocks, and even the individual bits within one data block, are independently distributed. Therefore, the Shannon capacity limit does not take into account the possibility of structural and/or logical relevance among the information (e.g., correlation of information along the time axis) or among multiple encoders related to the same information source. In the examples discussed herein, channel efficiency is improved by selectively transmitting some features and not transmitting others.

權(quán)利要求

1