As described above, in some embodiments coarsely quantized signals are needed. To accomplish this, the bottleneck layer (which, after training, drives the haptic actuators) has units with coarsely quantized outputs, e.g., binary or ternary. In other embodiments, the hidden layer uses a floating point representation, with the output of that layer appropriately transformed to generate haptic gestures that are distinguishable by the user.
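The coarse quantization of bottleneck outputs described above can be sketched as follows. This is a minimal illustration only; the quantization thresholds, the function name, and the use of NumPy are assumptions, not part of the embodiment.

```python
import numpy as np

def quantize_bottleneck(activations, levels=2):
    """Coarsely quantize bottleneck-layer activations.

    levels=2 yields binary outputs in {0, 1}; levels=3 yields
    ternary outputs in {-1, 0, 1}.  Thresholds (0 and +/-0.5)
    are illustrative assumptions.
    """
    if levels == 2:
        # Binary: sign of the activation
        return (activations > 0).astype(np.int8)
    if levels == 3:
        # Ternary: dead zone around zero, +/-1 outside it
        out = np.zeros_like(activations, dtype=np.int8)
        out[activations > 0.5] = 1
        out[activations < -0.5] = -1
        return out
    raise ValueError("only binary (2) or ternary (3) supported")
```

A floating-point embodiment would instead pass the raw activations through and apply a separate transform before driving the actuators.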
In one embodiment, the machine learning circuit 242 may transmit the haptic illusion signals 202 to the receiving device 268 if a likelihood that the haptic illusion signals 202 correspond to the signals 216, 256 exceeds a threshold. The likelihood may be indicative of a probability that the features 408 have a particular Boolean property, or of an estimated value of a scalar property. As part of the training of the machine learning model 242, the process may form a training set of features 408, touch signatures, and speech subcomponents by identifying a positive training set of features that have been determined to have the property in question and, in some embodiments, forming a negative training set of features that lack the property in question. In one embodiment, the machine learning training applies dimensionality reduction (e.g., via linear discriminant analysis (LDA), principal component analysis (PCA), or the like) to reduce the amount of data in the features 408 to a smaller, more representative set of data.
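The likelihood thresholding and PCA-based dimensionality reduction described above can be sketched as follows. The function names, the default threshold, and the SVD-based PCA implementation are illustrative assumptions; an actual embodiment might use a library routine or LDA instead.

```python
import numpy as np

def reduce_features(features, n_components=2):
    """Project feature vectors onto their top principal components.

    PCA via SVD of the mean-centered data: rows of vt are the
    principal directions, so projecting onto the first
    n_components of them yields the reduced representation.
    """
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

def should_transmit(likelihood, threshold=0.8):
    """Transmit the haptic illusion signals only when the match
    likelihood exceeds the threshold (threshold value assumed)."""
    return likelihood > threshold
```

For example, a set of feature vectors of dimension 408 could be reduced to a handful of principal components before the likelihood is estimated and compared against the threshold.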