In one embodiment, the processor circuit 232 splits the received signals 216 into speech subcomponents by phoneme decomposition or by computing mel frequency cepstral coefficients (MFCC) that represent the instantaneous power in frequency bands of the received speech signals. For example, the processor circuit 232 may identify phonemes from the signals 216 by computing the MFCC. The extracted MFCC may then be used as input to a machine learning classifier. The machine learning circuit 242, in the form of a feed-forward neural network (FFNN), may be trained by a backpropagation procedure to identify the phonemes. When the speech subcomponents are frequencies of the speech signals, the instantaneous power in each frequency band is determined using the mel frequency cepstral coefficients.
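By way of illustration only, the following Python sketch shows one way the MFCC extraction and phoneme classification described above could be realized. The use of the librosa and scikit-learn libraries, the network sizes, and the phoneme-labeled training corpus are assumptions made for the sketch and are not specified by this description.

```python
# Illustrative sketch only: library choices, network sizes, and the
# phoneme-labeled training data are assumptions, not part of the description.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def extract_mfcc_frames(signal: np.ndarray, sr: int, n_mfcc: int = 13) -> np.ndarray:
    """Return one MFCC feature vector per analysis frame.

    Each column of librosa's output is one frame; transposing yields a
    (num_frames, n_mfcc) matrix suitable as classifier input.
    """
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T

# A feed-forward neural network trained by backpropagation, standing in
# for the machine learning circuit 242. train_X (MFCC frames) and train_y
# (phoneme labels) would come from a labeled speech corpus (hypothetical).
classifier = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500)
# classifier.fit(train_X, train_y)
# phonemes = classifier.predict(extract_mfcc_frames(received_signal, sr=16000))
```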
The processor circuit 232 may map the speech subcomponents to haptic symbols of a haptic symbol set. The haptic symbol set may correspond to words of a generic social touch lexicon. The processor circuit 232 may convert the haptic symbols into the haptic illusion signals 202 or actuator signals. In one embodiment, the processor circuit 232 converts the haptic symbols by using a two-dimensional frequency mapping of first and second formants of the speech subcomponents to determine preferred locations of the cutaneous actuators 208 on the body of a receiving user. In other embodiments, other pairs of formants may be used, e.g., the first and the third formants. In still other embodiments, the differences between formants define a point on a two-dimensional map, e.g., f2−f1 and f3−f2 are mapped to a physical location of different actuators 208, where f1, f2, and f3 represent the frequencies of the first, second, and third formants, respectively.
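A minimal Python sketch of the two-dimensional formant mapping follows. The 4x4 actuator grid and the formant frequency ranges are hypothetical constants chosen for illustration, not values from this description; the difference-based variant would substitute f2−f1 and f3−f2 for the two inputs before scaling.

```python
# Illustrative sketch: grid dimensions and formant ranges are assumed,
# not specified by the description.
import numpy as np

F1_RANGE = (200.0, 1000.0)   # typical first-formant range in Hz (assumption)
F2_RANGE = (600.0, 3000.0)   # typical second-formant range in Hz (assumption)
GRID_ROWS, GRID_COLS = 4, 4  # hypothetical layout of cutaneous actuators 208

def _scale(value: float, lo: float, hi: float, n: int) -> int:
    """Linearly map a frequency onto one of n grid indices, clamped to range."""
    frac = float(np.clip((value - lo) / (hi - lo), 0.0, 1.0))
    return min(int(frac * n), n - 1)

def formants_to_actuator(f1: float, f2: float) -> tuple[int, int]:
    """Map the first two formant frequencies to a (row, col) actuator index."""
    return _scale(f1, *F1_RANGE, GRID_ROWS), _scale(f2, *F2_RANGE, GRID_COLS)

# Different vowels land on different actuators: /i/ (f1 ~ 270 Hz,
# f2 ~ 2300 Hz) versus /a/ (f1 ~ 850 Hz, f2 ~ 1200 Hz).
print(formants_to_actuator(270.0, 2300.0))  # (0, 2)
print(formants_to_actuator(850.0, 1200.0))  # (3, 1)
```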