In one or more embodiments, consonants can be mapped separately, e.g., by using a one-to-one mapping between specific consonants and actuators at specific locations on the receiving user's body.
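By way of a non-limiting illustration, one such one-to-one mapping is sketched below in Python; the consonants chosen and the actuator locations named are assumed examples, not a prescribed layout.

```python
# Hypothetical sketch of a one-to-one consonant-to-actuator mapping.
# The consonants and body locations are assumptions for illustration only.

CONSONANT_TO_ACTUATOR = {
    "p": "left_wrist",
    "b": "right_wrist",
    "t": "left_forearm",
    "d": "right_forearm",
    "k": "left_upper_arm",
    "g": "right_upper_arm",
}

def actuator_for(consonant: str) -> str | None:
    """Return the actuator location assigned to a recognized consonant, if any."""
    return CONSONANT_TO_ACTUATOR.get(consonant)
```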
In other embodiments, features of speech articulation, rather than features of the speech signal itself, are encoded to operate certain actuators. The features of speech articulation can include, for example, the location of occlusion during oral occlusive sounds: lips ([p], [b]), tongue blade ([t], [d]), tongue body ([k], [g]), or glottis ([ʔ]).
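By way of a further non-limiting illustration, the sketch below encodes the place of occlusion rather than the consonant identity, so that all consonants produced at the same place drive a single actuator. The groupings follow the examples above, while the actuator indices are hypothetical.

```python
# Hypothetical sketch: encode the articulation feature (place of occlusion)
# instead of the consonant itself. Actuator indices are assumptions.

PLACE_OF_OCCLUSION = {
    "lips": {"p", "b"},
    "tongue_blade": {"t", "d"},
    "tongue_body": {"k", "g"},
    "glottis": {"ʔ"},
}

PLACE_TO_ACTUATOR = {
    "lips": 0,
    "tongue_blade": 1,
    "tongue_body": 2,
    "glottis": 3,
}

def actuator_for_place(consonant: str) -> int | None:
    """Return the actuator index shared by all consonants with the same place of occlusion."""
    for place, consonants in PLACE_OF_OCCLUSION.items():
        if consonant in consonants:
            return PLACE_TO_ACTUATOR[place]
    return None
```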
In one embodiment, the machine learning circuit 242 may determine the speech subcomponents based on the extracted features 544 and generate the haptic symbols corresponding to the determined speech subcomponents.
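By way of a non-limiting illustration, the sketch below stands in for this step: a placeholder linear classifier maps an assumed 16-dimensional feature vector (standing in for the extracted features 544) to a speech subcomponent and looks up a corresponding haptic symbol. The weights, feature dimension, subcomponent labels, and symbol patterns are all hypothetical, not the trained model of the machine learning circuit 242.

```python
# Hypothetical stand-in for the classification of extracted features into
# speech subcomponents and the lookup of a haptic symbol (an actuator
# activation pattern) for each subcomponent.

import numpy as np

SUBCOMPONENTS = ["lips", "tongue_blade", "tongue_body", "glottis"]

# Assumed haptic symbols: which of four actuators to drive per subcomponent.
HAPTIC_SYMBOLS = {
    "lips": (1, 0, 0, 0),
    "tongue_blade": (0, 1, 0, 0),
    "tongue_body": (0, 0, 1, 0),
    "glottis": (0, 0, 0, 1),
}

rng = np.random.default_rng(0)
WEIGHTS = rng.normal(size=(16, len(SUBCOMPONENTS)))  # placeholder linear model

def features_to_haptic_symbol(features: np.ndarray) -> tuple[int, ...]:
    """Classify a 16-dimensional feature vector and return its haptic symbol."""
    scores = features @ WEIGHTS
    label = SUBCOMPONENTS[int(np.argmax(scores))]
    return HAPTIC_SYMBOLS[label]

# Example usage with a random vector standing in for extracted features.
print(features_to_haptic_symbol(rng.normal(size=16)))
```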
Additional details regarding converting audio and speech to haptic signals are provided below with reference to 
Example Block Diagram for Machine Learning