FIG. 13A is a block diagram illustrating an unsupervised learning module used to train a neural network in compressing an audio input to a sequence of haptic cues, according to an embodiment.
FIG. 13B is a block diagram illustrating the use of the neural network after it has been trained, according to an embodiment.
FIG. 13C illustrates a conversion using the neural network of a speech signal into a set of haptic outputs for a set of cutaneous actuators, according to an embodiment.
FIG. 13D is a flowchart illustrating a method for training a machine learning circuit to generate a set of compressed haptic cues from an acoustic signal, according to an embodiment.
FIG. 14A is a block diagram illustrating the components of a system for converting the syllables of input words to haptic illusion signals to activate cutaneous actuators of a haptic device, according to an embodiment.
FIG. 14B illustrates an example sequence of haptic syllable outputs for an example input word, according to an embodiment.
FIG. 14C is a block diagram illustrating the components of a consonant-vowel pair (Abugida) haptic signal converter for converting consonant-vowel pairs of input words to actuator signals, according to an embodiment.
FIG. 14D illustrates an example sequence of haptic C-V pair outputs for an example input word, according to an embodiment.