FIG. 13C illustrates a conversion, using the neural network 1310, of a speech signal into a set of haptic outputs for a set of cutaneous actuators 1330, according to an embodiment. The speech signal 1326 may be a sampled waveform, such as the one illustrated. The speech signal 1326 is converted 1328 into a sequence of haptic cues using the process described above. These haptic cues are subsequently transmitted to a haptic device and cause a set of cutaneous actuators 1330A-N on the haptic device to generate a set of haptic outputs, as shown. After sensing these haptic outputs, a human may be able to determine information about the original speech signal 1326. This may allow a human to understand some aspects of the original speech signal 1326 without having to hear the speech signal 1326 or perceive it through any other auditory means.
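The embodiment does not specify an implementation of this conversion; the following is a minimal sketch of how a frame-by-frame speech-to-haptic-cue conversion could look. The model architecture, frame size, actuator count, and the names `HapticEncoder` and `speech_to_haptic_cues` are all assumptions introduced for illustration, not elements of the disclosed neural network 1310.

```python
import torch
import torch.nn as nn

NUM_ACTUATORS = 8   # hypothetical count of cutaneous actuators 1330A-N
FRAME_SIZE = 256    # assumed number of speech samples per analysis frame

class HapticEncoder(nn.Module):
    """Hypothetical stand-in for neural network 1310: maps one frame of
    speech samples to one haptic cue (a drive level per actuator)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FRAME_SIZE, 64),
            nn.ReLU(),
            nn.Linear(64, NUM_ACTUATORS),
            nn.Sigmoid(),  # constrain actuator drive levels to [0, 1]
        )

    def forward(self, frames):
        return self.net(frames)

def speech_to_haptic_cues(waveform, model):
    """Split the sampled waveform into fixed-size frames and convert each
    frame into one haptic cue, yielding a (num_frames, NUM_ACTUATORS)
    sequence for transmission to the haptic device."""
    num_frames = waveform.numel() // FRAME_SIZE
    frames = waveform[: num_frames * FRAME_SIZE].reshape(num_frames, FRAME_SIZE)
    with torch.no_grad():
        return model(frames)

# Example: a one-second placeholder signal at an assumed 8 kHz sample rate.
model = HapticEncoder()
speech = torch.randn(8000)  # placeholder for speech signal 1326
cues = speech_to_haptic_cues(speech, model)
print(cues.shape)  # torch.Size([31, 8]): one cue per frame, per actuator
```

Each row of the resulting cue sequence would drive one actuation interval across the actuators 1330A-N; the actual cue encoding used by the haptic device may differ.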
FIG. 13D is a flowchart illustrating a method for training a machine learning circuit to generate a set of compressed haptic cues from an acoustic signal, according to an embodiment. The process described in the flowchart may be performed by the unsupervised learning module 1300. Although a particular arrangement of steps is shown here, in other embodiments the process may be arranged differently.
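The flowchart itself does not prescribe a training algorithm. One plausible reading of unsupervised training for compressed haptic cues is an autoencoder, where the bottleneck activation serves as the compressed cue and the reconstruction loss requires no labels. The sketch below illustrates that reading; the cue dimensionality, network shapes, and training hyperparameters are assumptions, not the disclosed operation of the unsupervised learning module 1300.

```python
import torch
import torch.nn as nn

FRAME_SIZE = 256   # samples per acoustic frame (assumed, as above)
CUE_SIZE = 8       # assumed dimensionality of the compressed haptic cue

# Encoder produces the compressed haptic cue; decoder attempts to
# reconstruct the acoustic frame from that cue.
encoder = nn.Sequential(nn.Linear(FRAME_SIZE, 64), nn.ReLU(),
                        nn.Linear(64, CUE_SIZE), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(CUE_SIZE, 64), nn.ReLU(),
                        nn.Linear(64, FRAME_SIZE))

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder batch of acoustic frames standing in for real training data.
frames = torch.randn(32, FRAME_SIZE)

for step in range(100):
    cues = encoder(frames)           # compressed haptic cues
    reconstruction = decoder(cues)   # recover the frames from the cues
    loss = loss_fn(reconstruction, frames)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

After training under this assumed objective, only the encoder would be retained for the conversion 1328, and the low-dimensional bottleneck would enforce the compression of the haptic cues relative to the acoustic input.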