Attention is now directed towards embodiments of user interfaces (“UI”) and associated processes that are implemented on an electronic device, such as portable multifunction device 100, device 300, or device 500.
FIGS. 6A-6L illustrate exemplary techniques for processing user requests, in accordance with some embodiments. The techniques shown in these figures are used to illustrate the processes described below, including the processes in FIGS. 7A-7C.
Generally, FIGS. 6A-6L illustrate a variety of scenarios in which a device, such as a smart speaker, receives user input and performs voice identification based on the user input. If a user is identified based on the user input, the device processes one or more requests of the user input based on account data associated with the identified user. Exemplary techniques for voice identification and for configuring devices to perform voice identification are discussed in:
- “Personalized Hey Siri.” Apple Machine Learning Journal, vol. 1, no. 9, April 2018; and
- E. Marchi, S. Shum, K. Hwang, S. Kajarekar, S. Sigtia, H. Richards, R. Haynes, Y. Kim, and J. Bridle. “Generalised Discriminative Transform via Curriculum Learning for Speaker Recognition.” Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), April 2018.
The contents of these publications are hereby incorporated by reference in their entireties.
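By way of illustration only, the following Swift sketch models the general flow described above: a device receives a spoken request, attempts voice identification against registered users, and, when a user is identified, processes the request using account data associated with that user. The types and names (UserAccount, SpeechInput, SpeakerIdentifier, RequestHandler) are hypothetical and do not appear in the embodiments; the sketch simply assumes an identifier that returns a matching account only when identification succeeds with sufficient confidence.

```swift
import Foundation

// Hypothetical types for illustration; not drawn from the specification.
struct UserAccount {
    let identifier: String
    let displayName: String
    // Account data (e.g., preferences, media library, contacts) used to personalize handling.
    let preferences: [String: String]
}

struct SpeechInput {
    let audioSamples: [Float]   // Captured audio of the spoken request.
    let transcription: String   // Recognized text of the request.
}

protocol SpeakerIdentifier {
    /// Returns the account of the registered user whose voice profile best matches
    /// the input, or nil if no user is identified with sufficient confidence.
    func identifyUser(from input: SpeechInput) -> UserAccount?
}

struct RequestHandler {
    let identifier: SpeakerIdentifier

    func handle(_ input: SpeechInput) -> String {
        if let user = identifier.identifyUser(from: input) {
            // A user was identified: process the request using that user's account data.
            return process(input.transcription, with: user)
        } else {
            // No user identified: fall back to non-personalized handling.
            return process(input.transcription, with: nil)
        }
    }

    private func process(_ request: String, with account: UserAccount?) -> String {
        guard let account = account else {
            return "Handling \"\(request)\" without personalization."
        }
        return "Handling \"\(request)\" using account data for \(account.displayName)."
    }
}
```

This sketch is a minimal model of the identify-then-personalize flow; it abstracts away the voice-identification techniques themselves, which are discussed in the publications cited above.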