
Devices, systems, and methods for distributed voice processing

Patent Number
US10867604B2
Publication Date
2020-12-15
Applicant
Sonos, Inc. (Santa Barbara, CA, US)
Inventors
Connor Kristopher Smith; John Tolomei; Betty Lee
IPC Classification
G10L15/22; G10L15/08; H04R3/00; G10L15/30; H04R1/40
Keywords
playback, wake, nmd, sound, vas, word, voice, device
Region: Santa Barbara, CA, US

Abstract

Systems and methods for distributed voice processing are disclosed herein. In one example, the method includes detecting sound via a microphone array of a first playback device and analyzing, via a first wake-word engine of the first playback device, the detected sound. The first playback device may transmit data associated with the detected sound to a second playback device over a local area network. A second wake-word engine of the second playback device may analyze the transmitted data associated with the detected sound. The method may further include identifying that the detected sound contains either a first wake word or a second wake word based on the analysis via the first and second wake-word engines, respectively. Based on the identification, sound data corresponding to the detected sound may be transmitted over a wide area network to a remote computing device associated with a particular voice assistant service.
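For illustration, the flow described in the abstract might be sketched roughly as follows: two playback devices, each with its own wake-word engine, share detected sound over the LAN and route it to a voice assistant service over the WAN. All class, method, and service names in this sketch are assumptions made for readability, not the patented implementation or any Sonos API.

```python
# A minimal sketch of the flow in the abstract. Every name here is a
# hypothetical stand-in; the real devices and engines are not described
# at this level of detail in the patent text.
from dataclasses import dataclass


@dataclass
class WakeWordEngine:
    wake_word: str

    def detect(self, sound_data: bytes) -> bool:
        # Placeholder check; a real engine would run keyword spotting on audio frames.
        return self.wake_word.encode() in sound_data


class FirstPlaybackDevice:
    """Captures sound, runs the first wake-word engine, and forwards audio to a peer."""

    def __init__(self, engine: WakeWordEngine, lan_peer: "SecondPlaybackDevice"):
        self.engine = engine
        self.lan_peer = lan_peer

    def on_sound_detected(self, sound_data: bytes) -> None:
        first_hit = self.engine.detect(sound_data)        # first wake-word engine, local analysis
        second_hit = self.lan_peer.analyze(sound_data)    # "transmit over the LAN" to the second engine
        if first_hit or second_hit:
            vas = "VAS-1" if first_hit else "VAS-2"       # route by which wake word was identified
            self.lan_peer.forward_to_vas(sound_data, vas)


class SecondPlaybackDevice:
    """Runs the second wake-word engine and relays matching audio to the chosen VAS."""

    def __init__(self, engine: WakeWordEngine):
        self.engine = engine

    def analyze(self, sound_data: bytes) -> bool:
        return self.engine.detect(sound_data)

    def forward_to_vas(self, sound_data: bytes, vas: str) -> None:
        # Placeholder for the WAN request to the remote computing device of the VAS.
        print(f"sending {len(sound_data)} bytes to {vas} over the WAN")
```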

Description

TECHNICAL FIELD

The present technology relates to consumer goods and, more particularly, to methods, systems, products, features, services, and other elements directed to voice-controllable media playback systems or some aspect thereof.

BACKGROUND

Options for accessing and listening to digital audio in an out-loud setting were limited until 2003, when SONOS, Inc. filed for one of its first patent applications, entitled “Method for Synchronizing Audio Playback between Multiple Networked Devices,” and began offering a media playback system for sale in 2005. The SONOS Wireless HiFi System enables people to experience music from many sources via one or more networked playback devices. Through a software control application installed on a smartphone, tablet, or computer, one can play what he or she wants in any room that has a networked playback device. Additionally, using a controller, for example, different songs can be streamed to each room that has a playback device, rooms can be grouped together for synchronous playback, or the same song can be heard in all rooms synchronously.

Given the ever-growing interest in digital media, there continues to be a need to develop consumer-accessible technologies to further enhance the listening experience.

BRIEF DESCRIPTION OF THE DRAWINGS

Features, aspects, and advantages of the presently disclosed technology may be better understood with regard to the following description, appended claims, and accompanying drawings where:

FIG. 1A is a partial cutaway view of an environment having a media playback system configured in accordance with aspects of the disclosed technology.

Claims

We claim:

1. A method comprising:
detecting sound via a microphone array of a first playback device;
transmitting data associated with the detected sound from the first playback device to a second playback device over a local area network;
analyzing, via a wake word engine of the second playback device, the transmitted data associated with the detected sound for identification of a wake word;
identifying that the detected sound contains the wake word based on the analysis via the wake word engine;
based on the identification, transmitting sound data corresponding to the detected sound from the second playback device to a remote computing device over a wide area network, wherein the remote computing device is associated with a particular voice assistant service;
receiving via the second playback device a response from the remote computing device, wherein the response is based on the detected sound;
transmitting a message from the second playback device to the first playback device over the local area network, wherein the message is based on the response from the remote computing device and includes instructions to perform an action; and
performing the action via the first playback device.

2. The method of claim 1, wherein the action is a first action and the method further comprises performing a second action via the second playback device, wherein the second action is based on the response from the remote computing device.

3. The method of claim 1, further comprising disabling a wake word engine of the first playback device in response to the identification of the wake word via the wake word engine of the second playback device.

4. The method of claim 3, further comprising enabling a wake word engine of the first playback device after the second playback device receives the response from the remote computing device.

5. The method of claim 4, wherein the wake word is a second wake word, and wherein the wake word engine of the first playback device is configured to detect a first wake word that is different than the second wake word.

6. The method of claim 1, wherein the first playback device is configured to communicate with the remote computing device associated with the particular voice assistant service.

7. The method of claim 1, wherein the remote computing device is a first remote computing device and the voice assistant service is a first voice assistant service, and wherein the first playback device is configured to detect a wake word associated with a second voice assistant service different than the first voice assistant service.
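One possible reading of the method of claim 1 (together with the engine suspension of claims 3 and 4) is the linear control flow sketched below. The device objects and every method name in the sketch are hypothetical stand-ins introduced only to make the sequence of steps concrete.

```python
# A sketch of the claim 1 sequence; comments map each line to a claim element.
def handle_utterance(first_device, second_device):
    sound = first_device.detect_sound()                       # detect sound via the microphone array
    first_device.send_over_lan(second_device, sound)          # transmit data over the local area network
    if not second_device.wake_word_engine.identifies(sound):  # analyze via the second device's wake word engine
        return                                                # no wake word identified; stop here
    first_device.wake_word_engine.disable()                   # claim 3: suspend the first device's engine
    try:
        response = second_device.query_vas(sound)             # transmit sound data over the WAN to the VAS
        message = second_device.build_message(response)       # message based on the VAS response
        second_device.send_over_lan(first_device, message)    # message includes instructions to perform an action
        first_device.perform_action(message)                  # the first playback device performs the action
    finally:
        first_device.wake_word_engine.enable()                # claim 4: re-enable after the response is received
```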
8. A first playback device comprising:
one or more processors;
a computer-readable medium storing instructions that, when executed by the one or more processors, cause the first playback device to perform operations comprising:
receiving, from a second playback device over a local area network, data associated with sound detected via a microphone array of the second playback device;
analyzing, via a wake word engine of the first playback device, the data associated with the detected sound for identification of a wake word;
identifying that the detected sound contains the wake word based on the analysis via the wake word engine;
based on the identification, transmitting sound data corresponding to the detected sound to a remote computing device over a wide area network, wherein the remote computing device is associated with a particular voice assistant service;
receiving a response from the remote computing device, wherein the response is based on the detected sound; and
transmitting a message to the second playback device over the local area network, wherein the message is based on the response from the remote computing device and includes instructions for the second playback device to perform an action.

9. The first playback device of claim 8, wherein the action is a first action and the operations further comprise performing a second action via the first playback device, wherein the second action is based on the response from the remote computing device.

10. The first playback device of claim 8, wherein the operations further comprise disabling a wake word engine of the second playback device in response to the identification of the wake word via the wake word engine of the first playback device.

11. The first playback device of claim 10, wherein the operations further comprise enabling the wake word engine of the second playback device after the first playback device receives the response from the remote computing device.

12. The first playback device of claim 11, wherein the wake word is a first wake word, and wherein the wake word engine of the second playback device is configured to detect a second wake word that is different than the first wake word.

13. The first playback device of claim 8, wherein the second playback device is configured to communicate with the remote computing device associated with the particular voice assistant service.

14. The first playback device of claim 8, wherein the remote computing device is a first remote computing device and the voice assistant service is a first voice assistant service, and wherein the second playback device is configured to detect a wake word associated with a second voice assistant service different than the first voice assistant service.
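Claims 10 through 12 describe the device that identified the wake word temporarily disabling the other device's own wake-word engine while the voice-assistant exchange is pending, with each engine possibly targeting a different wake word. A minimal sketch of that arbitration, using invented class and function names, is given below.

```python
# Hypothetical sketch of the engine arbitration in claims 10-12; nothing here
# is taken from the patent beyond the described behavior.
class EngineHandle:
    """Stand-in for a (possibly remote) wake-word engine that can be paused."""

    def __init__(self, wake_word: str):
        self.wake_word = wake_word   # claim 12: the two engines may target different wake words
        self.enabled = True

    def disable(self) -> None:
        self.enabled = False

    def enable(self) -> None:
        self.enabled = True


def run_vas_exchange(peer_engine: EngineHandle, query_vas) -> object:
    """Disable the peer's engine while the VAS request is in flight (claims 10-11)."""
    peer_engine.disable()            # claim 10: disable on identification of the wake word
    try:
        return query_vas()           # WAN round trip to the voice assistant service
    finally:
        peer_engine.enable()         # claim 11: enable again after the response is received
```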
15. A system, comprising:
a first playback device comprising:
one or more processors;
a microphone array; and
a first computer-readable medium storing instructions that, when executed by the one or more processors, cause the first playback device to perform first operations, the first operations comprising:
detecting sound via the microphone array;
transmitting data associated with the detected sound to a second playback device over a local area network;
the second playback device comprising:
one or more processors; and
a second computer-readable medium storing instructions that, when executed by the one or more processors, cause the second playback device to perform second operations, the second operations comprising:
analyzing, via a wake word engine of the second playback device, the transmitted data associated with the detected sound from the first playback device for identification of a wake word;
identifying that the detected sound contains the wake word based on the analysis via the wake word engine;
based on the identification, transmitting sound data corresponding to the detected sound to a remote computing device over a wide area network, wherein the remote computing device is associated with a particular voice assistant service;
receiving a response from the remote computing device, wherein the response is based on the detected sound; and
transmitting a message to the first playback device over the local area network, wherein the message is based on the response from the remote computing device and includes instructions to perform an action,
wherein the first computer-readable medium of the first playback device causes the first playback device to perform the action from the instructions received from the second playback device.

16. The system of claim 15, wherein the action is a first action and the second operations further comprise performing a second action via the second playback device, wherein the second action is based on the response from the remote computing device.

17. The system of claim 15, wherein the second operations further comprise disabling a wake word engine of the first playback device in response to the identification of the wake word via the wake word engine of the second playback device.

18. The system of claim 17, wherein the second operations further comprise enabling the wake word engine of the first playback device after the second playback device receives the response from the remote computing device.

19. The system of claim 15, wherein the first playback device is configured to communicate with the remote computing device associated with the particular voice assistant service.

20. The system of claim 15, wherein the remote computing device is a first remote computing device and the voice assistant service is a first voice assistant service, and wherein the first playback device is configured to detect a wake word associated with a second voice assistant service different than the first voice assistant service.
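The system of claim 15 implies two kinds of LAN exchange: the sound data forwarded by the capturing device and the instruction message returned after the voice-assistant response. The patent does not specify a wire format, so the JSON shapes below are purely hypothetical placeholders meant only to visualize what each message carries.

```python
# Invented JSON shapes for the two LAN messages in the system of claim 15.
import json

sound_message = {                      # first playback device -> second playback device
    "type": "detected_sound",
    "source": "first-playback-device",
    "sample_rate_hz": 16000,
    "audio": "<audio frames elided>",
}

action_message = {                     # second playback device -> first playback device
    "type": "perform_action",
    "based_on": "vas_response",
    "action": {"kind": "play_audio", "description": "play back the assistant's answer"},
}

print(json.dumps(sound_message, indent=2))
print(json.dumps(action_message, indent=2))
```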