
Passive authentication through voice data analysis

Patent Number
US10867612B1
Publication Date
2020-12-15
Applicant
United Services Automobile Association (USAA) (San Antonio, TX, US)
Inventors
Jeffrey Neal Pollack; Michael E'shawn Weaver; Andrew L. Anaruk
IPC Classification
G10L17/22; G06F21/32; H04L9/32; G10L17/02; H04L29/06; G10L15/26; G06K9/00; G10L15/02

Abstract

Techniques are described for passive authentication based at least partly on collected voice data of a user. During a speech interaction between a user and a personal assistant (PA) device, the user's speech may be analyzed to authenticate the user. The authentication of the user may be a passive authentication, in which the user is not explicitly asked to provide authenticating credentials. Instead, the speech data of the user is collected during the user's interactions with the PA device, and the collected speech data is compared to a previously developed model of the user's speech. The user is successfully authenticated based on determining that there is sufficient correspondence between the collected speech data and the model of the user's speech. After the user is authenticated passively during the conversation, the user may be able to access sensitive data or services that would otherwise be inaccessible without authentication.
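The comparison described in the abstract, and elaborated in claim 7, amounts to scoring live speech against an enrolled voice model and authenticating only when a confidence metric exceeds a threshold. The following is a minimal sketch of that idea; the cosine-similarity metric, the feature-vector representation, the function names, and the 0.8 threshold are all illustrative assumptions, not details taken from the patent.

```python
import math

# Hypothetical tuning value; the patent specifies only that a
# threshold is compared against a confidence metric.
SIMILARITY_THRESHOLD = 0.8


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two voice-feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def passively_authenticate(speech_features: list[float],
                           enrolled_model: list[float]) -> bool:
    """Compare features extracted from the live speech against the
    user's previously developed voice model; succeed only when the
    confidence metric exceeds the threshold (claim 7)."""
    confidence = cosine_similarity(speech_features, enrolled_model)
    return confidence >= SIMILARITY_THRESHOLD
```

In a real speaker-verification system the feature vectors would be embeddings produced by an acoustic model trained on the user's enrolled speech; the sketch only shows where the threshold decision sits in the flow.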

Description

CROSS-REFERENCE TO RELATED APPLICATION

This application claims benefit under 35 U.S.C. § 119 to U.S. Application Ser. No. 62/585,075, filed on Nov. 13, 2017, titled “PASSIVE AUTHENTICATION THROUGH VOICE DATA ANALYSIS,” the entire contents of which are incorporated by reference.

BACKGROUND

Various types of network-connected smart appliances, Internet of Things (IoT) devices, mobile devices, and/or other computing devices have become available to consumers. Such devices may serve a primary function (e.g., a washing machine washing clothes), while also providing smart capabilities for sensing their state and/or the state of the local environment, collecting state data, executing logic, communicating information to other devices over networks, and so forth. Different devices may have different capabilities with regard to data input and data output. For example, a device may accept audio input (e.g., speech input) and provide audio output, but may not include a display for visually presenting data, or may include a limited display that does not support a graphical user interface (GUI). As another example, a device such as a television may include a large display but may lack a full-featured user interface for inputting data.

SUMMARY

Implementations of the present disclosure are generally directed to authenticating a user based at least partly on audio information. More specifically, implementations are directed to collecting voice data from a user through a conversational user interface (CUI) of a device, passively authenticating the user based on the voice data, and controlling access to sensitive information based on the authentication of the user.

Claims

The invention claimed is:

1. A computer-implemented method performed by at least one processor, the method comprising:
receiving, by the at least one processor, speech data provided by a user during a speech interaction with a conversational user interface (CUI) executing on a computing device;
analyzing, by the at least one processor, the speech data to attempt a passive authentication of the user during the speech interaction, wherein the passive authentication is attempted based at least partly on the speech data and does not include explicitly prompting the user for a credential;
storing, by the at least one processor, session information related to the speech interaction with the CUI, the session information including an indication of whether the passive authentication of the user has been successful during the speech interaction;
in response to determining that the passive authentication of the user has been successful during the speech interaction, storing, with the session information, a time value that indicates a period of time following a cessation of speech interaction between the user and the CUI after which the successful passive authentication expires;
receiving, by the at least one processor, a request to access sensitive information associated with the user, the request submitted by the user through the CUI during the speech interaction; and
in response to the request: a) determining, from the session information, that the user has been successfully authenticated during the speech interaction, and b) based on the determination that the user has been successfully authenticated, providing, by the at least one processor, access to the sensitive information through the CUI.

2. The method of claim 1, wherein the speech data is further analyzed to identify the user among a plurality of users who are registered as users of the computing device.

3. The method of claim 1, further comprising:
based on a determination that the passive authentication has expired when the request is received, attempting, by the at least one processor, to actively authenticate the user through the CUI.

4. The method of claim 3, wherein attempting to actively authenticate the user includes prompting the user to provide, through the CUI, one or more of a personal identification number (PIN), a password, and a passphrase.

5. The method of claim 1, further comprising:
receiving, by the at least one processor, video data that is captured by at least one camera of the computing device;
wherein the video data is analyzed with the speech data to attempt to passively authenticate the user.

6. The method of claim 5, wherein analyzing the video data includes one or more of a facial recognition analysis, a posture recognition analysis, a gesture recognition analysis, and a gait recognition analysis.

7. The method of claim 1, wherein analyzing the speech data to attempt to passively authenticate the user includes:
providing the speech data as input to a model of a speech pattern of the user, the model having been previously developed based on collected speech data of the user;
receiving, from the model, a confidence metric indicating a likelihood that the speech data is spoken by the user; and
determining that the user is authenticated based on the confidence metric exceeding a threshold value.

8. The method of claim 1, wherein the analyzed speech data is audio data that is recorded by at least one microphone of the computing device.

9. The method of claim 1, wherein the analyzed speech data is text data that is generated by transcribing at least a portion of audio data that is recorded by at least one microphone of the computing device.

10. The method of claim 1, wherein the request to access the sensitive information includes one or more of:
a request to access financial account information describing at least one account of the user;
a request to perform a financial transaction involving at least one account of the user; and
a request to perform a funds transfer involving at least one account of the user.

11. A system, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor, the memory storing instructions which, when executed by the at least one processor, cause the at least one processor to perform operations comprising:
receiving speech data provided by a user during a speech interaction with a conversational user interface (CUI) executing on a computing device;
analyzing the speech data to attempt a passive authentication of the user during the speech interaction, wherein the passive authentication is attempted based at least partly on the speech data and does not include explicitly prompting the user for a credential;
storing session information related to the speech interaction with the CUI, the session information including an indication of whether the passive authentication of the user has been successful during the speech interaction;
in response to determining that the passive authentication of the user has been successful during the speech interaction, storing, with the session information, a time value that indicates a period of time following a cessation of speech interaction between the user and the CUI after which the successful passive authentication expires;
receiving a request to access sensitive information associated with the user, the request submitted by the user through the CUI during the speech interaction; and
in response to the request: a) determining, from the session information, that the user has been successfully authenticated during the speech interaction, and b) based on the determination that the user has been successfully authenticated, providing, by the at least one processor, access to the sensitive information through the CUI.

12. The system of claim 11, wherein the speech data is further analyzed to identify the user among a plurality of users who are registered as users of the computing device.

13. The system of claim 11, the operations further comprising:
based on a determination that the passive authentication has expired when the request is received, attempting, by the at least one processor, to actively authenticate the user through the CUI.

14. The system of claim 13, wherein attempting to actively authenticate the user includes prompting the user to provide, through the CUI, one or more of a personal identification number (PIN), a password, and a passphrase.

15. The system of claim 11, the operations further comprising:
receiving video data that is captured by at least one camera of the computing device;
wherein the video data is analyzed with the speech data to attempt to passively authenticate the user.

16. The system of claim 15, wherein analyzing the video data includes one or more of a facial recognition analysis, a posture recognition analysis, a gesture recognition analysis, and a gait recognition analysis.

17. The system of claim 11, wherein analyzing the speech data to attempt to passively authenticate the user includes:
providing the speech data as input to a model of a speech pattern of the user, the model having been previously developed based on collected speech data of the user;
receiving, from the model, a confidence metric indicating a likelihood that the speech data is spoken by the user; and
determining that the user is authenticated based on the confidence metric exceeding a threshold value.

18. The system of claim 11, wherein the analyzed speech data is audio data that is recorded by at least one microphone of the computing device.

19. The system of claim 11, wherein the analyzed speech data is text data that is generated by transcribing at least a portion of audio data that is recorded by at least one microphone of the computing device.

20. One or more non-transitory computer-readable media storing instructions which, when executed by at least one processor, cause the at least one processor to perform operations comprising:
receiving speech data provided by a user during a speech interaction with a conversational user interface (CUI) executing on a computing device;
analyzing the speech data to attempt a passive authentication of the user during the speech interaction, wherein the passive authentication is attempted based at least partly on the speech data and does not include explicitly prompting the user for a credential;
storing session information related to the speech interaction with the CUI, the session information including an indication of whether the passive authentication of the user has been successful during the speech interaction;
in response to determining that the passive authentication of the user has been successful during the speech interaction, storing, with the session information, a time value that indicates a period of time following a cessation of speech interaction between the user and the CUI after which the successful passive authentication expires;
receiving a request to access sensitive information associated with the user, the request submitted by the user through the CUI during the speech interaction; and
in response to the request: a) determining, from the session information, that the user has been successfully authenticated during the speech interaction, and b) based on the determination that the user has been successfully authenticated, providing, by the at least one processor, access to the sensitive information through the CUI.

21. The method of claim 3, further comprising:
determining that the active authentication of the user is successful; and
in response, applying the speech data from the speech interaction with the CUI to refine a model of the user's speech.