
Passive authentication through voice data analysis

Patent No.
US10867612B1
Publication date
2020-12-15
Applicant
United Services Automobile Association (USAA) (US TX San Antonio)
Inventors
Jeffrey Neal Pollack; Michael E'shawn Weaver; Andrew L. Anaruk
IPC classification
G10L17/22; G06F21/32; H04L9/32; G10L17/02; H04L29/06; G10L15/26; G06K9/00; G10L15/02
Region: San Antonio, TX

Abstract

Techniques are described for passive authentication based at least partly on collected voice data of a user. During a speech interaction between a user and a personal assistant (PA) device, the user's speech may be analyzed to authenticate the user. The authentication of the user may be a passive authentication, in which the user is not explicitly asked to provide authenticating credentials. Instead, the speech data of the user is collected during the user's interactions with the PA device, and the collected speech data is compared to a previously developed model of the user's speech. The user is successfully authenticated based on determining that there is sufficient correspondence between the collected speech data and the model of the user's speech. After the user is passively authenticated during the conversation, they may be able to access sensitive data or services that would otherwise be inaccessible without authentication.

Description

In some implementations, the analyzed speech data 116 may be the (e.g., raw) audio data of the user's recorded voice. In such implementations, the model 128 of a user's speech may model the particular grammar, syntax, vocabulary, and/or other textual characteristics, and/or the audio characteristics of the user's speech such as pitch, timbre, volume, pace and/or rhythm of speaking, pause patterns, and/or other audio characteristics. The authentication engine 124 may provide the audio data as input to the model 128 for the particular user 102, and the model may compare the audio data to the modeled characteristics of the user's speech to determine a probability that the speaker corresponds to the modeled user. If the probability of a match that is output by the model exceeds a predetermined threshold, the authentication engine 124 may determine that the user 102 has been successfully authenticated.
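The threshold comparison described above can be sketched as follows. The feature vectors, the cosine-similarity measure, and the 0.9 threshold are illustrative assumptions for this sketch; the patent does not specify how the model scores a match, only that the output probability is compared against a predetermined threshold:

```python
import math

# Hypothetical per-user voice "model": an enrolled feature vector standing in
# for audio characteristics such as pitch, timbre, pace, and pause patterns.
# A real system would extract these with a speech-processing pipeline.

def cosine_similarity(a, b):
    """Similarity in [-1, 1] between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def passively_authenticate(collected_features, enrolled_model, threshold=0.9):
    """Return (authenticated, score): True when the speech features collected
    during the conversation match the enrolled model above the threshold."""
    score = cosine_similarity(collected_features, enrolled_model)
    return score >= threshold, score

# Enrolled model for a user, plus features collected during a PA conversation.
enrolled = [0.8, 0.1, 0.5, 0.3]
collected_same = [0.78, 0.12, 0.52, 0.29]   # same speaker, slight variation
collected_other = [0.1, 0.9, 0.2, 0.7]      # a different speaker
```

With these values, `passively_authenticate(collected_same, enrolled)` clears the threshold while `collected_other` does not, mirroring the engine's accept/reject decision without the user ever being prompted for credentials.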

In some implementations, the authentication of the user 102 is based on video data 118 in addition to speech data 116. For example, the camera(s) 110 in the PA device 104 may capture video and/or still image(s) of the user's face and/or other body parts, and the video data 118 may be provided as input to the model(s) 128. The model(s) 128 may model feature(s) of the user and/or movements of the user in addition to modeling speech characteristics, and the use of the video data 118 may provide a higher-confidence verification of the user's identity compared to verification using audio data without using video data 118. For example, the model 128 may analyze image(s) of the user's face and/or body, and/or video of the user's facial expressions, gestures, gait, and/or other aspects of the user's behavior, to authenticate the user 102.
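One way the video data 118 could raise verification confidence is by fusing the audio- and video-based match probabilities before the threshold check. The weighted-average fusion scheme and the weights below are assumptions for illustration; the patent states only that video may provide higher-confidence verification than audio alone:

```python
# Hypothetical fusion of audio- and video-based match probabilities.
# The weighted average and the 0.6/0.4 weights are illustrative assumptions.

def fused_confidence(audio_score, video_score=None, audio_weight=0.6):
    """Combine the speech-model match probability with an optional
    face/behavior-model match probability; audio-only when no video."""
    if video_score is None:
        return audio_score
    return audio_weight * audio_score + (1.0 - audio_weight) * video_score

def is_authenticated(audio_score, video_score=None, threshold=0.8):
    """Authenticate when the fused confidence clears the threshold."""
    return fused_confidence(audio_score, video_score) >= threshold

# A borderline audio match alone fails, but corroborating video succeeds:
audio_only = is_authenticated(0.75)        # 0.75 < 0.8  -> False
with_video = is_authenticated(0.75, 0.95)  # ~0.83 >= 0.8 -> True
```

The design choice here is that video never replaces the speech model; it only shifts the fused score, so a strong facial/behavioral match can push a borderline voice match over the threshold.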

Claims
