
Living body recognition method, storage medium, and computer device

Patent No.
US11176393B2
Publication Date
2021-11-16
Applicant
TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED (CN Shenzhen)
Inventors
Shuang Wu; Shouhong Ding; Yicong Liang; Yao Liu; Jilin Li
IPC Classification
G06K9/62; G06K9/00; G06T7/194
Technical Field
facial, image, liveness, confidence, model, training, live, target, feature, face
Region: Shenzhen

Abstract

A face liveness recognition method includes: obtaining a target image containing a facial image; extracting facial feature data of the facial image in the target image; performing face liveness recognition according to the facial feature data to obtain a first confidence level using a first recognition model, the first confidence level denoting a first probability of recognizing a live face; extracting background feature data from an extended facial image, the extended facial image being obtained by extending a region that covers the facial image; performing face liveness recognition according to the background feature data to obtain a second confidence level using a second recognition model, the second confidence level denoting a second probability of recognizing a live face; and according to the first confidence level and the second confidence level, obtaining a recognition result indicating that the target image is a live facial image.
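The abstract describes a two-branch pipeline: a facial region is cropped for the first recognition model, the region is extended to capture background context for the second model, and the two confidence levels are then combined into a final liveness decision. The following sketch illustrates the region extension and confidence fusion only; the scale factor, the averaging rule, and all names are illustrative assumptions, since the patent text does not fix them.

```python
# Illustrative sketch of region extension and confidence fusion from the
# abstract. The scale factor and the averaging fusion rule are assumptions,
# not taken from the patent text.
from dataclasses import dataclass


@dataclass
class Region:
    x: int
    y: int
    w: int
    h: int


def extend_region(face: Region, img_w: int, img_h: int,
                  scale: float = 2.0) -> Region:
    """Grow the facial region around its center, clamped to image bounds."""
    cx, cy = face.x + face.w / 2, face.y + face.h / 2
    w, h = face.w * scale, face.h * scale
    x = max(0, int(cx - w / 2))
    y = max(0, int(cy - h / 2))
    return Region(x, y, min(int(w), img_w - x), min(int(h), img_h - y))


def recognize_liveness(p_face: float, p_background: float,
                       threshold: float = 0.5) -> bool:
    """Fuse the two confidence levels; simple averaging is one plausible choice."""
    return (p_face + p_background) / 2 >= threshold


# Example: a 20x20 face in a 100x100 image, then fusing two model outputs.
extended = extend_region(Region(40, 40, 20, 20), 100, 100)
decision = recognize_liveness(0.9, 0.8)
```

In a full implementation, the cropped facial image and the extended facial image would each be fed to their respective convolutional recognition models to produce `p_face` and `p_background`.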

Description

RELATED APPLICATIONS

This application is a continuation application of PCT Patent Application No. PCT/CN2018/114096, filed on Nov. 6, 2018, which claims priority to Chinese Patent Application No. 2017111590398, filed with the Chinese Patent Office on Nov. 20, 2017 and entitled “LIVING BODY RECOGNITION METHOD AND APPARATUS, STORAGE MEDIUM, AND COMPUTER DEVICE”, which is incorporated herein by reference in its entirety.

FIELD OF THE TECHNOLOGY

The present application relates to the field of computer technologies, and in particular, to a living body recognition method, a storage medium, and a computer device.

BACKGROUND

With the ongoing development of computer technologies, a user can perform more and more operations on a computer, such as applying for a loan, taking a remote examination, or exercising remote control. Before performing each of these operations, the user usually needs to perform authentication. As a valid means of authentication, facial recognition with face liveness detection has been applied in many scenarios.

In the conventional facial recognition technology with face liveness detection, a real human being usually needs to be distinguished from a photo by detecting an interactive action such as head shaking or eye blinking. However, this recognition manner requires the user's cooperation: face liveness detection cannot proceed until the user correctly performs the indicated interactive actions, resulting in a low detection rate of face liveness.

SUMMARY

According to various embodiments of the present disclosure, a living body or face liveness recognition method, a storage medium, and a computer device are provided.

Claims

What is claimed is:

1. A face liveness recognition method for a computer device, comprising:
obtaining a target image containing a facial image;
extracting facial feature data of the facial image in the target image;
performing face liveness recognition according to the facial feature data to obtain a first confidence level using a first recognition model, the first confidence level denoting a first probability of recognizing a live face;
extracting background feature data from an extended facial image, the extended facial image being obtained by extending a region that covers the facial image, and the background feature data reflecting features of a background part in the extended facial image, wherein extracting the background feature data from the extended facial image comprises:
    determining a facial region in the target image;
    extending the facial region to obtain an extended facial region;
    obtaining the extended facial image in the target image along the extended facial region; and
    inputting the extended facial image into a second recognition model, and extracting the background feature data of the extended facial image through a convolution layer of the second recognition model;
performing face liveness recognition according to the background feature data to obtain a second confidence level using the second recognition model, the second confidence level denoting a second probability of recognizing a live face, wherein performing the face liveness recognition according to the background feature data comprises:
    classifying the target image through a fully connected layer of the second recognition model according to the extracted background feature data to obtain the second confidence level of the target image being a live facial image; and
according to the first confidence level and the second confidence level, obtaining a recognition result indicating whether the target image is the live facial image.

2. The method according to claim 1, wherein the extracting facial feature data of a facial image in the target image comprises:
determining the facial region in the target image;
obtaining the facial image in the target image along the facial region; and
inputting the facial image into the first recognition model, and extracting facial feature data of the facial image through the first recognition model.

3. The method according to claim 2, wherein:
the inputting the facial image into the first recognition model, and extracting facial feature data of the facial image through the first recognition model comprise:
    inputting the facial image into the first recognition model; and
    extracting facial feature data of the facial image through a convolution layer of the first recognition model; and
the performing face liveness recognition according to the facial feature data to obtain a first confidence level comprises:
    classifying the target image through the fully connected layer of the first recognition model according to the extracted facial feature data to obtain the first confidence level of the target image being a live facial image.

4. The method according to claim 3, further comprising:
obtaining an image sample set, the image sample set comprising a live facial image and a non-live facial image;
obtaining a facial image in a corresponding image sample along a facial region of each image sample in the image sample set to obtain a first training sample; and
training the first recognition model according to the first training sample.

5. The method according to claim 4, wherein the training the first recognition model according to the first training sample comprises:
obtaining an initialized first recognition model;
determining a first training label corresponding to the first training sample;
inputting the first training sample into the first recognition model to obtain a first recognition result; and
adjusting model parameters of the first recognition model according to a difference between the first recognition result and the first training label.

6. The method according to claim 1, further comprising:
obtaining an image sample set, the image sample set comprising a live facial image and a non-live facial image;
obtaining an extended facial image in a corresponding image sample along an extended facial region of each image sample in the image sample set to obtain a second training sample; and
training the second recognition model according to the second training sample.

7. The method according to claim 6, wherein the training the second recognition model according to the second training sample comprises:
obtaining an initialized second recognition model;
determining a second training label corresponding to the second training sample;
inputting the second training sample into the second recognition model to obtain a second recognition result; and
adjusting model parameters of the second recognition model according to a difference between the second recognition result and the second training label.

8. The method according to claim 1, wherein the obtaining a target image comprises:
entering an image acquisition state; and
selecting an acquired image frame as the target image in the image acquisition state.

9. The method according to claim 1, wherein the obtaining a recognition result indicating that the target image is a live facial image comprises:
integrating the first confidence level and the second confidence level to obtain a confidence level of the target image being a live facial image; and
when the confidence level reaches a preset confidence level threshold, determining that the target image is a live facial image.

10. A computer device, comprising:
a memory storing computer-readable instructions; and
a processor coupled to the memory for executing the computer-readable instructions to perform:
obtaining a target image containing a facial image;
extracting facial feature data of the facial image in the target image;
performing face liveness recognition according to the facial feature data to obtain a first confidence level using a first recognition model, the first confidence level denoting a first probability of recognizing a live face;
extracting background feature data from an extended facial image, the extended facial image being obtained by extending a region that covers the facial image, wherein extracting the background feature data from the extended facial image comprises:
    determining a facial region in the target image;
    extending the facial region to obtain an extended facial region;
    obtaining the extended facial image in the target image along the extended facial region; and
    inputting the extended facial image into a second recognition model, and extracting the background feature data of the extended facial image through a convolution layer of the second recognition model;
performing face liveness recognition according to the background feature data to obtain a second confidence level using the second recognition model, the second confidence level denoting a second probability of recognizing a live face, wherein performing the face liveness recognition according to the background feature data comprises:
    classifying the target image through a fully connected layer of the second recognition model according to the extracted background feature data to obtain the second confidence level of the target image being a live facial image; and
according to the first confidence level and the second confidence level, obtaining a recognition result indicating that the target image is the live facial image.

11. The computer device according to claim 10, wherein the extracting facial feature data of a facial image in the target image comprises:
determining the facial region in the target image;
obtaining the facial image in the target image along the facial region; and
inputting the facial image into the first recognition model, and extracting facial feature data of the facial image through the first recognition model.

12. The computer device according to claim 11, wherein:
the inputting the facial image into the first recognition model, and extracting facial feature data of the facial image through the first recognition model comprise:
    inputting the facial image into the first recognition model; and
    extracting facial feature data of the facial image through a convolution layer of the first recognition model; and
the performing face liveness recognition according to the facial feature data to obtain a first confidence level comprises:
    classifying the target image through the fully connected layer of the first recognition model according to the extracted facial feature data to obtain the first confidence level of the target image being a live facial image.

13. The computer device according to claim 12, wherein the processor further performs:
obtaining an image sample set, the image sample set comprising a live facial image and a non-live facial image;
obtaining a facial image in a corresponding image sample along a facial region of each image sample in the image sample set to obtain a first training sample; and
training the first recognition model according to the first training sample.

14. The computer device according to claim 13, wherein the training the first recognition model according to the first training sample comprises:
obtaining an initialized first recognition model;
determining a first training label corresponding to the first training sample;
inputting the first training sample into the first recognition model to obtain a first recognition result; and
adjusting model parameters of the first recognition model according to a difference between the first recognition result and the first training label.

15. The computer device according to claim 10, wherein the processor further performs:
obtaining an image sample set, the image sample set comprising a live facial image and a non-live facial image;
obtaining an extended facial image in a corresponding image sample along an extended facial region of each image sample in the image sample set to obtain a second training sample; and
training the second recognition model according to the second training sample.

16. A non-transitory storage medium storing computer program instructions executable by at least one processor to perform:
obtaining a target image containing a facial image;
extracting facial feature data of the facial image in the target image;
performing face liveness recognition according to the facial feature data to obtain a first confidence level using a first recognition model, the first confidence level denoting a first probability of recognizing a live face;
extracting background feature data from an extended facial image, the extended facial image being obtained by extending a region that covers the facial image, wherein extracting the background feature data from the extended facial image comprises:
    determining a facial region in the target image;
    extending the facial region to obtain an extended facial region;
    obtaining the extended facial image in the target image along the extended facial region; and
    inputting the extended facial image into a second recognition model, and extracting the background feature data of the extended facial image through a convolution layer of the second recognition model;
performing face liveness recognition according to the background feature data to obtain a second confidence level using the second recognition model, the second confidence level denoting a second probability of recognizing a live face, wherein performing the face liveness recognition according to the background feature data comprises:
    classifying the target image through a fully connected layer of the second recognition model according to the extracted background feature data to obtain the second confidence level of the target image being a live facial image; and
according to the first confidence level and the second confidence level, obtaining a recognition result indicating that the target image is the live facial image.

17. The method according to claim 1, wherein the background feature data includes distribution of color values of pixel points in a background image and pixel continuity features of the background image.
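Claims 5 and 7 recite the same training loop for each model: obtain an initialized model, determine a training label for the sample, run the sample through the model to obtain a recognition result, and adjust the model parameters according to the difference between the result and the label. The toy sketch below illustrates that loop with a single-weight logistic model; the model family, learning rate, and all names are illustrative assumptions, since the claims do not fix a particular model or update rule.

```python
# Toy stand-in for the training step recited in claims 5 and 7: parameters
# are adjusted according to the difference between the recognition result
# and the training label. A one-feature logistic model is used purely for
# illustration; the patent does not fix a model family.
import math


def train_step(weight: float, bias: float, sample: float, label: float,
               lr: float = 0.1) -> tuple[float, float]:
    """One gradient step on binary cross-entropy for a 1-feature model."""
    pred = 1.0 / (1.0 + math.exp(-(weight * sample + bias)))  # recognition result
    error = pred - label                                      # difference vs. label
    # Adjust model parameters according to the difference.
    return weight - lr * error * sample, bias - lr * error


w, b = 0.0, 0.0  # "initialized" model
for _ in range(200):
    w, b = train_step(w, b, sample=1.0, label=1.0)   # live-face training sample
    w, b = train_step(w, b, sample=-1.0, label=0.0)  # non-live training sample
```

After training, the model assigns a confidence above the threshold to the live-face sample and below it to the non-live sample, mirroring how the first and second recognition models learn to output the confidence levels used in claim 1.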