In an embodiment of the present disclosure, the depth information may be obtained in any one of the following manners (1) to (3): (1) By using a preset depth information template, where the template is a standard image including depth information; specifically, it may be a standard face image in which the feature points contain depth information. (2) By real-time calculation, where approximate values of the image depth are calculated from images captured from multiple angles. (3) By estimating the depth values based on a model or prior knowledge, where approximate depth information for all the feature points is obtained by using, for example, a neural network. The depth information may be obtained locally or from a server, and the estimation process and the calculation process may be performed locally or in a cloud server. After the depth information is obtained, it is assigned to the corresponding feature points in the first image to obtain the first image having depth information, that is, the processed image.
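As a minimal sketch of manner (1), the template-lookup step could be implemented as below. All names here (`STANDARD_FACE_DEPTHS`, `attach_depth`, the feature-point indices and depth values) are illustrative assumptions, not part of the disclosure:

```python
# Illustrative sketch: attach depth values from a preset standard-face template
# to 2D feature points detected in the first image.
# The template maps feature-point indices to depth (z) values; the indices and
# values below are made up for demonstration.
STANDARD_FACE_DEPTHS = {
    0: 0.10,   # jaw contour: farthest from the camera in this template
    30: 0.45,  # nose tip: closest to the camera
    36: 0.30,  # left eye corner
}

def attach_depth(feature_points, depth_template=STANDARD_FACE_DEPTHS, default=0.0):
    """Map each feature point index -> (x, y) to index -> (x, y, z),
    taking z from the template and falling back to a default depth."""
    return {
        idx: (x, y, depth_template.get(idx, default))
        for idx, (x, y) in feature_points.items()
    }

# Feature points as detected in the first (2D) image.
points_2d = {0: (120.0, 310.0), 30: (200.0, 220.0), 36: (160.0, 180.0)}
points_3d = attach_depth(points_2d)
print(points_3d[30])  # nose tip now carries depth: (200.0, 220.0, 0.45)
```

The same interface would work for manners (2) and (3) by swapping the template for depths produced by multi-angle calculation or a neural network estimator.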
The core idea of this embodiment is as follows: when it is detected that the image includes a face image, image processing is performed on the face image, and depth information is then added to the processed image. Specifically, the first processing and the operation of adding the depth information may be combined into a special filter or sticker, which is convenient for users to use. After the above-mentioned face image processing, the obtained image not only has the effect of the first processing but also carries depth information, so that the face image looks more realistic and stereoscopic.
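Combining the first processing and the depth-adding operation into one filter could be sketched as below. The step functions (`apply_first_processing`, `add_depth_information`) are placeholders assumed for illustration; the disclosure does not specify their implementation:

```python
# Illustrative sketch: bundle the first processing step and the depth-adding
# step into a single "filter" or "sticker" callable, so the user applies both
# in one operation. The images here are stand-in dicts, not real pixel data.

def apply_first_processing(image):
    # Placeholder for the first processing (e.g. a beautification effect).
    return {**image, "first_processing_applied": True}

def add_depth_information(image):
    # Placeholder for attaching depth values to the image's feature points.
    return {**image, "has_depth": True}

def make_sticker_filter(*steps):
    """Compose processing steps into one filter applied in a single call."""
    def combined(image):
        for step in steps:
            image = step(image)
        return image
    return combined

face_sticker = make_sticker_filter(apply_first_processing, add_depth_information)
result = face_sticker({"pixels": "..."})
print(result["first_processing_applied"], result["has_depth"])  # True True
```

Packaging the two steps behind one callable keeps the user-facing interface a single tap, while still allowing either step to be swapped independently (e.g. a different depth source from manners (1) to (3)).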