
Video conferencing system

Patent number
US11528451B2
Publication date
2022-12-13
Applicant
Eyecon AS (Stavanger, NO)
Inventors
Jan Ove Haaland; Eivind Nag; Joar Vaage
IPC classification
H04N7/15; G06T7/00; G06T17/20
Technical field
video, camera, data, sensor, image, virtual
Region: Stavanger

Abstract

A method of capturing data for use in a video conference includes capturing data of a first party at a first location using an array of one or more video cameras and/or one or more sensors. The three-dimensional position(s) of one or more features represented in the data captured by the video camera(s) and/or sensor(s) are determined. A virtual camera positioned at a three-dimensional virtual camera position is defined. The three-dimensional position(s) determined for the feature(s) are transformed into a common coordinate system to form a single view of the feature(s) as appearing to have been captured from the virtual camera. The video image and/or sensor data of the feature(s) viewed from the perspective of the virtual camera and/or data representative of the transformed three-dimensional position(s) of the feature(s) are then transmitted or stored.
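The rendering step the abstract describes can be made concrete with a short sketch. The patent does not give an implementation; the following assumes standard multi-view geometry conventions: each real camera's points are mapped into the common coordinate system by a calibrated rigid transform (R_i, t_i), and the virtual camera is a pinhole camera with intrinsics K and pose (R_v, t_v). All names and conventions here are assumptions for illustration.

```python
# Illustrative sketch only: project per-camera 3D feature positions into a
# single view from a virtual pinhole camera. (R_i, t_i), K, (R_v, t_v) are
# assumed calibration inputs, not details specified in the patent.
import numpy as np

def render_from_virtual_camera(points_per_camera, extrinsics, K, R_v, t_v):
    """points_per_camera: list of (N_i, 3) arrays, one per real camera.
    extrinsics: list of (R_i, t_i) mapping each camera's frame into the
    common coordinate system. (R_v, t_v): virtual camera orientation and
    position in that system. K: 3x3 virtual-camera intrinsics."""
    # 1. Transform every camera's points into the common coordinate system.
    world = np.vstack([pts @ R.T + t
                       for pts, (R, t) in zip(points_per_camera, extrinsics)])
    # 2. Express the points in the virtual camera's frame:
    #    x_cam = R_v @ (x_world - t_v).
    cam = (world - t_v) @ R_v.T
    # 3. Pinhole projection with perspective divide (points behind the
    #    camera, i.e. non-positive depth, would need culling in practice).
    uvw = cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]
```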

Description

The captured image and sound data are passed from the video cameras 32, 42 and microphones 36, 46 to the respective computers 39, 49, where they are analysed by the respective processors 40, 50 (step 102, FIG. 7). The analysis of the video image data captured by the video cameras 32, 42 enables features (e.g. of the users' faces and bodies) to be identified using feature recognition (e.g. by finding points in the video image data containing high contrast).
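The patent names feature recognition by finding high-contrast points but does not prescribe a detector. As one plausible reading, the sketch below uses OpenCV's Shi-Tomasi corner detector; the parameter values are assumptions.

```python
# A minimal sketch of high-contrast feature detection, assuming OpenCV's
# Shi-Tomasi corner detector stands in for the patent's unspecified method.
import cv2
import numpy as np

def detect_features(frame: np.ndarray) -> np.ndarray:
    """Return an (N, 2) array of high-contrast pixel coordinates in a frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(
        gray,
        maxCorners=500,     # cap on features per frame (assumed value)
        qualityLevel=0.01,  # relative corner-strength threshold (assumed)
        minDistance=7,      # minimum pixel spacing between features (assumed)
    )
    return corners.reshape(-1, 2) if corners is not None else np.empty((0, 2))
```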

The three-dimensional (3D) positions of the features captured in the video image data are also determined for each of the video cameras 32, 42, using triangulation between the different video cameras 32, 42 in each array (step 103, FIG. 7). Using this determination of the 3D positions, a depth (z) position is then assigned to each point of each image captured by the video cameras 32, 42.
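Triangulation between calibrated cameras is a standard operation; a minimal two-view sketch with OpenCV follows. It assumes each camera pair has known 3x4 projection matrices from a prior calibration and that the 2D features have already been matched across views; neither input is detailed in the excerpt above.

```python
# Minimal two-view triangulation sketch. P1, P2 are assumed 3x4 projection
# matrices (intrinsics times extrinsics) obtained from camera calibration.
import cv2
import numpy as np

def triangulate(P1, P2, pts1, pts2):
    """pts1, pts2: (N, 2) matched pixel coordinates in cameras 1 and 2.
    Returns (N, 3) positions in the calibration frame; if P1 = K1 @ [I | 0],
    the z component is the depth relative to camera 1."""
    pts4d = cv2.triangulatePoints(P1, P2,
                                  pts1.T.astype(np.float64),
                                  pts2.T.astype(np.float64))
    return (pts4d[:3] / pts4d[3]).T  # homogeneous -> Euclidean coordinates
```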

Using the feature recognition of the video image data, the respective processors 40, 50 determine a location at which to position a virtual camera and the direction in which it should be pointed (step 104, FIG. 7). For example, bodies, faces and/or eyes of users that have been identified in the video image data captured by the video cameras 32, 42 are used to determine the location and direction of the virtual camera. The video image data that is eventually sent to the other party on the video conferencing call will appear to come from the perspective of the virtual camera.
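The excerpt does not say how the virtual camera pose is derived from the detected bodies, faces and eyes. One hypothetical heuristic, shown purely for illustration, aims the virtual camera at the centroid of the detected 3D feature points from a fixed stand-off distance; the offset value and the look-at construction are assumptions, not the patent's method.

```python
# Hypothetical heuristic, not the patent's method: aim the virtual camera at
# the centroid of the users' detected 3D features from a fixed stand-off.
import numpy as np

def place_virtual_camera(features_3d: np.ndarray, offset: float = 1.5):
    """features_3d: (N, 3) positions of detected faces/eyes in the common
    coordinate system. Returns (position, unit view direction)."""
    target = features_3d.mean(axis=0)                 # centre of detected features
    position = target + np.array([0.0, 0.0, offset])  # back off along +z (assumed axis)
    direction = target - position
    return position, direction / np.linalg.norm(direction)
```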

權(quán)利要求

1