
Video conferencing system

Patent number
US11528451B2
Publication date
2022-12-13
Applicant
Eyecon AS (NO Stavanger)
Inventors
Jan Ove Haaland; Eivind Nag; Joar Vaage
IPC classification
H04N7/15; G06T7/00; G06T17/20
Technical field
video, camera, data, sensor, image, virtual camera
Region: Stavanger

Abstract

A method of capturing data for use in a video conference includes capturing data of a first party at a first location using an array of one or more video cameras and/or one or more sensors. The three-dimensional position(s) of one or more features represented in the data captured by the video camera(s) and/or sensor(s) are determined. A virtual camera positioned at a three-dimensional virtual camera position is defined. The three-dimensional position(s) determined for the feature(s) are transformed into a common coordinate system to form a single view of the feature(s) as appearing to have been captured from the virtual camera. The video image and/or sensor data of the feature(s) viewed from the perspective of the virtual camera and/or data representative of the transformed three-dimensional position(s) of the feature(s) are then transmitted or stored.
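The following is a minimal sketch of the coordinate transform the abstract describes: per-feature 3D positions from each physical camera are mapped into a common world frame and then re-expressed relative to a virtual camera. The pose variables (R0, t0, R_v, t_v) and function names are illustrative assumptions, not taken from the patent.

import numpy as np

def to_world(points_cam, R, t):
    """Map Nx3 points from a physical camera's frame into the common world frame."""
    return points_cam @ R.T + t

def to_virtual_camera(points_world, R_v, t_v):
    """Re-express world-frame points in the virtual camera's frame."""
    return (points_world - t_v) @ R_v

# Example: a feature position measured by camera 0 is transformed so it
# appears as if captured from the virtual camera position (hypothetical poses).
R0, t0 = np.eye(3), np.array([0.0, 0.0, 0.0])      # physical camera pose
R_v, t_v = np.eye(3), np.array([0.0, 0.0, -1.0])   # virtual camera pose
feature_cam0 = np.array([[0.1, 0.2, 2.0]])
feature_virtual = to_virtual_camera(to_world(feature_cam0, R0, t0), R_v, t_v)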

說(shuō)明書

Preferably the comparison of identified features in the video image data and/or other sensor data from different video camera(s) and/or sensor(s) in the array takes into account the scale and rotation of the identified features, e.g. owing to an identified feature appearing differently depending on the relative location of the video camera(s) and/or sensor(s).
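One way to obtain such scale- and rotation-aware comparisons is to describe each identified feature with a scale- and rotation-invariant descriptor. The sketch below uses OpenCV's SIFT as one such detector/descriptor; the patent does not name a specific method, so this choice is an assumption.

import cv2

def detect_and_describe(img_a, img_b):
    """Detect features in two camera views and compute descriptors that are
    invariant to the scale and rotation differences between the views."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)  # keypoints carry scale/angle
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    return (kp_a, des_a), (kp_b, des_b)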

The matching of identified features in the video image data and/or other sensor data is preferably performed for the video image data and/or other sensor data from one or more pairs of video camera(s) and/or sensor(s) in the array. Matched features (e.g. that pass the threshold applied to the metric) are deemed a pair (and the data flagged or stored as such). Identified features that are not matched, or are matched with two or more other identified features, may be stored for later use.
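A minimal sketch of this pairwise matching step follows, assuming descriptor arrays des_a and des_b from two cameras of the array (as in the previous sketch). Lowe's ratio test stands in for the thresholded metric here; the patent does not prescribe a particular metric, so that choice is an assumption.

import cv2

def pair_and_filter(des_a, des_b, ratio=0.75):
    """Match descriptors from a camera pair; features passing the ratio-test
    threshold are deemed a pair, the rest are stored for later use."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs, unmatched = [], []
    for m, n in matcher.knnMatch(des_a, des_b, k=2):
        if m.distance < ratio * n.distance:
            pairs.append((m.queryIdx, m.trainIdx))  # flagged as a matched pair
        else:
            unmatched.append(m.queryIdx)            # kept for later use
    return pairs, unmatched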

At this stage, preferably a depth map, a 3D point cloud, a 3D mesh or a depth buffer is created for each pair of video camera(s) and/or sensor(s) in the array, e.g. between which identified feature(s) have been matched, for storing the (e.g. depth component of the) determined three-dimensional position(s) of the identified and matched feature(s). As outlined above, preferably the depth component of the three-dimensional position(s) of the identified and matched feature(s) is determined by determining the displacement between (e.g. by triangulating the positions of) the features using the video image data and/or the other sensor data from the array of video camera(s) and/or sensor(s).
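As a sketch of the triangulation step, the snippet below recovers 3D positions (and hence the depth component) of matched features from one camera pair via OpenCV's linear triangulation. The projection matrices P_a and P_b and the 2xN pixel-coordinate arrays are assumed inputs; the resulting points could populate the depth map or 3D point cloud described above.

import cv2

def triangulate_depths(P_a, P_b, pts_a, pts_b):
    """Triangulate matched feature positions from a camera pair.
    P_a, P_b: 3x4 projection matrices; pts_a, pts_b: 2xN matched pixel coords."""
    hom = cv2.triangulatePoints(P_a, P_b, pts_a, pts_b)  # 4xN homogeneous points
    xyz = (hom[:3] / hom[3]).T                           # Nx3 Euclidean points
    return xyz, xyz[:, 2]  # 3D positions and their depth (z) components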

權(quán)利要求

1