
Video conferencing system

Patent number
US11528451B2
Publication date
2022-12-13
Applicant
Eyecon AS(NO Stavanger)
Inventors
Jan Ove Haaland; Eivind Nag; Joar Vaage
IPC classification
H04N7/15; G06T7/00; G06T17/20
技術(shù)領(lǐng)域
video,camera,data,sensor,or,image,virtual,in,e.g,cameras
地域: Stavanger

Abstract

A method of capturing data for use in a video conference includes capturing data of a first party at a first location using an array of one or more video cameras and/or one or more sensors. The three-dimensional position(s) of one or more features represented in the data captured by the video camera(s) and/or sensor(s) are determined. A virtual camera positioned at a three-dimensional virtual camera position is defined. The three-dimensional position(s) determined for the feature(s) are transformed into a common coordinate system to form a single view of the feature(s) as appearing to have been captured from the virtual camera. The video image and/or sensor data of the feature(s) viewed from the perspective of the virtual camera and/or data representative of the transformed three-dimensional position(s) of the feature(s) are then transmitted or stored.
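The core of the abstract is the transformation of captured 3D feature positions into the coordinate system of a virtual camera. A minimal sketch of that step, assuming a virtual camera rotated only about the vertical (y) axis and a simple pinhole projection; the function names, the single-axis rotation, and the fixed focal length are illustrative assumptions, not the patent's actual implementation:

```python
import math

def transform_to_virtual_camera(point, cam_pos, yaw):
    """Transform a world-space 3D point into the coordinate system of a
    virtual camera located at cam_pos and rotated by yaw (radians)
    about the y axis. A sketch; a full system would use a general
    3x3 rotation matrix."""
    # Translate so the virtual camera sits at the origin
    x, y, z = (p - c for p, c in zip(point, cam_pos))
    # Rotate by -yaw about the y axis to align with the camera's view
    c, s = math.cos(-yaw), math.sin(-yaw)
    return (c * x + s * z, y, -s * x + c * z)

def project(point_cam, focal=1.0):
    """Pinhole projection of a camera-space point onto the image plane,
    producing the 2D position it occupies in the single combined view."""
    x, y, z = point_cam
    return (focal * x / z, focal * y / z)
```

Applying the same transform to every feature from every physical camera or sensor yields positions in one common coordinate system, from which the projection forms the single view the abstract describes.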

Description

When, as outlined below, features falling outside a particular volume are discarded, this may be performed simply using the xy coordinates of the common coordinate system, once the transformation has been performed, e.g. owing to these features being outside of the viewing frustum of the virtual camera. Furthermore, features that obscure each other owing to having the same xy coordinate but different z coordinates (e.g. following transformation) may be identified and the features appearing further away from the virtual camera may be discarded, e.g. such that only the one that is closest to the virtual camera is retained.
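The occlusion handling described above, where features sharing an xy coordinate but lying at different depths are reduced to the one nearest the virtual camera, can be sketched as follows; the function name and the exact-match keying on (x, y) are simplifying assumptions (a real system would rasterize into discrete pixel cells):

```python
def cull_occluded(features):
    """Given features as (x, y, z) tuples in the virtual camera's
    coordinate system, keep only the feature closest to the camera
    (smallest z) at each (x, y) position; features that would be
    obscured behind it are discarded."""
    closest = {}
    for x, y, z in features:
        key = (x, y)
        # Retain the smallest depth seen at this xy position
        if key not in closest or z < closest[key]:
            closest[key] = z
    return [(x, y, z) for (x, y), z in closest.items()]
```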

Preferably a depth (z) buffer (e.g. in the coordinate system of the virtual camera) is defined and filled with the (e.g. transformed) depth (z) position of each of the features represented in the video image data and/or the sensor data. If any (e.g. depth) data is missing at this stage for any of the features represented in the video image data and/or the sensor data, preferably this data is interpolated from the data which is present.
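The depth buffer and interpolation step might look like the following sketch. The grid dimensions, the "keep the nearest sample per pixel" rule, and the nearest-neighbour fill along each row are all assumptions standing in for whatever interpolation scheme an implementation actually uses:

```python
def fill_depth_buffer(width, height, samples):
    """Build a per-pixel depth (z) buffer from sparse (col, row, z)
    samples, then fill missing entries by copying the nearest filled
    pixel in the same row - a simple stand-in for interpolating
    missing depth data from the data that is present."""
    buf = [[None] * width for _ in range(height)]
    for col, row, z in samples:
        # Keep the nearest (smallest z) sample landing on each pixel
        if buf[row][col] is None or z < buf[row][col]:
            buf[row][col] = z
    for row in buf:
        known = [(i, z) for i, z in enumerate(row) if z is not None]
        for i, z in enumerate(row):
            if z is None and known:
                # Fill from the closest originally-present pixel
                row[i] = min(known, key=lambda kz: abs(kz[0] - i))[1]
    return buf
```

When several virtual cameras are defined, as the next paragraph notes, one such buffer would be built per camera.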

When a plurality of virtual cameras have been defined, preferably a separate depth buffer is defined and filled for each virtual camera.

Using the transformed three-dimensional position(s) in the common coordinate system of the feature(s) in the video image data and/or sensor data, preferably the method comprises (and the processing circuitry is configured to) selecting the feature(s) in the video image data and/or the sensor data having transformed three-dimensional position(s) in the common coordinate system that are within a particular range of three-dimensional positions. Thus a three-dimensional volume is set and features falling within this volume are selected.
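The volume-based selection above amounts to a bounds check on each transformed position. A minimal sketch, assuming the "particular range of three-dimensional positions" is an axis-aligned box in the common coordinate system (the function name and box representation are illustrative):

```python
def select_in_volume(features, lo, hi):
    """Keep only features whose transformed (x, y, z) position lies
    inside the axis-aligned volume spanned by corner points lo and hi
    in the common coordinate system; features outside are discarded."""
    return [f for f in features
            if all(l <= v <= h for v, l, h in zip(f, lo, hi))]
```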

Claims

1