
Video conferencing system

Patent No.
US11528451B2
Publication date
2022-12-13
Applicant
Eyecon AS (Stavanger, NO)
Inventors
Jan Ove Haaland; Eivind Nag; Joar Vaage
IPC classification
H04N7/15; G06T7/00; G06T17/20
Technical field
video, camera, data, sensor, image, virtual camera
Region: Stavanger

Abstract

A method of capturing data for use in a video conference includes capturing data of a first party at a first location using an array of one or more video cameras and/or one or more sensors. The three-dimensional position(s) of one or more features represented in the data captured by the video camera(s) and/or sensor(s) are determined. A virtual camera positioned at a three-dimensional virtual camera position is defined. The three-dimensional position(s) determined for the feature(s) are transformed into a common coordinate system to form a single view of the feature(s) as appearing to have been captured from the virtual camera. The video image and/or sensor data of the feature(s) viewed from the perspective of the virtual camera and/or data representative of the transformed three-dimensional position(s) of the feature(s) are then transmitted or stored.
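As a rough illustration of the transform step in the abstract (this sketch is not taken from the patent, and the function names and the pinhole-projection model are assumptions), the following expresses a feature's world-space 3-D position in a virtual camera's coordinate frame and projects it onto the virtual image plane, assuming the virtual camera pose is given as a rotation matrix plus a position:

```python
import numpy as np

def transform_to_virtual_camera(point_world, virtual_cam_pos, virtual_cam_R):
    # Express a world-space 3-D feature position in the virtual camera's
    # coordinate frame: p_cam = R @ (p_world - t).
    p = np.asarray(point_world, dtype=float)
    t = np.asarray(virtual_cam_pos, dtype=float)
    return np.asarray(virtual_cam_R, dtype=float) @ (p - t)

def project_pinhole(point_cam, focal=1.0):
    # Project a camera-frame 3-D point onto the virtual image plane
    # using a simple pinhole model (hypothetical; the patent does not
    # specify the projection used).
    x, y, z = point_cam
    return np.array([focal * x / z, focal * y / z])
```

Transforming every camera's or sensor's feature positions into this one virtual-camera frame is what allows the data to be combined into a single view that appears to have been captured from the virtual camera position.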

Description

In one set of embodiments the method comprises (and the processing circuitry is configured to) forming one or more point clouds using the determined three-dimensional position(s) of one or more identified and matched features, e.g. using the depth maps created for each pair of video camera(s) and/or sensor(s) in the array between which identified features have been matched. These initial "sparse" point cloud(s) may not contain many data points, because they may represent only one or a few identified features. However, such point cloud(s) can serve as a guide for the creation of denser and more accurate point cloud(s).
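One common way to turn a per-camera depth map into a point cloud is to back-project each valid depth pixel through a pinhole camera model. This is a minimal sketch of that idea (the function name and intrinsic parameters `fx`, `fy`, `cx`, `cy` are assumptions for illustration, not from the patent):

```python
import numpy as np

def depth_map_to_point_cloud(depth, fx, fy, cx, cy):
    # Back-project every valid depth pixel into a 3-D point in the
    # camera's coordinate frame, using pinhole intrinsics:
    #   X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth.
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0          # treat zero depth as "no measurement"
    z = depth[valid]
    x = (us[valid] - cx) * z / fx
    y = (vs[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)   # (N, 3) array of 3-D points
```

A sparse cloud produced this way from only a few matched features would be small, but it still anchors the geometry for later, denser reconstruction passes.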

Preferably the information from the point cloud(s) (e.g. the location of the identified feature(s)) is used in an iterative process to re-analyse the identified and matched feature(s). For example, the positions in the point cloud(s) may be used to test against (e.g. the determined positions of) one or more of the identified features (whether matched or not) to determine if they have been correctly matched or not. This may be used to change the matching of identified features from different video camera(s) and/or sensor(s) and/or to refine the position of the identified and matched feature(s) in the point cloud(s). The number of iterations used may depend on the precision desired and/or on the processing time available.
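The iterative re-testing described above can be sketched as follows. This is a simplified, hypothetical implementation (the function name, the midpoint-based position refinement, and the shrinking agreement threshold are assumptions, not details from the patent): each pass keeps only the matches whose paired 3-D positions agree closely enough, refines each kept position, and tightens the tolerance for the next pass.

```python
import numpy as np

def refine_matches(points_a, points_b, matches, threshold=1.0, n_iter=3):
    # points_a / points_b: 3-D feature positions determined for two
    # cameras/sensors; matches: candidate (index_a, index_b) pairs.
    # Each iteration discards pairs whose positions disagree by more
    # than `threshold`, records the midpoint as the refined point-cloud
    # position, then halves the threshold to demand closer agreement.
    cloud = np.empty((0, 3))
    for _ in range(n_iter):
        kept, cloud_pts = [], []
        for i, j in matches:
            pa = np.asarray(points_a[i], dtype=float)
            pb = np.asarray(points_b[j], dtype=float)
            if np.linalg.norm(pa - pb) <= threshold:
                kept.append((i, j))
                cloud_pts.append((pa + pb) / 2.0)
        matches = kept
        cloud = np.array(cloud_pts)
        threshold /= 2.0
    return matches, cloud
```

In practice the number of iterations (and how aggressively the tolerance is tightened) would be chosen to trade off precision against the available processing time, as the passage notes.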

Claims

1