
Video conferencing system

Patent number
US11528451B2
Publication date
2022-12-13
Applicant
Eyecon AS (Stavanger, NO)
Inventors
Jan Ove Haaland; Eivind Nag; Joar Vaage
IPC classification
H04N7/15; G06T7/00; G06T17/20
Technical field
video, camera, data, sensor, image, virtual, cameras
Region: Stavanger

Abstract

A method of capturing data for use in a video conference includes capturing data of a first party at a first location using an array of one or more video cameras and/or one or more sensors. The three-dimensional position(s) of one or more features represented in the data captured by the video camera(s) and/or sensor(s) are determined. A virtual camera positioned at a three-dimensional virtual camera position is defined. The three-dimensional position(s) determined for the feature(s) are transformed into a common coordinate system to form a single view of the feature(s) as appearing to have been captured from the virtual camera. The video image and/or sensor data of the feature(s) viewed from the perspective of the virtual camera and/or data representative of the transformed three-dimensional position(s) of the feature(s) are then transmitted or stored.
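As a rough illustration of the transform described in the abstract, the sketch below maps 3D feature positions from each physical camera's frame into a common coordinate system and then projects them from a chosen virtual camera position. The function names, the use of NumPy, and the pinhole-projection model are assumptions for illustration only; the patent does not prescribe a particular representation.

```python
# Minimal sketch: merge per-camera 3D feature positions into a common frame and
# render them from a virtual camera. All names here are illustrative, not from the patent.
import numpy as np

def to_common_frame(points_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Transform (N, 3) feature positions from one camera's frame into the common frame.

    (R, t) is the assumed camera-to-world pose of that physical camera, so
    x_world = R @ x_cam + t, applied row-wise.
    """
    return points_cam @ R.T + t

def project_from_virtual_camera(points_world: np.ndarray,
                                R_v: np.ndarray, t_v: np.ndarray,
                                K: np.ndarray) -> np.ndarray:
    """Project common-frame points into the image plane of a virtual camera.

    (R_v, t_v) is the virtual camera's assumed pose, K its pinhole intrinsics.
    """
    points_v = (points_world - t_v) @ R_v   # common frame -> virtual-camera frame
    uv = points_v @ K.T                     # pinhole projection (homogeneous)
    return uv[:, :2] / uv[:, 2:3]           # normalise by depth

# Usage idea: features from each camera/sensor are passed through to_common_frame
# with that camera's pose, concatenated, and then projected as a single view of
# the scene as it would appear from the virtual camera position.
```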

Description

In one embodiment the step of identifying features in the video image data or other sensor data (e.g. for each of the video camera(s) and/or sensor(s)) comprises identifying features in one or more regions of the video image data and/or the other sensor data. In one embodiment, the regions of the video image data and/or other sensor data in which features are identified comprise blocks of data. Thus preferably the video image data and/or other sensor data is divided into blocks for the purposes of comparing the video image data and/or other sensor data. The blocks of data preferably comprise square arrays of data elements, e.g. 32×32 or 64×64 pixels (although any suitable and desired shape and size of blocks may be used). Identifying features in regions (e.g. blocks) of the video image data or other sensor data helps to simplify the processing task of identifying such features by reducing the area over which features are identified (and thus the amount of data that has to be processed).
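A minimal sketch of such block-wise feature identification is given below, assuming a grayscale frame held as a NumPy array and OpenCV's goodFeaturesToTrack as a stand-in detector. The 32×32 block size follows the example in the description, while the detector choice and per-block limit are assumptions.

```python
# Illustrative sketch of per-block feature identification; block size and detector
# are assumptions, as the patent only requires processing per region/block.
import numpy as np
import cv2

BLOCK = 32  # e.g. 32x32 pixel blocks, per the example in the description

def features_per_block(gray: np.ndarray, max_per_block: int = 8):
    """Detect features independently in each block of an 8-bit grayscale frame.

    Returns feature coordinates in full-frame image space.
    """
    h, w = gray.shape
    features = []
    for y0 in range(0, h - BLOCK + 1, BLOCK):
        for x0 in range(0, w - BLOCK + 1, BLOCK):
            block = gray[y0:y0 + BLOCK, x0:x0 + BLOCK]
            corners = cv2.goodFeaturesToTrack(block, max_per_block, 0.01, 3)
            if corners is not None:
                # offset block-local coordinates back into the full frame
                features.extend((x0 + c[0][0], y0 + c[0][1]) for c in corners)
    return features
```

Because each detector call only sees a single block, the working set per call stays small, which reflects the description's point that restricting the search area reduces the amount of data to be processed.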

The step of identifying features in the video image data or other sensor data is preferably performed individually for (e.g. each of) the video camera(s) and/or sensor(s). In one embodiment, once this has been performed, the same or similar features that have been identified in the video image data or other sensor data from the plurality of video camera(s) and/or sensor(s) are matched to each other.
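The matching step could look roughly like the sketch below, which pairs features detected in two cameras' frames using ORB descriptors and brute-force Hamming matching. Both the descriptor and the matcher are assumptions for illustration; the patent does not specify a particular matching technique.

```python
# Illustrative sketch of matching the same/similar features between two cameras' frames.
# ORB + brute-force Hamming matching is an assumption, not the patent's prescribed method.
import cv2

def match_features(gray_a, gray_b, max_matches: int = 50):
    """Return matched (point_in_A, point_in_B) pairs between two grayscale frames."""
    orb = cv2.ORB_create()
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    if des_a is None or des_b is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    # each match pairs a feature seen by camera A with its counterpart seen by camera B
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in matches[:max_matches]]
```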

Claims

1