
Video conferencing system

Patent number
US11528451B2
Publication date
2022-12-13
Applicant
Eyecon AS (Stavanger, NO)
Inventors
Jan Ove Haaland; Eivind Nag; Joar Vaage
IPC classification
H04N7/15; G06T7/00; G06T17/20
Technical field
video, camera, sensor, image, virtual camera
Region: Stavanger

Abstract

A method of capturing data for use in a video conference includes capturing data of a first party at a first location using an array of one or more video cameras and/or one or more sensors. The three-dimensional position(s) of one or more features represented in the data captured by the video camera(s) and/or sensor(s) are determined. A virtual camera positioned at a three-dimensional virtual camera position is defined. The three-dimensional position(s) determined for the feature(s) are transformed into a common coordinate system to form a single view of the feature(s) as appearing to have been captured from the virtual camera. The video image and/or sensor data of the feature(s) viewed from the perspective of the virtual camera and/or data representative of the transformed three-dimensional position(s) of the feature(s) are then transmitted or stored.
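The core geometric step in the abstract — transforming the features' three-dimensional positions into the coordinate frame of a virtual camera and forming a single view from its perspective — can be sketched as a standard look-at transform followed by a pinhole projection. This is an illustrative sketch using NumPy, not the patent's implementation; the function names, camera parameters, and feature positions are assumptions.

```python
import numpy as np

def look_at(cam_pos, target, up=np.array([0.0, 0.0, 1.0])):
    """Build a world-to-camera rotation for a virtual camera at cam_pos
    looking toward target (a conventional look-at construction)."""
    forward = target - cam_pos
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    # Rows are the camera axes: x = right, y = up, z = forward.
    return np.stack([right, true_up, forward])

def project(points_world, cam_pos, R, focal=1.0):
    """Transform world-frame 3-D feature positions into the virtual
    camera's frame, then apply a simple pinhole projection to 2-D."""
    pts_cam = (points_world - cam_pos) @ R.T
    return focal * pts_cam[:, :2] / pts_cam[:, 2:3]

# Feature positions recovered from the real cameras/sensors, expressed
# in a common world coordinate system (illustrative values).
features = np.array([[0.0, 2.0, 1.6], [0.2, 2.1, 1.5]])
cam_pos = np.array([0.0, 0.0, 1.6])   # hypothetical virtual camera position
R = look_at(cam_pos, features.mean(axis=0))
uv = project(features, cam_pos, R)    # 2-D view "as seen" by the virtual camera
```

In practice the captured pixels or mesh vertices would be re-rendered through such a transform, but the coordinate-frame change itself is the step the abstract describes.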

Description

In one embodiment, particular (e.g. identified) features in the video image data and/or the sensor data are selected based on image recognition of those features, e.g. in addition to selecting features based on their three-dimensional positions. This may allow the participant(s) and their face(s) to be selected from the video image data and/or the sensor data.
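The two selection criteria described here — a recognition label and a three-dimensional position test — could be combined as a simple filter. A minimal sketch, assuming hypothetical per-feature records with a `label` from a recognition stage and a triangulated `pos`; all names and values are illustrative, not from the patent.

```python
import numpy as np

# Hypothetical per-feature records: an image-recognition label plus a
# triangulated 3-D position in the common coordinate system.
features = [
    {"label": "face",  "pos": np.array([0.1, 2.0, 1.6])},
    {"label": "chair", "pos": np.array([1.5, 3.0, 0.5])},
    {"label": "face",  "pos": np.array([4.0, 9.0, 1.6])},  # outside the volume
]

def select_features(features, wanted_labels, lo, hi):
    """Keep features whose recognised label matches AND whose 3-D
    position falls inside an axis-aligned selection volume."""
    return [
        f for f in features
        if f["label"] in wanted_labels
        and np.all(f["pos"] >= lo) and np.all(f["pos"] <= hi)
    ]

selected = select_features(
    features, {"face"},
    lo=np.array([-2.0, 0.0, 0.0]), hi=np.array([2.0, 4.0, 2.5]),
)
# Only the first face lies inside the selection volume.
```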

When multiple virtual cameras are defined, different features may be selected to be shown from the perspective of the different virtual cameras respectively. For example, each virtual camera may be used to portray a single selected feature (e.g. a feature of a participant, such as their eyes, nose or mouth) from the perspective of that virtual camera.

Once the features falling within the volume have been selected for further processing, the video image data and/or the sensor data (e.g. of the selected features) from the video camera(s) in the array are used (e.g. combined) to form a single, composite stream of video image data and/or sensor data that appears to have been captured from the perspective of the virtual camera. Thus the method preferably comprises (and the processing circuitry is configured for) combining the video image data from the one or more video cameras and/or the data captured by the one or more sensors to form the single view of the feature(s) as appearing to have been captured from the virtual camera. Preferably, the video image data and/or the sensor data (e.g. of the selected features) are processed (e.g. combined) such that the face(s), eye(s) and/or body of the participant(s) in the captured data are oriented perpendicularly to the direction to them from the virtual camera.
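Orienting a face perpendicularly to the direction from the virtual camera amounts to rotating the feature so that its surface normal points back along the viewing direction. One way to compute such a rotation is the Rodrigues formula for aligning one unit vector with another; the sketch below is an assumption about how this step might be realised, with illustrative normal and direction values, not the patent's method.

```python
import numpy as np

def rotation_between(a, b):
    """Rodrigues rotation matrix taking unit vector a onto unit vector b."""
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    K = np.array([[0, -v[2], v[1]],
                  [v[2], 0, -v[0]],
                  [-v[1], v[0], 0]])
    if np.isclose(c, -1.0):           # opposite vectors: 180-degree turn
        axis = np.eye(3)[np.argmin(np.abs(a))]
        v = np.cross(a, axis)
        v /= np.linalg.norm(v)
        K = np.array([[0, -v[2], v[1]],
                      [v[2], 0, -v[0]],
                      [-v[1], v[0], 0]])
        return np.eye(3) + 2.0 * K @ K
    return np.eye(3) + K + K @ K / (1.0 + c)

# Estimated face normal from the sensor data, and the direction from the
# virtual camera to the face (both illustrative values).
normal = np.array([0.3, -1.0, 0.0])
to_face = np.array([0.0, 1.0, 0.0])
# Rotate so the face normal points back at the virtual camera, i.e. the
# face plane ends up perpendicular to the viewing direction.
R = rotation_between(normal, -to_face)
aligned = R @ (normal / np.linalg.norm(normal))
```

The same rotation would then be applied to the feature's captured geometry or texture before it is composited into the single virtual-camera view.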

Claims

1