
Video conferencing system

Patent number: US11528451B2
Publication date: 2022-12-13
Applicant: Eyecon AS (Stavanger, NO)
Inventors: Jan Ove Haaland; Eivind Nag; Joar Vaage
IPC classification: H04N7/15; G06T7/00; G06T17/20
Technical field keywords: video, camera, data, sensor, image, virtual, cameras
Region: Stavanger

Abstract

A method of capturing data for use in a video conference includes capturing data of a first party at a first location using an array of one or more video cameras and/or one or more sensors. The three-dimensional position(s) of one or more features represented in the data captured by the video camera(s) and/or sensor(s) are determined. A virtual camera positioned at a three-dimensional virtual camera position is defined. The three-dimensional position(s) determined for the feature(s) are transformed into a common coordinate system to form a single view of the feature(s) as appearing to have been captured from the virtual camera. The video image and/or sensor data of the feature(s) viewed from the perspective of the virtual camera and/or data representative of the transformed three-dimensional position(s) of the feature(s) are then transmitted or stored.
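As an informal illustration of the transformation described in the abstract (not part of the patent itself), the Python sketch below maps 3D feature positions from each sensor's local frame into a common coordinate system and projects them through a simple pinhole model of a virtual camera placed at a chosen 3D position. All names, poses and intrinsics are assumptions made for the example.

import numpy as np

def to_common_frame(points_sensor, R_sensor, t_sensor):
    # Map 3D points from a sensor's local frame into the common (world) frame.
    # R_sensor: 3x3 rotation of the sensor in the world; t_sensor: its 3D position.
    return points_sensor @ R_sensor.T + t_sensor

def project_virtual_camera(points_world, R_virt, t_virt, focal=1000.0, cx=640.0, cy=360.0):
    # Project world-frame points through a pinhole model of the virtual camera.
    # R_virt: camera orientation in the world (columns are the camera axes); t_virt: camera position.
    pts_cam = (points_world - t_virt) @ R_virt      # express the points in the camera frame
    uv = focal * pts_cam[:, :2] / pts_cam[:, 2:3]   # perspective division
    return uv + np.array([cx, cy])                  # shift to pixel coordinates

# Two sensors observing the same scene, merged into one view from the virtual camera.
pts_a = np.array([[0.1, 0.2, 2.0], [0.3, -0.1, 2.5]])   # features seen by sensor A
pts_b = np.array([[-0.2, 0.0, 1.8]])                     # features seen by sensor B
world = np.vstack([
    to_common_frame(pts_a, np.eye(3), np.zeros(3)),
    to_common_frame(pts_b, np.eye(3), np.array([0.5, 0.0, 0.0])),
])
pixels = project_virtual_camera(world, np.eye(3), np.array([0.0, 0.0, -1.0]))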

Description

When the video conference call comprises a one-to-many call or a many-to-many call, the video image data for the multiple different parties may be arranged in a collage for display to each of the other parties in a similar manner. In one embodiment, a collage is created of participants from multiple different parties, e.g. of participants from each of the parties except the party to whom the data is being transmitted and/or stored. This creates a virtual location in which the participants from multiple locations are combined (e.g. around a virtual table). For example, each participant (e.g. from multiple different locations) may be positioned evenly around the virtual table such that their orientation to the other participants is consistent.
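As a rough sketch of how participants from several locations might be placed evenly around a virtual table (an assumption about one possible implementation, not text from the patent), the following Python function spaces n seats uniformly on a circle, each facing the centre so that every participant's orientation to the others is consistent.

import math

def virtual_table_seats(n_participants, radius=1.5):
    # Place n participants evenly on a circle of the given radius (metres),
    # each facing the table centre so their orientation to the others is consistent.
    seats = []
    for i in range(n_participants):
        angle = 2.0 * math.pi * i / n_participants
        x, y = radius * math.cos(angle), radius * math.sin(angle)
        seats.append({"position": (x, y), "facing": math.atan2(-y, -x)})
    return seats

for seat in virtual_table_seats(5):   # e.g. five participants from different locations
    print(seat)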

In one embodiment, e.g. when a collage is created, the participant who is speaking (or other suitable (e.g. selected) feature(s)) is highlighted in the video image and/or sensor data stored and/or transmitted to the other party or parties. The speaking participant (or other suitable (e.g. selected) feature(s)) may be highlighted by increasing the size (i.e. scaling) and/or brightness of their representation in the video image and/or sensor data, or the other regions of the video image and/or sensor data (e.g. surrounding the speaking participant) may be de-emphasised, e.g. by reducing colour saturation or intensity. This helps to create emphasis or focus for the speaking participant or selected feature.
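One plausible way to realise this emphasis (my own sketch, not an implementation claimed in the patent) is to brighten the pixels covering the speaking participant and reduce colour saturation everywhere else; the example below assumes a binary mask for the speaker's region is already available, e.g. from the feature detection described earlier.

import cv2
import numpy as np

def emphasise_speaker(frame_bgr, speaker_mask, brighten=1.2, desaturate=0.4):
    # frame_bgr: HxWx3 uint8 image; speaker_mask: HxW boolean array covering the speaker.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    h, s, v = cv2.split(hsv)
    v = np.where(speaker_mask, np.clip(v * brighten, 0, 255), v)   # brighten the speaker
    s = np.where(speaker_mask, s, s * desaturate)                  # de-emphasise the surroundings
    out = cv2.merge([h, s, v]).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_HSV2BGR)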

In one embodiment the selected feature(s) may be displayed with their background removed. Instead, the selected feature(s) may be displayed on a neutral background or in a “virtual environment”, which replaces the actual environment that has been removed. The use of a virtual environment may also allow dynamic features, such as a virtual screen, to be included in the display.
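A minimal compositing sketch for this step, assuming a foreground mask for the selected feature(s) is already available (for example from segmentation); the background is replaced with either a flat neutral colour or a supplied virtual-environment image of the same size. Function and parameter names are illustrative only.

import numpy as np

def replace_background(frame, mask, virtual_env=None, neutral=(128, 128, 128)):
    # Keep the selected feature(s) where mask is True; elsewhere show either the
    # virtual-environment image (same HxWx3 shape as frame) or a flat neutral colour.
    if virtual_env is None:
        background = np.empty_like(frame)
        background[:] = neutral
    else:
        background = virtual_env
    return np.where(mask[..., None], frame, background)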

Claims

1