
Processing method and processing system for video data

Patent number
US10841490B2
Publication date
2020-11-17
Applicant
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. (Munich, DE)
Inventors
Martin Lasak; Louay Bassbouss; Stephan Steglich
IPC classification
H04N5/232; H04N21/6587; H04N21/81
Technical field
FOV, video, datasets, client, dynamic, static
Region: Munich

Abstract

A processing method is provided for video data that can be displayed on at least one display device. A predetermined number of static field-of-view (FOV) datasets are precalculated from the video data and stored; video data for the temporal transitions between the stored static FOV datasets are further calculated and stored as dynamic FOV datasets (transition data). A static or dynamic initial FOV and a static or dynamic target FOV are then specifically selected for this purpose, immediately or at a later point in time, in particular by a user, and the video data corresponding in time to the selected FOV datasets, including the dynamic transition data between the initial FOV and the target FOV, can be or are streamed.
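The abstract distinguishes two kinds of precalculated data: static FOV datasets and dynamic FOV datasets (transition data) between pairs of them. The following minimal Python sketch illustrates one possible data model for these datasets and the lookup of a stored transition for a selected initial/target pair; the class, field, and function names (StaticFOVDataset, DynamicFOVDataset, find_transition, ...) are illustrative assumptions, not terms defined by the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class StaticFOVDataset:
    """Precalculated video segments for one fixed field of view."""
    fov_id: str
    view_direction: Tuple[float, float]  # e.g. (yaw, pitch) in degrees
    segments: List[bytes]                # one encoded segment per period tG

@dataclass
class DynamicFOVDataset:
    """Precalculated transition video between two static FOVs."""
    source_fov: str                      # initial FOV
    target_fov: str                      # target FOV
    segments: List[bytes]                # direction of view changes segment by segment

def find_transition(transitions: List[DynamicFOVDataset],
                    initial_fov: str, target_fov: str) -> DynamicFOVDataset:
    """Look up the stored transition data for a selected (initial, target) pair."""
    for t in transitions:
        if t.source_fov == initial_fov and t.target_fov == target_fov:
            return t
    raise KeyError(f"no precalculated transition {initial_fov} -> {target_fov}")
```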

Description


The video data (FOV partial datasets) corresponding in time to the selected FOV datasets, including the dynamic transition data between the initial FOV and the target FOV, are then streamed to the client in the correct sequence. The already precalculated and stored FOV datasets are accessed for this purpose.
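A minimal sketch of how a server could assemble the FOV partial datasets in the correct playback sequence from the precalculated stores; the function and parameter names (build_stream_sequence, switch_index, ...) are hypothetical and not taken from the patent.

```python
def build_stream_sequence(static_store, transition_store,
                          initial_fov, target_fov, switch_index):
    """Return the segments in playback order: initial static FOV up to the
    switch point, then the dynamic transition data, then the target static
    FOV. All segments come from already precalculated and stored datasets."""
    initial = static_store[initial_fov]                        # StaticFOVDataset
    target = static_store[target_fov]                          # StaticFOVDataset
    transition = transition_store[(initial_fov, target_fov)]   # DynamicFOVDataset

    resume_index = switch_index + len(transition.segments)
    return (initial.segments[:switch_index]
            + transition.segments
            + target.segments[resume_index:])
```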

FIG. 5 shows how, after the selection of an initial FOV dataset 15 and the target FOV dataset 17, a dynamic FOV dataset 16, in whose segments the direction of view 4 changes successively, is inserted. The streaming of the datasets 15, 16, 17 to the video client 13 is symbolized by the filmstrips 14.
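One way the successively changing direction of view in the segments of a dynamic FOV dataset could be derived is sketched below, assuming the direction of view is given as a (yaw, pitch) pair and changes in equal steps; this simple linear interpolation is an illustrative assumption, not the method prescribed by the patent.

```python
def interpolate_view_directions(start, end, n_segments):
    """Produce one (yaw, pitch) direction of view per transition segment,
    moving successively from the initial FOV's direction to the target
    FOV's direction (equal linear steps; angle wrap-around is ignored)."""
    directions = []
    for i in range(1, n_segments + 1):
        t = i / n_segments
        directions.append((start[0] + t * (end[0] - start[0]),
                           start[1] + t * (end[1] - start[1])))
    return directions

# Example: three transition segments from yaw 0° to yaw 90°
# -> [(30.0, 0.0), (60.0, 0.0), (90.0, 0.0)]
print(interpolate_view_directions((0.0, 0.0), (90.0, 0.0), 3))
```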

The video sequences (static FOV datasets and dynamic FOV datasets) are synchronized in such a way that seamless changes between the video sequences are possible at defined points in the temporal sequence (transition points). For this purpose, the positions of all time points n*tG (36 in FIG. 3) in the videos are registered (e.g. through byte offsets or storage in separate files).
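A sketch of how the positions of the time points n*tG could be registered as byte offsets, assuming each segment covers a fixed duration tG and the encoded segment sizes are known; the function and field names are illustrative assumptions.

```python
def register_transition_points(segment_sizes, t_g):
    """For each transition point n*tG, record the playback time and the byte
    offset at which the corresponding segment starts in the stored video, so
    that a seamless change between synchronized sequences is possible."""
    points, offset = [], 0
    for n, size in enumerate(segment_sizes):
        points.append({"n": n, "time": n * t_g, "byte_offset": offset})
        offset += size
    return points
```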

The concatenation of the resulting video (37 in FIG. 3), which requires relatively little computation, is carried out for each video client 13 after the call from the CDN/storage server 5, either in the backend or by each video client 13 itself.
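A sketch of this low-cost concatenation under the assumption that registered byte offsets like those above are available: whole segments are joined byte-wise at the transition points, with no re-encoding, whether this runs in the backend or in the video client. The function names and signatures are hypothetical.

```python
def slice_at_transition_points(video: bytes, points, start_n: int, end_n=None) -> bytes:
    """Cut a stored FOV video between transition points start_n*tG and end_n*tG
    using the registered byte offsets (end_n=None means 'until the end')."""
    start = points[start_n]["byte_offset"]
    end = (points[end_n]["byte_offset"]
           if end_n is not None and end_n < len(points) else len(video))
    return video[start:end]

def concatenate_resulting_video(initial_video, transition_video, target_video,
                                initial_points, target_points,
                                switch_n, n_transition_segments):
    """Join the initial FOV video up to the switch point, the full transition
    video, and the target FOV video from the resume point onward."""
    resume_n = switch_n + n_transition_segments
    tail = (slice_at_transition_points(target_video, target_points, resume_n)
            if resume_n < len(target_points) else b"")
    return (slice_at_transition_points(initial_video, initial_points, 0, switch_n)
            + transition_video
            + tail)
```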

e) Display of the video

The video client 13 then begins (at time T0) a session with the replay of an arbitrary, precalculated static FOV dataset 15.

Claims

1