
Perceptual three-dimensional (3D) video coding based on depth information

Patent No.
US10091513B2
Publication Date
2018-10-02
Applicant
Texas Instruments Incorporated (Dallas, TX, US)
Inventors
Do-Kyoung Kwon; Madhukar Budagavi; Ming-Jun Chen
IPC Classification
H04N19/00; H04N19/154; H04N19/597; H04N19/176; H04N19/124; H04N19/142; H04N13/00
Technical Field (keywords)
video,depth,perceptual,frame,in,2d,macroblock,frames,coding,quality
Region: Dallas, TX, US

Abstract

A method for encoding a multi-view frame in a video encoder is provided that includes computing a depth quality sensitivity measure for a multi-view coding block in the multi-view frame, computing a depth-based perceptual quantization scale for a 2D coding block of the multi-view coding block, wherein the depth-based perceptual quantization scale is based on the depth quality sensitivity measure and a base quantization scale for the 2D frame that includes the 2D coding block, and encoding the 2D coding block using the depth-based perceptual quantization scale.
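
The control flow the abstract describes can be sketched in Python roughly as follows. This is a minimal illustration only: the block layout and helper names are hypothetical, the sensitivity measure uses equation (2) given later in the description, and the mapping from the sensitivity measure to a quantization scale (a simple inverse scaling here) is an assumption, not the derivation defined in the patent.

    def depth_quality_sensitivity(d, d0, c0=1.0, c1=0.0):
        # Equation (2) from the description: f(d, d0) = c0 * |d - d0| + c1
        return c0 * abs(d - d0) + c1

    def depth_based_perceptual_qscale(base_qscale, sensitivity):
        # Placeholder mapping (assumption, not the patent's formula): give more
        # quality-sensitive blocks a finer (smaller) quantization scale.
        return base_qscale / (1.0 + sensitivity)

    def encode_frame(blocks, base_qscale, d0):
        # blocks: list of dicts, one per 2D coding block, each holding a
        # representative depth "d" and raw "pixels" (hypothetical layout).
        encoded = []
        for block in blocks:
            s = depth_quality_sensitivity(block["d"], d0)
            q = depth_based_perceptual_qscale(base_qscale, s)
            # Stand-in for the actual transform/quantization/entropy-coding stage.
            encoded.append({"qscale": q, "pixels": block["pixels"]})
        return encoded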

Description

f(d, d0) = c0 · |d − d0| + c1,    (2)
where c0 and c1 are tuning parameters. The tuning parameter c0 is a scaling factor that controls how strongly perceptual quality sensitivity grows with depth; for example, if c0 = 1, quality sensitivity is exactly proportional to the depth difference |d − d0|. The tuning parameter c1 may be used to ensure that some amount of perceptual quality improvement is still performed for macroblocks in which d = dfar (or, more generally, d = d0).
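
As a worked example with assumed values c0 = 1, c1 = 0.5, and d0 = dfar = 100: a macroblock at depth d = 40 gets f(40, 100) = 1 · |40 − 100| + 0.5 = 60.5, while a macroblock at the farthest depth gets f(100, 100) = 0.5, so the c1 term still allows a small amount of perceptual quality improvement for the farthest macroblocks.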

In some applications, d0 = dfar, where dfar is the farthest depth of a macroblock in the frame. If d0 = dfar is used, the implication is that the farthest object in a scene has the least quality sensitivity. The value of dfar may, for example, be computed as the maximum of the depths of the macroblocks in a frame. In some applications, rather than using dfar, the value of d0 may be set by a user based on known characteristics of the video sequences for a particular application. In another example, video analytics may be performed as the video is captured to determine the depth range of the most visually important area in a scene, and the value of d0 may be adapted accordingly. The values of c0 and c1 may also be adapted based on scene analysis performed by the video analytics.
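
A minimal Python sketch of computing dfar follows, assuming a per-pixel depth map, 16x16 macroblocks, and the mean of a macroblock's depth samples as its depth (the aggregation rule and array layout are assumptions, not specified above):

    import numpy as np

    def farthest_macroblock_depth(depth_map, mb_size=16):
        # depth_map: 2D NumPy array of per-pixel depth values for one frame.
        h, w = depth_map.shape
        mb_depths = []
        for y in range(0, h, mb_size):
            for x in range(0, w, mb_size):
                # Per-macroblock depth taken as the mean of its samples (assumed).
                mb_depths.append(float(depth_map[y:y + mb_size, x:x + mb_size].mean()))
        # dfar: the farthest (maximum) macroblock depth in the frame.
        return max(mb_depths)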

Other suitable depth quality sensitivity functions may also be used that represent the sensitivity of the perceptual quality of the depth of pixels in a macroblock to the relative depth of those pixels in the frame. For example, the depth quality sensitivity function may be a multi-order polynomial function of d and d0.
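
As one hypothetical instance of such an alternative, a second-order polynomial in the absolute depth difference could be used; the form and coefficients below are chosen purely for illustration:

    def polynomial_depth_quality_sensitivity(d, d0, coeffs=(0.5, 1.0, 0.02)):
        # Hypothetical polynomial sensitivity in x = |d - d0|:
        # f(d, d0) = coeffs[0] + coeffs[1]*x + coeffs[2]*x**2
        x = abs(d - d0)
        return sum(c * x ** k for k, c in enumerate(coeffs))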

Given a depth quality sensitivity function for a macroblock, depth-based perceptual distortion can be modeled by

Claims

1