
Sub-block MV inheritance between color components

Patent Number
US11659192B2
Publication Date
2023-05-23
Applicants
Beijing Bytedance Network Technology Co., Ltd. (Beijing, CN); Bytedance Inc. (Los Angeles, CA, US)
Inventors
Kai Zhang; Li Zhang; Hongbin Liu; Yue Wang
IPC Classification
H04N19/52; H04N19/176; H04N19/186; H04N19/55; H04N19/119; H04N19/136
Technical Field (keywords)
mv, motion, signshift, sub, mv0x, mv1x, mv0y, vectors, cu, mv1y
Region: Beijing

Abstract

Devices, systems and methods for sub-block based prediction are described. In a representative aspect, a method for video processing includes partitioning a first component of a current video block into a first set of sub-blocks and partitioning a second component of the current video block into a second set of sub-blocks. A sub-block of the second component corresponds to one or more sub-blocks of the first component. The method also includes deriving, based on a color format of the current video block, motion vectors for a sub-block of the second component based on motion vectors for one or more corresponding sub-blocks of the first color component.
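The abstract states that each sub-block of the second (chroma) component corresponds to one or more sub-blocks of the first (luma) component, with the correspondence depending on the color format. The sketch below is a minimal illustration, assuming equally sized sub-block grids and using hypothetical names (`colocated_luma_subblocks`, grid indices `cx`, `cy`); it shows one possible correspondence consistent with that description, not the patented method itself.

```python
# Illustrative sketch: map a chroma sub-block grid position to the co-located
# luma sub-block positions for common chroma formats. Names are hypothetical.

def colocated_luma_subblocks(cx, cy, color_format):
    """Return (x, y) grid positions of luma sub-blocks co-located with the
    chroma sub-block at (cx, cy), assuming equally sized sub-block grids."""
    if color_format == "4:4:4":
        # No chroma subsampling: one luma sub-block per chroma sub-block.
        return [(cx, cy)]
    if color_format == "4:2:2":
        # Horizontal subsampling only: two luma sub-blocks side by side.
        return [(2 * cx, cy), (2 * cx + 1, cy)]
    if color_format == "4:2:0":
        # Horizontal and vertical subsampling: a 2x2 group of luma sub-blocks.
        return [(2 * cx + dx, 2 * cy + dy) for dy in (0, 1) for dx in (0, 1)]
    raise ValueError(f"unsupported color format: {color_format}")


print(colocated_luma_subblocks(0, 0, "4:2:0"))
# [(0, 0), (1, 0), (0, 1), (1, 1)]
```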

說(shuō)明書

CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 17/071,412, filed on Oct. 15, 2020, which is a continuation of International Application No. PCT/IB2019/055247, filed on Jun. 21, 2019, which claims priority to and the benefits of International Patent Application No. PCT/CN2018/092118, filed on Jun. 21, 2018, and International Patent Application No. PCT/CN2018/114931, filed on Nov. 10, 2018. All the aforementioned patent applications are hereby incorporated by reference in their entireties.

TECHNICAL FIELD

This patent document is directed generally to image and video coding technologies.

BACKGROUND

Motion compensation is a technique in video processing that predicts a frame in a video from previous and/or future frames by accounting for motion of the camera and/or objects in the video. Motion compensation can be used in the encoding and decoding of video data for video compression.

SUMMARY

Devices, systems and methods related to sub-block based prediction for image and video coding are described.

權(quán)利要求

What is claimed is:

1. A method of processing video data, comprising:
determining, during a conversion between a current video unit of a video which comprises a luma block and at least one chroma block and a bitstream of the video, motion vectors of control points for the luma block based on an affine mode;
dividing the luma block into luma sub-blocks, wherein each luma sub-block has a second size;
dividing a chroma block of the at least one chroma block into chroma sub-blocks, wherein each chroma sub-block has a first size;
determining a luma motion vector for each luma sub-block based on the motion vectors of the control points;
deriving a chroma motion vector for each chroma sub-block based on luma motion vectors of multiple luma sub-blocks and a color format of the current video unit; and
reconstructing the luma block based on the luma motion vector of each luma sub-block;
wherein the chroma block comprises at least one first chroma group and the first chroma group includes two or four chroma sub-blocks according to the color format of the current video unit, and wherein motion vectors of the chroma sub-blocks included in the first chroma group are the same, and
wherein predicted samples of the chroma block are derived using the chroma sub-blocks having the first size equal to the second size in a case that a color format applied for the luma block and the chroma block is 4:2:0 or 4:2:2.

2. The method of claim 1, wherein the motion vectors of the chroma sub-blocks included in the first chroma group are derived based on applying a scaling factor to an intermediate motion vector MV*.

3. The method of claim 2, wherein the first chroma group includes two chroma sub-blocks in a case that the color format is 4:2:2, two luma sub-blocks corresponding to the two chroma sub-blocks have motion vectors MV0 and MV1, respectively, and wherein the intermediate motion vector MV* is derived based on the MV0 and the MV1.

4. The method of claim 3, wherein the intermediate motion vector MV* is derived based on applying an offset-based averaging operation on the MV0 and the MV1.

5. The method of claim 4, wherein the intermediate motion vector MV* = Shift(MV0 + MV1, 1), wherein Shift(x, 1) = (x + offset) >> 1, offset is equal to 0 or 1, and wherein >> represents a right shift operation.

6. The method of claim 2, wherein the first chroma group includes four chroma sub-blocks in a case that the color format is 4:2:0, a top-left one of four luma sub-blocks has motion vector MV0, a top-right one of the four luma sub-blocks has motion vector MV1, a bottom-left one of the four luma sub-blocks has motion vector MV2 and a bottom-right one of the four luma sub-blocks has motion vector MV3, and wherein the four luma sub-blocks correspond to the four chroma sub-blocks and the intermediate motion vector MV* is derived at least based on the MV0 and the MV3.

7. The method of claim 6, wherein the intermediate motion vector MV* is derived based on applying an offset-based averaging operation at least on the MV0 and MV3.

8. The method of claim 1, further comprising:
reconstructing the chroma block based on the motion vectors of the chroma sub-blocks included in the first chroma group.

9. The method of claim 8, wherein reconstructing the chroma block is performed in units of the first chroma group.

10. The method of claim 1, wherein the conversion includes encoding the current video unit into the bitstream.

11. The method of claim 1, wherein the conversion includes decoding the current video unit from the bitstream.

12. An apparatus for processing video data comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions, upon execution by the processor, cause the processor to:
determine, during a conversion between a current video unit of a video which comprises a luma block and at least one chroma block and a bitstream of the video, motion vectors of control points for the luma block based on an affine mode;
divide the luma block into luma sub-blocks, wherein each luma sub-block has a second size;
divide a chroma block of the at least one chroma block into chroma sub-blocks, wherein each chroma sub-block has a first size;
determine a luma motion vector for each luma sub-block based on the motion vectors of the control points;
derive a chroma motion vector for each chroma sub-block based on luma motion vectors of multiple luma sub-blocks and a color format of the current video unit; and
reconstruct the luma block based on the luma motion vector of each luma sub-block;
wherein the chroma block comprises at least one first chroma group and the first chroma group includes two or four chroma sub-blocks according to the color format of the current video unit, and wherein motion vectors of the chroma sub-blocks included in the first chroma group are the same, and
wherein predicted samples of the chroma block are derived using the chroma sub-blocks having the first size equal to the second size in a case that a color format applied for the luma block and the chroma block is 4:2:0 or 4:2:2.

13. The apparatus of claim 12, wherein the motion vectors of the chroma sub-blocks included in the first chroma group are derived based on applying a scaling factor to an intermediate motion vector MV*.

14. The apparatus of claim 13, wherein the first chroma group includes two chroma sub-blocks in a case that the color format is 4:2:2, two luma sub-blocks corresponding to the two chroma sub-blocks have motion vectors MV0 and MV1, respectively, and wherein the intermediate motion vector MV* is derived based on the MV0 and the MV1.

15. The apparatus of claim 14, wherein the intermediate motion vector MV* is derived based on applying an offset-based averaging operation on the MV0 and the MV1.

16. The apparatus of claim 15, wherein the intermediate motion vector MV* = Shift(MV0 + MV1, 1), wherein Shift(x, 1) = (x + offset) >> 1, offset is equal to 0 or 1, and wherein >> represents a right shift operation.

17. The apparatus of claim 12, wherein the instructions, upon execution by the processor, further cause the processor to:
reconstruct the chroma block based on the motion vectors of the chroma sub-blocks included in the first chroma group.

18. The apparatus of claim 17, wherein the chroma block is reconstructed in units of the first chroma group.

19. A non-transitory computer-readable storage medium storing instructions that cause a processor to:
determine, during a conversion between a current video unit of a video which comprises a luma block and at least one chroma block and a bitstream of the video, motion vectors of control points for the luma block based on an affine mode;
divide the luma block into luma sub-blocks, wherein each luma sub-block has a second size;
divide a chroma block of the at least one chroma block into chroma sub-blocks, wherein each chroma sub-block has a first size;
determine a luma motion vector for each luma sub-block based on the motion vectors of the control points;
derive a chroma motion vector for each chroma sub-block based on luma motion vectors of multiple luma sub-blocks and a color format of the current video unit; and
reconstruct the luma block based on the luma motion vector of each luma sub-block;
wherein the chroma block comprises at least one first chroma group and the first chroma group includes two or four chroma sub-blocks according to the color format of the current video unit, and wherein motion vectors of the chroma sub-blocks included in the first chroma group are the same, and
wherein predicted samples of the chroma block are derived using the chroma sub-blocks having the first size equal to the second size in a case that a color format applied for the luma block and the chroma block is 4:2:0 or 4:2:2.

20. A non-transitory computer-readable recording medium storing a bitstream of a video which is generated by a method performed by a video processing apparatus, wherein the method comprises:
determining, during a conversion between a current video unit of the video which comprises a luma block and at least one chroma block and a bitstream of the video, motion vectors of control points for the luma block based on an affine mode;
dividing the luma block into luma sub-blocks, wherein each luma sub-block has a second size;
dividing a chroma block of the at least one chroma block into chroma sub-blocks, wherein each chroma sub-block has a first size;
determining a luma motion vector for each luma sub-block based on the motion vectors of the control points;
deriving a chroma motion vector for each chroma sub-block based on luma motion vectors of multiple luma sub-blocks and a color format of the current video unit; and
generating the bitstream based on the above determining and dividing, wherein the conversion includes reconstructing the luma block based on the luma motion vector of each luma sub-block;
wherein the chroma block comprises at least one first chroma group and the first chroma group includes two or four chroma sub-blocks according to the color format of the current video unit, and wherein motion vectors of the chroma sub-blocks included in the first chroma group are the same, and
wherein predicted samples of the chroma block are derived using the chroma sub-blocks having the first size equal to the second size in a case that a color format applied for the luma block and the chroma block is 4:2:0 or 4:2:2.
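Claims 2-7 and 13-16 spell out the arithmetic: an intermediate motion vector MV* is formed from the co-located luma motion vectors using Shift(x, 1) = (x + offset) >> 1, and a scaling factor is then applied to obtain the motion vector shared by all chroma sub-blocks in the group. The following is a minimal sketch of that averaging step, assuming integer motion-vector components in some fixed-point unit; the function names are hypothetical and the subsequent chroma scaling is omitted, so this is an illustration of the claimed formulas rather than a conformant codec implementation.

```python
# Sketch of the intermediate MV* derivation described in claims 3-7 and 14-16.
# Motion vectors are (x, y) tuples of integers in a fixed-point unit.

def shift(x, offset=0):
    """Shift(x, 1) = (x + offset) >> 1, with offset equal to 0 or 1."""
    return (x + offset) >> 1

def average_mv(mv_pair, offset=0):
    """Offset-based averaging of two motion vectors, component-wise."""
    (x0, y0), (x1, y1) = mv_pair
    return (shift(x0 + x1, offset), shift(y0 + y1, offset))

def intermediate_mv(luma_mvs, color_format, offset=0):
    """Derive MV* for one chroma group from its co-located luma sub-block MVs."""
    if color_format == "4:2:2":
        # Two luma sub-blocks (MV0, MV1) correspond to the two-sub-block group.
        return average_mv(luma_mvs[:2], offset)
    if color_format == "4:2:0":
        # Four luma sub-blocks; MV* is derived at least from the top-left (MV0)
        # and bottom-right (MV3) ones. This sketch averages exactly those two.
        return average_mv((luma_mvs[0], luma_mvs[3]), offset)
    raise ValueError("sketch only covers the 4:2:2 and 4:2:0 cases")

# Example: a 4:2:0 group with luma MVs MV0..MV3. All four chroma sub-blocks in
# the group would then use a scaled version of MV*.
mv_star = intermediate_mv([(8, 4), (10, 4), (8, 6), (12, 8)], "4:2:0", offset=1)
print(mv_star)  # (10, 6)
```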