In step E213, in a standard manner, a predicted block P is constructed according to the prediction mode chosen in step E211. The prediction residue R is then obtained by calculating, for each pixel, the difference between the original current block and the predicted block P.
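By way of illustration only, a minimal sketch of the residue computation of step E213 could be written as follows (Python/NumPy; the function name, block representation and 8-bit depth are assumptions made for the example, not part of the described method):

```python
import numpy as np

def prediction_residue(original_block: np.ndarray, predicted_block: np.ndarray) -> np.ndarray:
    """Pixel-by-pixel difference between the original current block and
    the predicted block P (step E213). A signed integer type is used so
    that negative residue values are preserved."""
    return original_block.astype(np.int16) - predicted_block.astype(np.int16)
```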
In a step E214, a frequency transform is applied to the prediction residue block R in order to produce the block RT comprising transform coefficients. The transform may, for example, be a DCT-type transform. The transform to be used may be chosen from a predetermined set of transforms ET, the decoder then being informed of which transform was used.
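A minimal sketch of such a DCT-type transform of step E214, applied separably to an N×N residue block, might look as follows (the orthonormal DCT-II matrix and the helper names are assumptions for the example; a practical codec would typically use an integer approximation of the transform):

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    x = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def transform_residue(residue: np.ndarray) -> np.ndarray:
    """Separable 2D DCT of the residue block R, producing the block RT
    of transform coefficients (step E214): RT = C R C^T."""
    c = dct_matrix(residue.shape[0])
    return c @ residue.astype(np.float64) @ c.T
```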
In a step E215, the transformed residue block RT is quantized, for example using a scalar quantization with quantization step δ1. This produces the quantized transformed prediction residue block RTQ.
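A minimal sketch of such a uniform scalar quantization of step E215 could be the following (the rounding rule and the function name are assumptions for the example; practical codecs use more elaborate quantization, e.g. with dead zones and rate-distortion optimisation):

```python
import numpy as np

def quantize(rt: np.ndarray, delta1: float) -> np.ndarray:
    """Uniform scalar quantization of the transformed residue RT with
    quantization step delta1, producing the quantized block RTQ
    (step E215). Each coefficient is rounded to the nearest level."""
    return np.rint(rt / delta1).astype(np.int32)
```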
In a step E216, the coefficients of the quantized block RTQ are coded by an entropy encoder. For example, the entropy coding specified in the HEVC standard can be used.
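The HEVC standard specifies CABAC for this entropy coding. As a much simpler, purely illustrative stand-in, the quantized coefficients could be binarized with signed exponential-Golomb code words; the scan order, the signed mapping and the helper names below are assumptions for the example and do not reproduce the HEVC coding:

```python
def exp_golomb_signed(value: int) -> str:
    """Signed order-0 exponential-Golomb code word as a bit string
    (illustrative stand-in for the entropy coding of step E216)."""
    mapped = 2 * value - 1 if value > 0 else -2 * value   # signed -> unsigned mapping
    code = bin(mapped + 1)[2:]                            # binary representation of (mapped + 1)
    return "0" * (len(code) - 1) + code                   # leading-zero prefix + binary part

def encode_block(rtq) -> str:
    """Concatenate the code words of the quantized coefficients RTQ,
    scanned here in raster order (a real codec would use a diagonal or
    zig-zag scan and context-adaptive arithmetic coding)."""
    return "".join(exp_golomb_signed(int(v)) for v in rtq.flatten())
```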
In a known manner, the current block is decoded by dequantizing the coefficients of the quantized block RTQ, then applying the inverse transform to the dequantized coefficients to obtain the decoded prediction residue. The prediction is then added to the decoded prediction residue in order to reconstruct the current block and obtain its decoded version. The decoded version of the current block can subsequently be used to spatially predict other neighbouring blocks of the image, or to predict blocks of other images by inter-image prediction.
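A minimal sketch of this reconstruction, consistent with the transform and quantization sketches above (the function name, the use of the orthonormal DCT-II and the 8-bit pixel range are assumptions for the example), could be:

```python
import numpy as np

def reconstruct_block(rtq: np.ndarray, predicted_block: np.ndarray, delta1: float) -> np.ndarray:
    """Decode the current block: dequantize RTQ, apply the inverse
    transform to obtain the decoded prediction residue, add the
    prediction P, then clip to the 8-bit pixel range."""
    n = rtq.shape[0]
    k = np.arange(n).reshape(-1, 1)
    x = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)                      # same orthonormal DCT-II matrix as at encoding
    decoded_residue = c.T @ (rtq * delta1) @ c      # dequantization + inverse of RT = C R C^T
    reconstructed = predicted_block + decoded_residue
    return np.clip(np.rint(reconstructed), 0, 255).astype(np.uint8)
```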