The sum of squared differences between adjacent block edge pixels is a measure of blockiness that tends to increase with the amount of compression. A similar measure taken just inside the block edge provides an estimate of the blockiness of the original image. Lowering the blockiness value to this estimate, while limiting each coefficient change to its quantization level, can reduce both apparent blockiness and RMS (root mean square) error. Weighting spatial frequency components by their spatial frequency and then summing their squared values gives a measure of image roughness. Some further improvement results from re-reducing the within-block image roughness when reducing the blockiness has increased it. This method has been presented earlier [Ref. 1]. Here we also describe a DCT amplitude adjustment procedure that itself reduces RMS error and improves the performance of the smoothing algorithm.
The goal of this project is the development of algorithms for improving the quality of images that have already been compressed by a JPEG-like scheme in which the image is divided up into blocks, each block is converted to DCT coefficients, and these coefficients are then quantized. In JPEG there follows a stage of lossless encoding that we ignore for the present purpose. We assume that we have the quantized coefficients and the matrix of quantization values. Our problem is to find an image that is more like the original image than the image obtained simply by performing the inverse DCT on the quantized coefficients.
Figure 1 shows an original image. Figure 2 shows that image after quantization and restoration without any de-blocking. Figure 3 shows the corresponding image after our de-blocking algorithm is applied. Our method can be described as follows:
1) Adjust the amplitudes of the DCT coefficients to reduce the RMS error.
2) Measure the blockiness of the image and estimate how blocky it should be.
3) Lower the blockiness to the estimate.
4) Ensure that all DCT coefficients quantize to those of the compressed image.
5) If the within-block roughness of the image has increased, restore it to its original value.
6) Ensure that all DCT coefficients quantize to those of the compressed image.
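Steps 4) and 6), the quantization constraint, amount to clipping each coefficient back into the interval [(S - 1/2)Q, (S + 1/2)Q] that rounds to the transmitted index. A minimal numpy sketch of that projection (the function name and array values are illustrative, not from the text):

```python
import numpy as np

def project_to_quant_constraint(I, S, Q):
    """Clip DCT coefficients I so that each stays inside its original
    quantization interval [(S - 0.5) * Q, (S + 0.5) * Q], i.e. so that
    Round(I / Q) still yields the transmitted indices S."""
    return np.clip(I, (S - 0.5) * Q, (S + 0.5) * Q)

# A smoothing step has moved some coefficients outside their intervals;
# the projection pulls them back to the nearest interval boundary.
Q = np.full((2, 2), 40.0)
S = np.array([[3.0, 0.0], [-1.0, 2.0]])
I = S * Q + np.array([[25.0, -25.0], [5.0, 0.0]])
I_proj = project_to_quant_constraint(I, S, Q)
```

The same projection serves both steps, so in practice it is applied once after each smoothing pass.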
In the rest of the paper, we make this description more precise, show some quantitative results for this image, and relate our work to that of others. We conclude that if the quantization is strong enough to generate significant block artifacts, our method gives moderate de-blocking and a small decrease in the RMS image error.
The DCT coefficients of an N x N block of image pixels i(x,y) are

I(u,v) = SUMx(SUMy(i(x,y)*c(x,u)*c(y,v))), x,y,u,v = 0,...,N-1, (1a)

where

c(x,u) = alpha(u)*cos(pi*u*(2*x+1)/(2*N)), (1b)

alpha(u) = sqrt(1/N), u = 0,
alpha(u) = sqrt(2/N), u > 0. (1c)

The coefficients are quantized to integers S(u,v) by dividing by the quantization step sizes Q(u,v) and rounding:

S(u,v) = Round(I(u,v)/Q(u,v)). (2)
The compressed image contains both the S(u,v) for all the blocks and the Q(u,v). To retrieve the image, first the DCT coefficients are restored (with their quantization error) by
I'(u,v) = S(u,v)*Q(u,v), (3)
where Q(u,v) denotes the quantizer step size used for coefficient I(u,v). The blocks of image pixels are reconstructed by the inverse transform:
i'(x,y) = SUMu(SUMv(I'(u,v)*c(x,u)*c(y,v))), (4)
which for this normalization is the same as the forward transform. Our goal is to find better estimates of these coefficients.
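For this normalization, Eqs. 1-4 match scipy's type-II DCT with norm='ortho'. A sketch of the quantize/dequantize round trip for a single block (the block contents and quantization level are illustrative):

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)  # one 8x8 pixel block
Q = np.full((8, 8), 40.0)                                # constant quantization matrix

I = dctn(block, norm='ortho')                     # Eq. 1: forward DCT
S = np.round(I / Q)                               # Eq. 2: quantized indices
I_restored = S * Q                                # Eq. 3: coefficients with quantization error
block_restored = idctn(I_restored, norm='ortho')  # Eq. 4: inverse DCT
rms_error = np.sqrt(np.mean((block - block_restored) ** 2))
```

Because the transform is orthonormal, the RMS pixel error equals the RMS coefficient error, which is at most Q/2 per coefficient.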
The amplitude adjustment replaces each quantized index S(u,v) by the mean of its quantization interval predicted by an exponential model of the coefficient amplitudes with mean mu, measured in units of the quantization step:

S(u,v) - 0.5 + mu - exp(-1/mu)/(1-exp(-1/mu)), if S(u,v) > 0,
S(u,v) + 0.5 - mu + exp(-1/mu)/(1-exp(-1/mu)), if S(u,v) < 0,
S(u,v), if S(u,v) = 0. (5)
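Eq. 5 can be applied elementwise; the text does not prescribe how mu is obtained, so treating it as a given per-frequency parameter (e.g., fit from the observed coefficient amplitudes) is an assumption in this sketch:

```python
import numpy as np

def adjust_amplitudes(S, mu):
    """Eq. 5: replace each quantized index S by the centroid of its
    quantization interval under an exponential amplitude model with
    mean mu (in units of the quantization step). How mu is estimated
    is an assumption; it is not specified here. The result is still in
    units of the step, so multiply by Q(u,v) to recover coefficients."""
    correction = 0.5 - mu + np.exp(-1.0 / mu) / (1.0 - np.exp(-1.0 / mu))
    return np.where(S > 0, S - correction,
                    np.where(S < 0, S + correction, 0.0))

a = adjust_amplitudes(np.array([1.0, -1.0, 0.0]), 0.5)
```

The correction lies between 0 and 0.5, shrinking each nonzero index toward zero, and vanishes as mu grows large.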
The dotted line in Figure 4 shows the RMS improvement for the image in Figure 1 as a function of the level of quantization, which ranged from 5 to 100 in steps of 5; a constant quantization matrix was used. The example of Figure 2 has a quantization level of 40. For moderately high levels of quantization, the amplitude adjustment was less effective. Comparing the predicted means of the interval distribution with the actual means, we find that Eq. 5 overestimates the desired correction when the mean of |S(u,v)| is small. This is probably caused by the poor fit of the exponential near zero, where the actual distribution is flat.
E = SUM((i1-i2)*(i1-i2)), (6)

where i1 and i2 are the pixels on either side of a block boundary and the sum runs over all such adjacent pairs in the image. The block edge variance E is our measure of image blockiness.
We estimate the desired value of the edge variance by computing the same measure for the pixel pairs just inside the edge on either side and averaging the two. If this estimate is less than the edge variance, we attempt to reduce the edge variance to this value. The reduction is done along the gradient of the edge variance and may not be completely achieved if the minimum of the edge variance in that direction is still above the next-to-edge estimate.
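The edge-variance measure and its next-to-edge estimate can be sketched as follows, assuming an image whose dimensions are multiples of the block size B (the function name is ours):

```python
import numpy as np

def blockiness(img, B=8):
    """Return (E, E_est): the Eq. 6 edge variance summed over all
    vertical and horizontal block boundaries, and the next-to-edge
    estimate obtained from the pixel pairs one step inside the
    boundary on either side, averaged over the two sides."""
    H, W = img.shape
    vb = np.arange(B, W, B)   # first column of each right-hand block
    hb = np.arange(B, H, B)   # first row of each lower block
    # Eq. 6: squared differences of the pixel pairs straddling each boundary.
    E = (np.sum((img[:, vb - 1] - img[:, vb]) ** 2)
         + np.sum((img[hb - 1, :] - img[hb, :]) ** 2))
    # Same measure taken one pixel inside the boundary, on each side.
    inner1 = (np.sum((img[:, vb - 2] - img[:, vb - 1]) ** 2)
              + np.sum((img[hb - 2, :] - img[hb - 1, :]) ** 2))
    inner2 = (np.sum((img[:, vb] - img[:, vb + 1]) ** 2)
              + np.sum((img[hb, :] - img[hb + 1, :]) ** 2))
    E_est = 0.5 * (inner1 + inner2)
    return E, E_est

# Piecewise-constant blocks: all edge variance, no next-to-edge variance.
blocky = np.kron(np.arange(16.0).reshape(4, 4), np.ones((8, 8)))
E, E_est = blockiness(blocky)
```

On a smooth ramp image the two measures agree, which is why the next-to-edge value is a reasonable estimate of the original image's blockiness.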
Adjusting the edge variance in this way only alters the edge pixels. The problem has been reduced at the boundary, but a new problem has been created inside the blocks. We attempt to reduce this problem by monitoring a measure of image roughness in the blocks.
R = SUMu,v((u*u+v*v)*I(u,v)*I(u,v)), (7)
summed over all blocks. By weighting each component I(u,v) by its spatial frequency sqrt(u*u+v*v), we obtain a measure closely related to the total edge variance inside the block. If this measure increases after reducing the edge variance, we attempt to return it to its original value by changing it along its gradient.
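A sketch of the roughness measure of Eq. 7, computed block by block with the orthonormal DCT (the explicit block loop is illustrative):

```python
import numpy as np
from scipy.fft import dctn

def roughness(img, B=8):
    """Eq. 7: sum over all B x B blocks of the DCT coefficient powers
    weighted by squared spatial frequency (u*u + v*v)."""
    u = np.arange(B)
    w = u[:, None] ** 2 + u[None, :] ** 2   # (u*u + v*v) weight matrix
    R = 0.0
    for y in range(0, img.shape[0], B):
        for x in range(0, img.shape[1], B):
            I = dctn(img[y:y + B, x:x + B], norm='ortho')
            R += np.sum(w * I * I)
    return R
```

The DC term (u = v = 0) carries zero weight, so a flat image has zero roughness; any within-block variation contributes through the weighted AC terms.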
Stevenson [Ref. 6,7] has analyzed this problem from the maximum a posteriori (MAP) point of view. The goal is to find the image that maximizes the probability of the image given the quantized image. With a non-Gaussian Markov random field model for the image distribution, the resulting solution minimizes a roughness function similar to ours, with the squaring operation replaced by a Huber function in the space domain. The quantization constraint is also enforced. The method appears to strongly reduce blocking for the Lena picture, JPEG compressed at 30 to 1, but no quantitative measures are reported. The Huber function is reported to be better than the squaring operation at allowing edges in the original image to persist through the smoothing.
Our methods and results are similar to the iterative projection method of Yang, Galatsanos, and Katsaggelos [Ref. 8]. They also use edge variance and the quantization constraint. They compute separate horizontal and vertical edge variances and force them to their correct values in the original image by a weighted averaging of edge pixels. They iterate these two constraints in conjunction with the quantization constraint and range constraints in both the space and DCT domains. Since the constraints are projections onto convex sets, iterating them is guaranteed to converge, because the original image is a solution. They report a 1 dB improvement in the RMS error of reconstruction and a strong apparent reduction in blockiness for the 256x256 Lena image when the PSNR of the original reconstruction was 27.9 dB. Our method differs from theirs mainly in the addition of the within-block smoothness constraint and in the estimation of the edge variance.
2. G. Wallace, "The JPEG still picture compression standard", Communications of the ACM, vol. 34, no. 4, pp. 31-44, 1991.
3. W. B. Pennebaker, J. L. Mitchell, JPEG Still Image Data Compression Standard, van Nostrand Reinhold, New York, 1993.
4. S. Wu, A. Gersho, "Enhanced video compression with standardized bit stream syntax", IEEE ICASSP Proceedings, vol. I, pp. 103-106, 1993.
5. S. Wu, A. Gersho, "Enhancement of transform coding by nonlinear interpolation", Visual Communications and Image Processing '91: Visual Communication vol. 1605, SPIE, Bellingham, WA, pp. 487-498, 1991.
6. R. L. Stevenson, "Reduction of coding artifacts in transform image coding", IEEE ICASSP Proceedings, vol. V, pp. 401-404, 1993.
7. T. P. O'Rourke, R. L. Stevenson, "Improved image decompression for reduced transform coding artifacts", In S. Rajala and R. Stevenson, eds., Image and Video Processing II, Conference Proceedings vol. 2182, SPIE, Bellingham, WA, 1994.
8. Y. Yang, N. Galatsanos, A. Katsaggelos, "Iterative projection algorithms for removing the blocking artifacts of block-DCT compressed images", IEEE ICASSP Proceedings, vol. V, pp. 405-408, 1993.