Kolmogorov compression complexity may differentiate different schools of Orthodox iconography

Entropy and nine fractal parameters were used to explore the 1200 Byzantine icon images, together with artificially generated fractal surface (FS) images, in order to validate our methodology.

Generation of test images

To validate the proposed approach, we generated 5 × 11 sets of fractal surface images using an inverse FFT algorithm, with fractal dimensions between 2.0 and 3.0. The grayscale values of a digital image represent a surface, a two-dimensional topological object that more or less fills the third dimension. The fractal dimensions of grayscale images therefore lie between two (minimum complexity) and three (maximum complexity), and the test images were generated across this range. Note that a painting on canvas is made with pigments and a brush, which often produces a noticeable relief; this relief, combined with the geometric image content, likewise yields a fractal dimension between 2 and 3.

Figure 10

8-bit grayscale FS images generated with the inverse FFT algorithm, with fractal dimensions between 2.0 and 3.0.
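The article does not include the generator's code, but inverse-FFT (spectral) synthesis of fractal surfaces is standard. Below is a minimal Python/NumPy sketch, assuming the usual fractional-Brownian-surface relation between the spectral exponent and the fractal dimension, β = 8 − 2·FD; the function name and the rescaling to 8-bit grayscale are our own choices, not the authors'.

```python
import numpy as np

def fractal_surface_fft(n, fd, seed=None):
    """Generate an n x n fractal surface by inverse-FFT spectral synthesis.
    The amplitude spectrum is shaped as f**(-beta/2) with beta = 8 - 2*fd,
    the standard relation for fractional Brownian surfaces (2.0 <= fd <= 3.0)."""
    rng = np.random.default_rng(seed)
    beta = 8.0 - 2.0 * fd
    # Radial frequency grid; avoid division by zero at the DC component.
    fy = np.fft.fftfreq(n)[:, None]
    fx = np.fft.fftfreq(n)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0
    # Random phases with power-law amplitudes; taking the real part of the
    # inverse transform is a common shortcut that preserves the spectral slope.
    phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))
    spectrum = f ** (-beta / 2.0) * np.exp(1j * phase)
    spectrum[0, 0] = 0.0  # zero-mean surface
    surface = np.fft.ifft2(spectrum).real
    # Rescale to an 8-bit grayscale image.
    surface -= surface.min()
    return (255.0 * surface / surface.max()).astype(np.uint8)

# e.g. one 1000 x 1000 test image with FD = 2.5
img = fractal_surface_fft(1000, fd=2.5, seed=0)
```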

Since the icons to be analyzed had resolutions between 281 × 1000 and 1000 × 1000 pixels, two images were generated with the same fractal dimension of 2.5 at resolutions of 281 × 1000 and 1000 × 1000 pixels (Fig. 11).

Figure 11

8-bit grayscale FS images with a fractal dimension of 2.5: (a) 281 × 1000 pixels and (b) 1000 × 1000 pixels.

Subsequently, these two images were modified by adding Gaussian noise with a standard deviation (SD) of 25, 50, 75 or 100, or by adding salt-and-pepper noise, to test whether the fractal and entropic analyses are robust to noise (Fig. 12). Gaussian and salt-and-pepper noise were chosen because they are the most common types. Multiplicative noise is not applicable to painted images, as it refers to an unwanted random signal that is multiplied into the relevant signal during capture, transmission or other processing, e.g. in radar imagery; in other words, it depends on the state of the system. With additive noise, by contrast, each pixel of the noisy image is the sum of the true pixel value and a random, Gaussian-distributed noise value. A more detailed study of the different types of noise would be interesting but is well beyond the scope of this article. Such a test was necessary because some icons were degraded and had noise in their structure.

Figure 12

8-bit grayscale FS images with FD = 2.5, after adding Gaussian noise with various SDs and salt-and-pepper noise, at 1000 × 1000 pixel resolution.
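The paper does not specify how the noise was applied; the following Python sketch, reusing the `img` surface from the previous snippet, shows one straightforward way to produce such test variants. The salt-and-pepper fraction of 5% is our illustrative assumption, as the article does not state the amount used.

```python
import numpy as np

def add_gaussian_noise(img, sd, seed=None):
    """Add zero-mean Gaussian noise with standard deviation sd (gray levels)."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float64) + rng.normal(0.0, sd, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_salt_and_pepper(img, amount=0.05, seed=None):
    """Flip a fraction `amount` of pixels to 0 (pepper) or 255 (salt), half each.
    The 5% default is an assumption; the article does not state the amount."""
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    u = rng.random(img.shape)
    noisy[u < amount / 2] = 0
    noisy[u > 1 - amount / 2] = 255
    return noisy

# Noise levels used in the test (SD in 8-bit gray levels).
variants = {f"gauss_sd{sd}": add_gaussian_noise(img, sd, seed=sd)
            for sd in (25, 50, 75, 100)}
variants["salt_pepper"] = add_salt_and_pepper(img, seed=0)
```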

To test KC’s ability to quantify patterns in 16-bit grayscale or RGB color images, two images matching the resolutions of the smallest and largest icons (281 × 1000 and 1000 × 1000 pixels, respectively) were generated, using the inverse FFT and Midpoint Displacement (MD) algorithms (Fig. 13).

Figure 13

8-bit, 16-bit, and RGB FS images with a fractal dimension of 2.5. Top panel: Inverse FFT algorithm. Bottom panel: MD algorithm. Resolutions: 281 × 1000 pixels and 1000 × 1000 pixels.
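The article does not detail its MD implementation either; a common variant is the diamond-square algorithm, sketched below under the standard assumption that the displacement amplitude decays by 2^−H per level with H = 3 − FD. It natively produces square surfaces of side 2^k + 1, which would then be cropped to the target resolutions (a 16-bit version would simply rescale to 65535); all names here are ours.

```python
import numpy as np

def fractal_surface_md(k, fd, seed=None):
    """Midpoint displacement (diamond-square) surface of side 2**k + 1,
    with per-level roughness decay 2**(-H), assuming H = 3 - fd."""
    rng = np.random.default_rng(seed)
    h = 3.0 - fd
    n = 2 ** k + 1
    z = np.zeros((n, n))
    z[0, 0], z[0, -1], z[-1, 0], z[-1, -1] = rng.normal(0.0, 1.0, 4)
    step, scale = n - 1, 1.0
    while step > 1:
        half = step // 2
        # Diamond step: each square's center = corner average + noise.
        for y in range(half, n, step):
            for x in range(half, n, step):
                avg = (z[y - half, x - half] + z[y - half, x + half] +
                       z[y + half, x - half] + z[y + half, x + half]) / 4.0
                z[y, x] = avg + rng.normal(0.0, scale)
        # Square step: edge midpoints = average of in-bounds neighbors + noise.
        for y in range(0, n, half):
            for x in range((y + half) % step, n, step):
                nbrs = [z[y + dy, x + dx]
                        for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half))
                        if 0 <= y + dy < n and 0 <= x + dx < n]
                z[y, x] = np.mean(nbrs) + rng.normal(0.0, scale)
        step, scale = half, scale * 2.0 ** (-h)
    z -= z.min()
    return (255.0 * z / z.max()).astype(np.uint8)

# e.g. a 1025 x 1025 surface with FD = 2.5, croppable to 1000 x 1000
img_md = fractal_surface_md(10, fd=2.5, seed=0)
```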

Image acquisition and preprocessing

The icons were photographed in RAW format with a digital camera (Nikon D60, 18–55 VR lens) mounted on a tripod (ZOMEI 55″ Compact Light Weight Travel Portable Folding SLR Camera Tripod). Most of the icons were photographed from the originals; a small number were taken from the electronic archives that certain museums, collections or specialized albums make available on the Internet. Details of the icon image sources are given in the supplementary material. RGB color images were converted to 8-bit grayscale using the open-source software ImageJ 1.53 [32], and the resulting images were normalized using the open-source software IQM 3.5 (Histogram Modification—Normalization, with a low and high shift of 0.00%) [19]. The conversion to 8-bit grayscale was necessary in order to compare the proposed approach with established fractal and non-fractal image texture analyses that rely on 8-bit images.
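The authors performed these steps in ImageJ and IQM; the Python sketch below only approximates them (our function names, using Pillow and NumPy). Note that Pillow's 'L' conversion uses ITU-R 601 luminance weights, whereas ImageJ's default grayscale conversion is an unweighted channel mean, so results may differ slightly.

```python
import numpy as np
from PIL import Image

def to_gray8(path):
    """Load an image and convert RGB to 8-bit grayscale.
    Pillow's 'L' mode uses ITU-R 601 weights (0.299 R + 0.587 G + 0.114 B);
    ImageJ's default is the unweighted mean, so values may differ slightly."""
    return np.asarray(Image.open(path).convert("L"))

def normalize_histogram(img, low_shift=0.0, high_shift=0.0):
    """Linearly stretch the histogram to the full [0, 255] range.
    With a 0.00% low/high shift (the IQM setting used), nothing is clipped."""
    lo = np.percentile(img, low_shift)
    hi = np.percentile(img, 100.0 - high_shift)
    out = (img.astype(np.float64) - lo) / max(hi - lo, 1e-12) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)
```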

Kolmogorov complexity

Kolmogorov complexity (KC) is part of algorithmic information theory and is defined as the length, in bytes, of the shortest program that produces a given result or object [20]. This definition is a theoretical concept that cannot be computed exactly: it is not possible to decide whether a given program is really the shortest one, so KC must be estimated, especially for digital images. Writing a program to generate the object depicted in a digital image might be feasible in some cases, but in general it is not. A reasonable approach, however, is to take the amount of memory needed for a losslessly compressed image as an estimate of that image's KC [33]. It should be noted that compression is not a new idea and is closely related to entropy: the larger the Shannon entropy of a signal, say in bits, the more random it appears. KC, in contrast, is an algorithmic complexity, meant to quantify algorithmic randomness: an object that cannot be compressed below its uncompressed length is algorithmically random [34].
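For comparison, the first-order Shannon entropy of an image's gray-level histogram can be computed in a few lines; this minimal sketch (ours, not the authors' implementation) expresses it in bits per pixel.

```python
import numpy as np

def shannon_entropy_bits(img):
    """First-order Shannon entropy (bits/pixel) of an 8-bit image's
    gray-level histogram: H = -sum(p * log2(p))."""
    counts = np.bincount(img.ravel(), minlength=256)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())
```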

Lossless compression algorithms are well researched and optimized for digital images, for example lossless PNG image compression or ZIP compression of any file type. We opted to use the PNG algorithm in our analysis because the details of the algorithm are well known and the corresponding implementations are fast and freely available. We note that Zenil [34] presented the calculation of Kolmogorov complexity with the coding theorem method (CTM) and the block decomposition method (BDM). Both offer the advantage that KC can be estimated more accurately, but the technical effort is significantly higher. For this empirical study, estimation by PNG compression proved sufficient.

The 1200 icons, which do not share a common shape, produced digital images at different resolutions: the smallest image was 281 × 1000 pixels and the largest 1000 × 1000 pixels. Unfortunately, the resolution affects the Kolmogorov complexity: at equal complexity, low-resolution icons have a lower KC value than high-resolution icons. Bringing the images to the same resolution could only be done by resizing an image lengthwise or widthwise, or by cropping; this would distort the image and produce an even greater bias in the quantification of KC, eventually making it useless for pattern classification. This bias was instead removed by normalization, i.e. taking the ratio of KC to the image size in MB, according to Eq. (1).

$$KC=\frac{{KC}_X}{S}$$

(1)

where \(KC\) is the normalized Kolmogorov complexity, \({KC}_X\) is the unnormalized Kolmogorov complexity (in MB) and \(S\) is the image size (in MB).
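In practice, Eq. (1) amounts to dividing the losslessly compressed size by the raw image size. A minimal Python sketch using Pillow, assuming, as Eq. (1) suggests, that \(S\) is the uncompressed size of the pixel data:

```python
import io
import numpy as np
from PIL import Image

def normalized_kc(img):
    """Estimate normalized Kolmogorov complexity per Eq. (1):
    PNG-compressed size divided by the raw (uncompressed) image size."""
    buf = io.BytesIO()
    Image.fromarray(img).save(buf, format="PNG", optimize=True)
    kc_x = buf.getbuffer().nbytes   # compressed size in bytes
    s = img.size * img.itemsize     # raw pixel-data size in bytes
    return kc_x / s

# A uniform image compresses to nearly 0; a random one approaches 1
# (PNG header overhead can push the ratio slightly past these bounds).
flat = np.zeros((1000, 1000), dtype=np.uint8)
rand = np.random.default_rng(0).integers(0, 256, (1000, 1000), dtype=np.uint8)
print(normalized_kc(flat), normalized_kc(rand))
```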

Without normalization, KC would show very large differences between resolutions (Fig. 1a,b). Normalization removes this obstacle. Resolution differences remain, but the analyzed icons do not differ much in resolution. Thus, as can be seen in Fig. 2 of the manuscript, KC takes very close values for images at different resolutions (281 × 1000 versus 1000 × 1000 pixels), which leads us to consider that, for our icon sets, KC may be suitable for quantifying visual complexity.

KC normalization allowed complexity to be quantified with values between 0 and 1, regardless of image size. The value 0 is recorded when the image is maximally simple, i.e. all pixels have the same gray value, and 1 is reached when every pixel differs in gray value from its 8 neighbors. As the value increases, the pixel inhomogeneity also increases.

The relevance of the method was tested by comparing its results with those obtained by nine well-known fractal and non-fractal algorithms, computed with IQM 3.5: Logical Depth, Differential Box-Counting, Relative Differential Box-Counting, Pyramid Dimension, Minkowski Dimension, FFT Dimension, Higuchi Dimension 1D, Higuchi Dimension 2D and Entropy [35,36,37,38,39,40]. KC turned out to be the most relevant and the fastest of the algorithms used. The comparative analysis is presented in the supplementary material.
