## Fractal Dimension

### What does "Dimension" mean?

The word "dimension" here refers to the same concept as when we say a line is a 1-dimensional object or a cube is a 3-dimensional object. That is, if we divide a line segment into N (= N^1) self-similar pieces, the ratio of the whole segment to each of those pieces (the magnification factor) is N. If we divide a cube into N^3 self-similar pieces, that ratio is again N. So the concept of dimension can be informally stated as follows:

"The dimension is simply the exponent of the number of self-similar pieces with magnification factor **N** into which the figure may be broken." [1], see also [2][3].
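As a standard illustration (not taken from the text above): the Koch curve breaks into 4 self-similar pieces, each of which must be magnified by a factor of 3 to reproduce the whole, so its dimension is

```latex
D = \frac{\log 4}{\log 3} \approx 1.26
```

which lies strictly between the dimension of a line (1) and of a plane (2), matching the intuition that the curve is "rougher" than a line but does not fill the plane.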

### Fractal Dimension

In fractal geometry, this idea is used to measure the complexity of fractal-like objects. Among several different approaches, box counting is the one of interest here. Box counting [4] was originally designed to quantify fractal scaling and to find repeating patterns; in image processing, however, the technique is employed to obtain a numeric parameter (i.e., a dimension) that describes the texture of an image.

### Minkowski–Bouligand Dimension

One of the approaches to computing the fractal dimension is box counting, introduced by Hermann Minkowski and Georges Bouligand [5].

The idea is to count the number N(ε) of boxes of side length ε needed to cover the entire border of a given shape. The dimension is then defined as

D = lim(ε→0) log N(ε) / log(1/ε)

where ε is the length of the box side and N(ε) is the number of boxes needed to cover the border line. Note that the smaller the boxes, the greater the number of such boxes.

Formula: *Similarity Dimension*: D = log(number of self-similar pieces) / log(magnification factor)

### Calculation of Fractal dimension

There are various measures related to fractal dimension, such as the *Similarity Dimension*, the *Box Counting Dimension*, and the *Correlation Dimension*. In this study we use the *Box Counting Dimension*, which gives an approximation of the *Similarity Dimension*. Since we are not interested in the absolute values of these parameters but in a relative quantity, either approach would be equally suitable.

### Procedure

**First**, we need to decide on the set of box sizes to use for the segmentation of the image.

**Second**, for each of the chosen box sizes, we lay a grid over the edge-detected image and count the number of boxes that have any intersection with the edges. This gives us a (box-size, count) pair.

**Third**, having these pairs calculated for all of the box sizes, we compute the slope of the regression line on the log-log plot of box size against count.
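The counting and regression steps can be sketched in Java roughly as follows. This is a minimal illustration, not the ImageJ `FractalBoxCounter` code; the class and helper names here are hypothetical:

```java
/** Sketch of box counting on a binary edge image (illustrative helper names). */
public class BoxCounting {

    /** Count grid boxes of the given side length that contain at least one edge pixel. */
    static int countBoxes(boolean[][] edges, int boxSize) {
        int h = edges.length, w = edges[0].length, count = 0;
        for (int by = 0; by < h; by += boxSize) {
            for (int bx = 0; bx < w; bx += boxSize) {
                boxScan:
                for (int y = by; y < Math.min(by + boxSize, h); y++) {
                    for (int x = bx; x < Math.min(bx + boxSize, w); x++) {
                        if (edges[y][x]) { count++; break boxScan; }
                    }
                }
            }
        }
        return count;
    }

    /** Slope of the least-squares regression line of log(count) against log(boxSize). */
    static double logLogSlope(int[] boxSizes, int[] counts) {
        int n = boxSizes.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            double x = Math.log(boxSizes[i]), y = Math.log(counts[i]);
            sx += x; sy += y; sxx += x * x; sxy += x * y;
        }
        return (n * sxy - sx * sy) / (n * sxx - sx * sx);
    }

    /** Box-counting dimension: the negated slope of the log-log regression line. */
    static double fractalDimension(boolean[][] edges, int[] boxSizes) {
        int[] counts = new int[boxSizes.length];
        for (int i = 0; i < boxSizes.length; i++) counts[i] = countBoxes(edges, boxSizes[i]);
        return -logLogSlope(boxSizes, counts);
    }
}
```

As a sanity check, a perfectly straight line should come out with a dimension of 1: a 64-pixel horizontal line intersects 64/s boxes of side s, so the log-log slope is exactly -1.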

### Usage in Our Research

We would like to use the *fractal dimension* to quantify the complexity of different regions of the Sun, as a texture parameter. We can do this by finding an appropriate setting for an edge-detection filter to distinguish the edges representing the regions/events of interest, and then quantifying those regions in terms of the complexity of their edges.

### Calculating the Box-Counting Dimension in ImageJ

ImageJ is an open-source image processing application that I use here to show the idea of calculating the *Box-Counting Dimension*. A small region of the original image, together with the result of applying the edge-detection filter to it, is shown first on the right.

The figure on the right is a screenshot of the regression line on the plot of log(box-size) against log(count). The box sizes used in each iteration are listed in the table underneath the plot (underlined in green). The last column (D = 2.000) shows the slope of the regression line, in other words the box-counting dimension.

### In Our Study

In our study, *Fractal Dimension* is one of the 10 parameters that we want to calculate, with the hope of being able to downsize the actual images (4k-by-4k) without losing a significant amount of detail. As with the other parameters, we first process the input image in a patch-by-patch fashion. The patch size is set to 64-by-64 pixels. We then compute the parameter for each patch independently, so that at the end we have a small image representing the fractal dimension of the original image.

To compute the *Fractal Dimension* parameter, the general idea of the algorithm is as follows:

```
Divide the image into small patches
For each patch:
    Apply the Canny edge-detection method
        (this gives a binary black & white edge-detected matrix)
    Apply a box-counting method on the edge-detected patch for each of the box sizes [5][7]
    Find the regression line of the returned pairs: x = log(box-size), y = log(count)
    Return -1 * (regression slope)
        (as the fractal dimension of this patch)
Return the matrix of fractal dimensions
```
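The patch-by-patch reduction can be sketched as follows. This is a minimal illustration with hypothetical helper names; the per-patch measure below is a simple edge-pixel fraction standing in for the box-counting dimension that the real pipeline computes:

```java
/** Sketch: reduce a binary edge image to a per-patch parameter map. */
public class PatchMap {

    /** Placeholder per-patch measure (fraction of "on" pixels); the real
     *  pipeline would compute the box-counting dimension here instead. */
    static double patchMeasure(boolean[][] img, int y0, int x0, int size) {
        int on = 0;
        for (int y = y0; y < y0 + size; y++)
            for (int x = x0; x < x0 + size; x++)
                if (img[y][x]) on++;
        return on / (double) (size * size);
    }

    /** Process the image patch by patch; e.g. a 4096x4096 input with
     *  64x64 patches yields a 64x64 parameter map. */
    static double[][] patchMap(boolean[][] img, int patchSize) {
        int rows = img.length / patchSize, cols = img[0].length / patchSize;
        double[][] map = new double[rows][cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                map[r][c] = patchMeasure(img, r * patchSize, c * patchSize, patchSize);
        return map;
    }
}
```

Each cell of the returned map is computed independently of its neighbors, which is what allows the 4k-by-4k image to be summarized by one value per 64-by-64 patch.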

### Test Case 1:

On the right you see the twenty different sinusoidal waves used to simulate different levels of texture complexity. Next to the animation, the calculated Fractal Dimension is shown.

The fact that the Fractal Dimension increases monotonically with the complexity of the line supports the correctness of the implementation and the effectiveness of the parameter.

```
[01] 0.9654167050541035
[02] 0.9660003044052958
[03] 0.9619574391027954
[04] 0.9834414093654513
[05] 0.9926197140997369
[06] 1.011016523174157
[07] 1.0380998228150276
[08] 1.0771789794921698
[09] 1.1128948751450953
[10] 1.161148024182002
[11] 1.1966138493409026
[12] 1.229429442177847
[13] 1.2764049468478103
[14] 1.3022330839831737
[15] 1.3424669706414432
[16] 1.3680824658079276
[17] 1.4096496725967675
[18] 1.4297543962103876
[19] 1.45847110405711
[20] 1.4870892337776815
```

### Test Case 2:

This is another test, checking the sensitivity of the parameter to noise. It should not be surprising that, with a gradual increase in the noise, the Fractal Dimension increases almost monotonically as well. This is because the parameter, in a way, quantifies the noisiness of a line, which we call the line's complexity.

```
[01] 0.983810311776889
[02] 0.989551420300899
[03] 1.0046541742640487
[04] 1.018744795775404
[05] 1.040775752844653
[06] 1.070079228813243
[07] 1.090585105683699
[08] 1.110036416545378
[09] 1.1075601900298722
[10] 1.142063879925015
[11] 1.1801502732346345
[12] 1.1510731470982436
[13] 1.202147059263496
[14] 1.1991629749352715
[15] 1.2179586133273645
[16] 1.2198307803776447
[17] 1.2229554391457433
[18] 1.222288610614146
[19] 1.221580593391697
[20] 1.2968940230547106
```

The changes in the Fractal Dimension of a sine wave are shown on the left. In (a), the parameter is calculated on a sequence of waves with an increasing level of noise. In (b), instead of adding noise, we increment the frequency of the signal. In both cases, the dimension increases (almost) monotonically.

### Finding a reasonable range for Low and High thresholds

The Fractal Dimension parameter is calculated based on the edges extracted from the gradient of the given image. The edge-detection algorithm we use here (Canny) utilizes hysteresis thresholding (as opposed to standard, single-value thresholding).

"Hysteresis [9] basically consists of determining a **high threshold** that allows a group of pixels to be classified as edge points without using connectivity information among them. A **low threshold** then determines which group of pixels will not be edge points and permits only those points that increment the connectivity of the previously determined edge points to be aggregated as edge points." [9]

The two thresholds mentioned above (high and low) are applied to the magnitudes of the gradient vectors calculated for each pixel. Therefore, to find a reasonable range for these two thresholds, we need to study the distribution of the gradient-vector magnitudes.
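One way to inspect this distribution is sketched below. This is an illustrative helper (names are mine), assuming central differences for the gradient, as in the numerical `gradient` of [6], and a simple nearest-rank quantile:

```java
import java.util.Arrays;

/** Sketch: distribution of gradient-vector magnitudes of a grayscale image. */
public class GradientQuantiles {

    /** Gradient magnitude at interior pixels via central differences. */
    static double[] gradientMagnitudes(double[][] img) {
        int h = img.length, w = img[0].length, k = 0;
        double[] mags = new double[(h - 2) * (w - 2)];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                double gx = (img[y][x + 1] - img[y][x - 1]) / 2.0;
                double gy = (img[y + 1][x] - img[y - 1][x]) / 2.0;
                mags[k++] = Math.hypot(gx, gy);
            }
        }
        return mags;
    }

    /** Empirical quantile (nearest-rank) of the magnitude distribution. */
    static double quantile(double[] values, double q) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        int idx = (int) Math.ceil(q * sorted.length) - 1;
        return sorted[Math.max(0, idx)];
    }
}
```

Evaluating `quantile` at 0.90 through 1.00 reproduces the kind of tables listed below, from which candidate low and high thresholds can be read off.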

Below, the distributions for a JP2, a FITS, and a clipped FITS image are plotted. In addition to the plots, some important quantiles and the chosen low and high thresholds are given.

### JP2

Some quantiles of the magnitudes on a sample JP2 image are as follows:

```
# 90%  91%  92%  93%  94%  95%  96%  97%  98%  99%  100%
#  34   40   48   59   72   87  104  125  155  210  1973
```

**lt**: {1, 2, 3, 4, 5, 6, 7} X 100

**ht**: {1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5, 7, 7.5} X 100

### FITS

Some quantiles of the magnitudes on a sample FITS image are as follows:

```
# 90%  91%  92%  93%  94%  95%  96%  97%   98%   99%   100%
# 114  206  326  447  572  706  864  1065  1366  1988  99893
```

**lt**: {20, 30, 35, 40, 45, 50, 55, 60, 65, 80, 100, 120, 150, 170, 200} X 100

**ht**: {23, 26, 29, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 100, 150, 200, 300, 400} X 100

### Clipped FITS

Some quantiles of the magnitudes on a sample clipped FITS image are as follows:

```
# 90%  91%  92%  93%  94%  95%  96%  97%  98%  99%  100%
#   5    6    8   10   12   14   17   21   26   35   471
```

**lt**: {.1, .2, .3, .4, .5, .6, .7, .8, .9, 1} X 100

**ht**: {.2, .3, .4, .5, .6, .7, .8, .9, 1, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7} X 100

My box-counting method follows the `count` method of the `FractalBoxCounter` class in the ImageJ API > Here (search for `FractalBoxCounter`).

A great implementation of `CannyEdgeDetector` by *Tom Gibara* > Here

### Canny Edge Detection

Below, several outputs of the Canny Edge Detection filter (implemented by Tom Gibara) are shown with different threshold ranges. As the usage example shows, the precision of the output depends on the low and high threshold parameters given to the algorithm.

### Usage:

```java
CannyEdgeDetector detector = new CannyEdgeDetector();
detector.setLowThreshold(0.5f);
detector.setHighThreshold(1.0f);
detector.setSourceImage(frame);
detector.process();
BufferedImage edges = detector.getEdgesImage();
```

The Original Image (4096 X 4096)

```java
// Threshold pairs used for the four outputs shown:
detector.setLowThreshold(0.1f);
detector.setHighThreshold(0.3f);

detector.setLowThreshold(0.5f);
detector.setHighThreshold(0.6f);

detector.setLowThreshold(0.8f);
detector.setHighThreshold(1.0f);

detector.setLowThreshold(0.9f);
detector.setHighThreshold(1.0f);
```

[1] Great class notes for "*The Dynamical Systems and Technology Project at Boston University*". > Here

[2] Wikipedia: Fractal Dimension > Role of scaling > Here

[3] Fractal Explorer: Chapter 4 - Calculating: Fractal Dimension > Here

[4] Wikipedia: Box Counting > Here

[5] Wikipedia: Minkowski–Bouligand Dimension > Here

[6] MathWorks Documentation > Gradient > Numerical gradient > Here

[7] FractalBoxCounter class > count(...) method > Here

[8] Automated detection of proliferative retinopathy in clinical practice, Audrey Karperien, et al. 2008 > Here

[9] On candidates selection for hysteresis thresholds in edge detection, R.Medina-Carnicer, et al. > Here