Zero Level Set Methods in Image Segmentation

The evolution of a curve can be computed directly from an initial curve, but the implementation becomes very complicated when topological changes occur. From planar geometry, if the curve is parametrized by its arc length, we can find the normal velocity of the curve, and its magnitude is given by the curvature of the curve.

Moreover, computing directly on the curve is much more complex than computing on a pixel function that depends on the spatial variables x, y and time t. A little calculus shows that the change of this pixel function in time can be expressed in terms of the image gradient, which allows an easy curve evolution via the zero level set of the pixel function. The pixel function moves up or down according to the velocity associated with it, and the evolving curve is recovered at each time as the zero level set of the moving pixel function.
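
A minimal mathematical sketch of this idea (the notation below is an assumption, since the text does not fix symbols): let phi(x, y, t) be the pixel function whose zero level set is the curve, and let F be the normal speed, for example the curvature kappa. The curve evolution then becomes

    \frac{\partial \phi}{\partial t} = F\,|\nabla \phi|,
    \qquad
    F = \kappa = \operatorname{div}\!\left(\frac{\nabla \phi}{|\nabla \phi|}\right),

and the segmenting curve at time t is recovered as C(t) = {(x, y) : phi(x, y, t) = 0}, so merging and splitting of contours require no special handling.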

In the third figure, the black contour marks the segmented boundary.

Curve Evolution and Calculus of Variations in Image Processing

Because better numerical analysis gives higher accuracy in discrete image analysis, researchers in image processing develop algorithms in continuous mathematics and then implement them in a discrete domain such as the digital computer.

Partial differential equations (PDEs) have most recently been used in image analysis for processes such as:
  1. Image enhancement and deblurring
  2. Edge detection
  3. Image restoration
  4. Image segmentation
  5. Image inpainting
  6. Motion analysis
Curve evolution, together with the calculus of variations, is the tool used in image processing for image inpainting and image segmentation, while the heat equation, one of the partial differential equations, is used for image deblurring.

Curve evolution studies the motion of a curve in a plane or on a surface. In image segmentation, the normal motion of each point of the curve is of interest rather than its tangential motion.

The calculus of variations studies extremals of functionals defined on curves. In image segmentation, the curve of minimal (weighted) length is of interest, and finding it is a problem in the calculus of variations.
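
A brief sketch in the spirit of geodesic active contour formulations (an assumption, since the text does not specify one; here I denotes the image and g a decreasing edge-indicator function of |\nabla I|): the weighted curve length to be minimized and its gradient-descent evolution are

    E(C) = \int_0^1 g\big(|\nabla I(C(p))|\big)\,|C'(p)|\,dp,
    \qquad
    \frac{\partial C}{\partial t} = \big(g\,\kappa - \langle \nabla g, N \rangle\big)\,N,

where kappa is the curvature and N the unit normal, so the descent moves each point only along its normal, consistent with the remark above.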

Principal Component Analysis (PCA) in Multivariate Data Analysis

It is difficult to visualize a data set in space if it has more than three variables. If the number of variables in the data set is very large, a great deal of computation and a large number of data plots are needed.

In the analysis of weather data, we can use PCA to find the variables that play the most significant role in weather change and the variables that play the least. For example, which parameter plays the major role in precipitation: humidity, temperature, or something else? PCA could be the tool for such an analysis.

Another application of PCA could be the evaluation of examination scores, since PCA considers not only the value around which the scores are centered but also how the scores are scattered, whereas classical methods of evaluation consider only the value around which the data are centered.

More recently, search engines have been using PCA for clustering candidate results, and dimension reduction is another use of PCA in data compression.
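
As a rough sketch of how PCA is computed (the data matrix name, its size, and the number of retained components are illustrative assumptions):

    import numpy as np

    def pca(samples, n_components=2):
        # samples: (n_observations, n_variables) data matrix
        centered = samples - samples.mean(axis=0)        # remove the mean of each variable
        cov = np.cov(centered, rowvar=False)             # covariance matrix of the variables
        eigvals, eigvecs = np.linalg.eigh(cov)           # eigendecomposition (symmetric matrix)
        order = np.argsort(eigvals)[::-1]                # sort axes by decreasing variance
        components = eigvecs[:, order[:n_components]]
        scores = centered @ components                   # project the data onto the principal axes
        explained = eigvals[order] / eigvals.sum()       # fraction of total variance per axis
        return scores, components, explained

    # Example: 100 synthetic observations of 5 weather-like variables
    data = np.random.rand(100, 5)
    scores, axes, explained = pca(data)

The eigenvalues show how much of the variability each principal axis carries, which is what identifies the most and least significant variables mentioned above.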

Continuous Mathematics in Discrete Image Processing

Because more experts in continuous mathematics have recently become interested in discrete image processing, images are treated as continuous objects even though they are discrete in space, in gray level and, in the case of video, in time. Continuous algorithms are developed using continuous mathematics, and numerical analysis is used to implement them for the analysis of discrete images in a discrete domain such as the computer.

Because of its higher accuracy, because computers are becoming more and more powerful, and because it is a relatively new area within image processing, more people are becoming engaged in image processing with continuous mathematics.

Numerical analysis is the area that brings continuous algorithms to the discrete domain by iterating discrete approximations of infinitesimal operations for the analysis of discrete images.

Partial differential equations (PDEs), one of the areas of continuous mathematics, are used in image processing for object segmentation, denoising, edge detection, image inpainting and motion analysis.
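
A small sketch of this discretization step, taking the heat equation u_t = u_xx + u_yy as the PDE and an explicit finite-difference scheme (the image name, step count, and time step are illustrative assumptions):

    import numpy as np

    def heat_smooth(image, steps=20, dt=0.2):
        # Explicit (forward Euler) diffusion; dt <= 0.25 keeps the scheme stable.
        u = image.astype(float).copy()
        for _ in range(steps):
            lap = (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0) +
                   np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4.0 * u)  # 5-point Laplacian
            u += dt * lap                     # one discrete time step of the diffusion
        return u                              # np.roll implies periodic boundaries (sketch only)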

Noise Modelling in Image Processing

The design of a filter depends on the noise present in the image. The most common noise types are Gaussian noise, exponential noise, uniform noise, and salt-and-pepper noise. Different types of filters are used to remove the noise depending on its type. Noise is generally introduced by motion and by the sensors.

Generally we talk about additive noise and only rarely about multiplicative noise, because multiplicative noise can be transformed into additive noise in the logarithmic domain. For this reason, multiplicative noise is not discussed much in most of the literature.
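
A short sketch of these noise models (the image is assumed to be a float array scaled to [0, 1]; all parameters are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    def add_gaussian(img, sigma=0.05):
        return img + rng.normal(0.0, sigma, img.shape)       # additive Gaussian noise

    def add_salt_pepper(img, density=0.05):
        out = img.copy()
        mask = rng.random(img.shape)
        out[mask < density / 2] = 0.0                         # pepper (black) pixels
        out[mask > 1.0 - density / 2] = 1.0                   # salt (white) pixels
        return out

    def to_additive(multiplicative_noisy):
        # multiplicative noise g = f * n becomes additive in the log domain:
        # log(g) = log(f) + log(n)
        return np.log(multiplicative_noisy + 1e-8)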

Mathematical Operations on Images

Averaging/Mean

Averaging is the method of replacing each pixel value by one that incorporates the effect of its neighbours. The mean value is used for the replacement because the mean minimizes the mean-squared error (MSE). The flow of pixel values under averaging satisfies the diffusion (heat) equation, so the blurring of an image is due to the diffusion of gray values according to heat flow.

Non-local averaging is done to reduce noise: averaging over similar windows or blocks of the image lowers the noise standard deviation roughly in proportion to the square root of the number of similar blocks averaged. In general, averaging introduces blurring.

The median filter, on the other hand, removes impulse noise almost completely by replacing a corrupted pixel with the median of its neighbourhood, which is one of the actual values of the pixels around it.
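
A minimal sketch of mean and median filtering with SciPy's ndimage module (the 3x3 window and the random test image are illustrative choices):

    import numpy as np
    from scipy import ndimage

    img = np.random.rand(64, 64)                             # stand-in for a noisy image
    mean_filtered = ndimage.uniform_filter(img, size=3)      # local average: reduces noise, adds blur
    median_filtered = ndimage.median_filter(img, size=3)     # replaces each pixel by a neighbourhood value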

Derivatives are used for edge detection, whereas the Laplacian is used to sharpen image details. The Laplacian is the sum of the second derivatives in the x and y directions.
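
A small sketch of Laplacian sharpening using the standard 5-point discrete Laplacian (the strength factor is an illustrative assumption):

    import numpy as np
    from scipy import ndimage

    kernel = np.array([[0.,  1., 0.],
                       [1., -4., 1.],
                       [0.,  1., 0.]])                       # d2/dx2 + d2/dy2

    def sharpen(img, strength=1.0):
        lap = ndimage.convolve(img.astype(float), kernel, mode='reflect')
        return img - strength * lap                          # subtracting the Laplacian enhances details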

Image Enhancement and Tools

Because visual perception depends on the background light or gray value, image enhancement is a most useful technique for extracting more information from an image.

Some Basics of Image Enhancement:

a. Histogram Modifications:

In this operation, each pixel value is transformed to a new value independently of the other pixel values. Some of the operations are the identity/linear transformation (s = c*r), the logarithmic operation (s = c*log(r+1)), the negative (inverse) operation (s = L - r, with L the maximum gray level), and the power-law (exponential) operation (s = c*r^k).
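
A short sketch of these point transforms, assuming 8-bit gray levels with maximum value L = 255 (the constants c and k are illustrative):

    import numpy as np

    L = 255.0                                       # maximum gray level
    r = np.arange(256, dtype=float)                 # all possible input values

    identity = r                                    # s = c*r with c = 1
    log_op = L / np.log(1.0 + L) * np.log(1.0 + r)  # s = c*log(r + 1), c chosen so s stays in [0, L]
    negative = L - r                                # s = L - r
    power_op = L * (r / L) ** 0.5                   # s = c*r^k, here k = 0.5 (gamma correction)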

b. Histogram Equalization:

In this operation, whatever the probability distribution of the pixel values may be, the new distribution will be an (approximately) uniform distribution. A new transfer function is defined, based on the cumulative distribution of the original pixel values, so that the equalized distribution results.

Local histogram equalization is preferred over global histogram equalization when more information is to be revealed in the image, at the price of a higher computational cost for the equalization operations.
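
A minimal sketch of global histogram equalization, assuming an 8-bit grayscale image stored as an integer array:

    import numpy as np

    def equalize(img):
        hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
        cdf = hist.cumsum() / hist.sum()                   # empirical cumulative distribution
        transfer = np.round(255 * cdf).astype(np.uint8)    # transfer function built from the CDF
        return transfer[img]                               # look up the new value of every pixel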

c. Histogram Matching:

In this operation a target histogram is given, and it need not be uniform; it can be an arbitrary distribution. The transfer function must be determined from this given probability distribution. To do so, the histogram of the input image is mapped to the equalized (uniform) distribution, and the given target distribution is also mapped to the equalized distribution. Composing the first mapping with the inverse of the second gives the transfer function that produces the required distribution.
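
A brief sketch of this matching procedure, assuming 8-bit images, where source is the image to be transformed and reference supplies the target histogram (both names are illustrative):

    import numpy as np

    def match_histogram(source, reference):
        src_hist, _ = np.histogram(source, bins=256, range=(0, 256))
        ref_hist, _ = np.histogram(reference, bins=256, range=(0, 256))
        src_cdf = src_hist.cumsum() / src_hist.sum()       # equalizing map of the source image
        ref_cdf = ref_hist.cumsum() / ref_hist.sum()       # equalizing map of the target distribution
        levels = np.arange(256)
        # invert the target CDF by interpolation and compose it with the source CDF
        mapping = np.interp(src_cdf, ref_cdf, levels).astype(np.uint8)
        return mapping[source]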

Image Compression

Image compression is done in two stages.

First, the spatial image is mapped to a domain suitable for compression, where the mapping concentrates most of the information in the first few coefficients of the transformed domain. For smaller information loss, the second, third, and further coefficients are also retained for the reconstruction of the image.

Second, the compression is completed using a symbol coder that represents the transformed coefficients according to their histogram: the higher the probability of occurrence, the shorter the code length, and the lower the probability, the longer the code length.

The transform that concentrates the most information in the first coefficients, and therefore gives the minimum mean-squared error, is the Karhunen-Loeve Transform (KLT). Its problem, however, is that the coefficients of the transformation matrix are image dependent. In one particular case, the Markovian condition in which each pixel depends on the pixel next to it, the KLT is exactly equal to the Discrete Cosine Transform (DCT). Another reason to take the DCT as the transformation technique is its mirrored (even) periodicity.
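
A small sketch of the first (transform) stage on a single 8x8 block, using the 2-D DCT and keeping only a low-frequency corner of the coefficients (the block size and the number of retained coefficients are illustrative assumptions):

    import numpy as np
    from scipy.fft import dctn, idctn

    def compress_block(block, k=4):
        coeffs = dctn(block, norm='ortho')       # energy concentrates in the top-left (low-frequency) corner
        truncated = np.zeros_like(coeffs)
        truncated[:k, :k] = coeffs[:k, :k]       # keep only the first k x k coefficients
        return idctn(truncated, norm='ortho')    # approximate reconstruction from few coefficients

    block = np.random.rand(8, 8)                 # stand-in for one 8x8 image block
    approx = compress_block(block)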

Once the transformed coefficients are obtained, the symbolic representation of the coefficients is produced with Huffman coding, in which the code length depends on the probability of each symbol.
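
A compact sketch of Huffman code construction from symbol probabilities (the example frequencies are illustrative):

    import heapq

    def huffman_codes(freqs):
        # freqs: dict mapping each symbol to its probability (or count)
        heap = [[weight, index, {sym: ''}] for index, (sym, weight) in enumerate(freqs.items())]
        heapq.heapify(heap)
        next_index = len(heap)
        while len(heap) > 1:
            w1, _, codes1 = heapq.heappop(heap)            # the two least probable subtrees...
            w2, _, codes2 = heapq.heappop(heap)
            merged = {s: '0' + c for s, c in codes1.items()}
            merged.update({s: '1' + c for s, c in codes2.items()})
            heapq.heappush(heap, [w1 + w2, next_index, merged])  # ...are merged into one node
            next_index += 1
        return heap[0][2]

    # Frequent symbols get short codewords and rare symbols long ones;
    # for these probabilities the code lengths come out as 1, 2, 3 and 3 bits.
    codes = huffman_codes({'a': 0.50, 'b': 0.25, 'c': 0.15, 'd': 0.10})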