The segmentation of structure from 2D and 3D images is an important first step for a variety of image analysis and visualization tasks. We are particularly interested in medical image analysis, where examples of such tasks include the registration of images obtained from two modalities, quantitative analysis of anatomical structures, priors for image reconstruction in another modality, and cardiac motion tracking, among others.
Some of the issues that make image segmentation difficult, and which we try to address here, are as follows:
(1) The imaging process, typically MRI, SPECT or CT, has inherent limitations that result in noisy, less than perfect images. Noise is a problem in virtually every acquisition.
(2) Even when the sensitivity of the image data for a given task is very high, as is the case with SPECT imaging, the high frequency information is often distorted, resulting in fuzzy, unreliable edges.
(3) The shape of the same structure can vary from image to image, even within a single modality.
(4) The grey scale values and their distributions vary from image to image, even within a single application such as the segmentation of structures from magnetic resonance images: pixels belonging to a single class may exhibit different intensities across images, or even within the same image.
There are basically two different approaches to image segmentation. One is region based, relying on the homogeneity of spatially localized features; the other is based on boundary finding, relying on gradient features at a subset of the spatial positions of an image (near an object boundary). Neither approach, however, can completely handle all of the above issues on its own.
While the presence of noise limits any image processing method, region based methods are less affected by it than gradient based boundary finding, since the gradient is very sensitive to noise. Similarly, when the high frequency information in an image is missing or unreliable, boundary finding becomes more error-prone than region based segmentation. Shape variations, on the other hand, are better handled in a deformable boundary finding framework: such variations typically occur around an average shape, and that information can easily be incorporated as a prior. Further, since conventional boundary finding relies on changes in the grey level rather than on the actual values, it is less sensitive than region based segmentation to changes in the grey scale distributions across images. Gradient based methods also generally localize edges better. Both methods thus have their advantages and limitations. The main objective of this paper is to develop an integrated approach to image segmentation that combines the strengths of both.
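The contrast between the two behaviours can be illustrated on a toy 1-D signal (a sketch of our own, not from any particular method): finite differencing amplifies noise and flags spurious edge candidates, while a simple region classification by intensity, whose threshold we choose by hand here, averages the noise out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "image": a step edge between indices 49 and 50,
# corrupted by additive Gaussian noise.
signal = np.where(np.arange(100) < 50, 1.0, 3.0)
noisy = signal + rng.normal(0.0, 0.3, size=100)

# Gradient-based edge detection: finite differences amplify the noise,
# so locations besides the true edge may also show a large response.
grad = np.abs(np.diff(noisy))
edge_candidates = np.flatnonzero(grad > 0.8)

# Region-based classification: thresholding on intensity is far less
# affected by noise, since each region's values cluster around its mean.
labels = (noisy > 2.0).astype(int)

print("true edge between indices 49 and 50")
print("gradient candidates:", edge_candidates)
print("region label changes:", np.flatnonzero(np.diff(labels)))
```

Note, however, that the region labels only say which class each pixel belongs to; the precise boundary position is better pinned down by the gradient, which is the localization advantage mentioned above.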
There has been only very limited previous work seeking to integrate region growing and edge detection. The difficulty lies in the fact that even though the two methods yield complementary information, they involve conflicting and incommensurate objectives: region based segmentation capitalizes on the homogeneity of the data, whereas boundary finding exploits the non-homogeneity of the same data. Thus, as observed in , even though integration has long been a desirable goal, achieving it is non-trivial. Among the available methods,  uses AI based techniques in which production rules are invoked to remove conflicts. Other efforts have used probability-based approaches (see e.g. ), where the aim is often to maximize the a posteriori probability of the region classified image given the raw image data, using optimization methods such as simulated annealing. Integration here is achieved in the local, or dense field, sense: the edges are modeled as line processes, and the optimization is carried out both over the locations of the line processes and over the pixel classification.
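The flavour of such probability-based formulations can be sketched as an energy to be minimized, in the style of a weak-membrane model; the function below is illustrative only (our naming and parameter choices, not the formulation of any cited work). Labels are penalized for deviating from their class means and for disagreeing with neighbours, except where a binary line process element is switched on, at the cost of a per-element penalty.

```python
import numpy as np

def energy(image, labels, h_lines, v_lines, means, lam=1.0, alpha=0.5):
    """Illustrative MAP energy E(f, l | g) with line processes.

    image   : (H, W) observed grey values g
    labels  : (H, W) integer pixel classification f
    h_lines : (H, W-1) line process between horizontal neighbours
    v_lines : (H-1, W) line process between vertical neighbours
    means   : per-class mean intensities
    """
    # Data term: squared error between pixels and their class means.
    data = np.sum((image - means[labels]) ** 2)
    # Smoothness between horizontal neighbours, suspended where a line is on.
    dh = (labels[:, 1:] != labels[:, :-1]).astype(float)
    smooth_h = np.sum(lam * dh * (1.0 - h_lines))
    # Same for vertical neighbours.
    dv = (labels[1:, :] != labels[:-1, :]).astype(float)
    smooth_v = np.sum(lam * dv * (1.0 - v_lines))
    # Penalty per line element, so lines are not declared freely.
    penalty = alpha * (np.sum(h_lines) + np.sum(v_lines))
    return data + smooth_h + smooth_v + penalty
```

In the schemes discussed above, an optimizer such as simulated annealing searches jointly over the labels and the line processes; turning a line element on only pays off where a genuine discontinuity would otherwise incur repeated smoothness costs.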
Our overall goal is to develop a fully bi-directional framework for integrating boundary finding and region based segmentation: a system in which the two modules operate in parallel, each updating its output at every step using the outputs of both modules from the previous iteration. The initial effort presented in this paper, one portion of that complete system, uses the results of region based segmentation to assist boundary finding. It is unique for several reasons. First, unlike the methods above, it integrates boundary finding and region based segmentation rather than edge detection and region growing. Despite their similarities, edge detection and boundary finding are fundamentally different procedures. Both use gradient information, but edge detection, unlike boundary finding, is a local process with little or no notion of shape. The problem with using edge detectors alone to locate a boundary is that the edges found need not correspond to object boundaries. Moreover, except for high quality images obtained under unrealistic conditions, edge detection produces spurious and broken edges. This is because edge detectors rely entirely on local image information: any location where the gradient is high is marked as an edge point, even if it is a local noise artifact. Boundary finding, in contrast, typically uses more global, model based information. Since we integrate region based segmentation with boundary finding, all the advantages of boundary finding over region based segmentation are embedded in our method, unlike in the existing efforts; ours is thus more shape based. This is particularly suited to medical images, where the structures of interest vary around an average shape characteristic of the task or application.
In methods like , region growing is performed first, followed by a binary edge detection step. This procedure has several disadvantages. First, a region classified image is often oversegmented due to noise, so a validation scheme is needed to distinguish true from false edges by examining gradient strength, continuity, etc. Second, such a scheme cannot differentiate between multiple regions, since it operates on the binary edge map obtained from the region grown image. Further, it may suffer from poor edge localization, as is often the case with region based segmentation. The method proposed here, in contrast, eliminates all the intermediate steps, making it computationally attractive, and, as we shall see later, can handle multiple regions. Also, since the final boundary obtained here is a compromise between the region classified image and the gradient image, the edges are better localized.
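The loss of region identity mentioned above is easy to see in a small example (a sketch of our own, not taken from any cited method): once a labeled image is reduced to a binary edge map, the boundary between regions 1 and 2 and the boundary between regions 1 and 3 become indistinguishable.

```python
import numpy as np

# A toy region-classified image with three labeled regions.
labels = np.array([[1, 1, 2, 2],
                   [1, 1, 2, 2],
                   [3, 3, 3, 3]])

# Binary edge map: mark a pixel wherever it differs from its right
# or lower neighbour.  All region identity is collapsed to 0/1.
edges = np.zeros_like(labels)
edges[:, :-1] |= (labels[:, :-1] != labels[:, 1:]).astype(labels.dtype)
edges[:-1, :] |= (labels[:-1, :] != labels[1:, :]).astype(labels.dtype)

# The 1|2 boundary and the 2|3 boundary both become the value 1, so a
# downstream edge-validation step cannot tell which regions meet there.
print(edges)
```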
Finally, since our method is a global, shape based procedure, it is more robust to noise and outliers than the other, local methods.