Next: Results Up: Deformable Boundary Finding Previous: Boundary Finding Parametrization


As we have already discussed in the introduction, the basic premise of this paper is that the integration of region based segmentation and boundary finding methods results in an improved algorithm for solving the segmentation problem. In this section, we first discuss the motivation behind our approach and then describe it mathematically.

Thus our aim is to devise an integrated boundary finding method which incorporates region based information as well. Hence, as input to the problem, we have both the actual image $I$ and the region classified image $I_s$, which is obtained from $I$ after passing it through a region based segmentation step as has already been described. We shall also assume that the interior of the boundary that we are trying to find belongs to a single region in $I_s$. As we shall see later, this assumption is not strictly necessary, but for the sake of simplicity we continue with it. The traditional boundary finding problem does not use the original image directly. As it is a gradient based approach, it uses instead the gradient image $I_g$, which is formed by taking the first derivative of $I$. As in Staib et al.[1], we shall use the magnitude of the gradient vector at each pixel location. Thus, we obtain $I_g$ from $I$ by first convolving with a Gaussian to smooth the effects of noise, followed by taking a finite difference approximation to the partial derivatives in the two directions and then calculating the magnitude of the gradient vector. Thus, finally, the inputs to the system are the gradient image $I_g$ and the region classified image $I_s$.
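The computation of the gradient image just described can be sketched as follows (a minimal illustration of ours, not the authors' code, assuming NumPy; the function name `gradient_image` and the toy image are ours):

```python
import numpy as np

def gradient_image(image, sigma=1.0):
    """Form I_g from I: Gaussian smoothing (separable convolution),
    finite-difference partial derivatives, then the gradient magnitude."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    # Separable Gaussian smoothing: convolve rows, then columns.
    smoothed = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, image.astype(float))
    smoothed = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, smoothed)
    gy, gx = np.gradient(smoothed)   # finite-difference partials in y and x
    return np.hypot(gx, gy)          # magnitude of the gradient vector

# Toy image: a bright square on a dark background.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
Ig = gradient_image(img, sigma=1.0)
# Ig peaks on the square's edges and is near zero in flat regions.
```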

We shall pose the above boundary estimation problem using gradient and region homogeneity information in the maximum a posteriori framework. This is suitable for incorporating a priori shape information if available. Without the priors, it becomes similar to the maximum likelihood estimation problem.

Thus, we want to maximize $P(p \mid I_g, I_s)$ where, as described in the previous section, $p$ is the vector of parameters used to parametrize the contour.

Thus, ignoring the denominator $P(I_g, I_s)$, which does not change with $p$, our aim is to

\[
\hat{p} \;=\; \arg\max_{p} \frac{P(I_g, I_s \mid p)\, P(p)}{P(I_g, I_s)}
\;=\; \arg\max_{p} P(I_g, I_s \mid p)\, P(p) \tag{7}
\]
\[
\hat{p} \;\approx\; \arg\max_{p} P(p)\, P(I_g \mid p)\, P(I_s \mid p) \tag{8}
\]
\[
\hat{p} \;=\; \arg\max_{p} \left[ \ln P(p) + \ln P(I_g \mid p) + \ln P(I_s \mid p) \right] \tag{9}
\]

where (7) is true as $P(I_g, I_s)$ is not a function of $p$, with respect to which we are maximizing, and (8) is valid if we neglect the relationship between $I_g$ and $I_s$. In the last equation, we have just taken the natural logarithm, which is a monotonically increasing function.

The first term in equation (9) corresponds to the prior shape term. When it is non-uniform, it biases the model towards a particular range of shapes. However, since there might be other objects in the image, we would always need an initial estimate of the boundary to start the optimization process. The information fusion that we present in this case increases the reliability of the boundary finding procedure under increased uncertainty in the initial boundary placement.

The second term in the above equation is the likelihood term, which depends on the gradient image. It is a measure of the likelihood of the contour of the described object being the true boundary, once the parameters defining the boundary are given. This is expressed in terms of a contour integral, where the integral is computed over $C_p$, the curve described by the boundary parameter vector $p$. At each point on the contour, the strength of the boundary is evaluated by the magnitude of the gradient at that particular point, given by the gradient image. Thus the likelihood of the whole contour being the true boundary becomes proportional to the sum of the magnitudes of the gradient at all the points that lie on the boundary. If we assume that the noise can be approximated by a Gaussian, and further assume that the pixels on the boundary are independent, then we may express the above term in the probability expression as the following line integral (Staib et al.[1]), where $\sigma^2$ is the variance of the underlying noise process:

\[
\ln P(I_g \mid p) \;=\; C_1 + \frac{1}{\sigma^2} \oint_{C_p} I_g\big(x(t), y(t)\big)\, dt
\]

where $C_1$ is a constant that does not depend on $p$.
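As a discrete sketch of this line integral (our own illustration, not the authors' implementation), the likelihood term can be approximated by summing the gradient magnitude at sampled boundary points:

```python
import numpy as np

def contour_gradient_sum(Ig, xs, ys):
    """Discrete stand-in for the line integral of I_g along the contour:
    sum I_g at nearest-pixel samples of the boundary points."""
    xi = np.clip(np.rint(xs).astype(int), 0, Ig.shape[1] - 1)
    yi = np.clip(np.rint(ys).astype(int), 0, Ig.shape[0] - 1)
    return Ig[yi, xi].sum()

# Synthetic gradient image: a bright ring of radius 20 centred at (32, 32).
h = w = 64
Y, X = np.mgrid[0:h, 0:w]
dist = np.hypot(X - 32, Y - 32)
Ig = (np.abs(dist - 20.0) < 1.0).astype(float)

t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
on_edge = contour_gradient_sum(Ig, 32 + 20 * np.cos(t), 32 + 20 * np.sin(t))
off_edge = contour_gradient_sum(Ig, 32 + 10 * np.cos(t), 32 + 10 * np.sin(t))
# A contour lying on the true edge accumulates far more gradient support.
```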
The main contribution of this paper lies in the last term of equation (9), $\ln P(I_s \mid p)$, which incorporates the region based information into the boundary finding framework. As we have already mentioned, we would like the boundary to enclose within it a homogeneous region. In other words, we expect the interior of the boundary to be filled with a region of a single class. Such a notion comes from the fact that, in general, objects that appear in nature are expected to be distinguishable from their exterior by their inherent homogeneity. However, this method can also deal with situations like outlining an annulus, which has the shape of a torus, where the central part might belong to a separate class. To describe this term mathematically, let us consider the situation shown in figure 1. For simplicity's sake, let us assume there are only two regions and we want to segment out the object in the center from the surrounding background. Consider that the boundary shown is an estimate at some point in the optimization process. As we can see, the boundary does not include the entire central region. At some points it includes the background, and at other places it excludes points that belong to the central region. We would like to penalize cases where points from the surrounding regions are included, and to reward the inclusion of more and more of the central region. A very simple way of doing this is the following. Let us consider, for this simple image, all points that lie inside the central region to have value $+1$ and all points that lie outside a value $-1$. Now, as the criterion function sums up the values of all the points that are inside the contour, it is clear that the sum achieves a maximum when the contour is placed such that it includes all of the points with value $+1$ and excludes all of the points with value $-1$ that lie in the surrounding region.

If there is more than one region, all pixels of the region that needs to be segmented out can be assigned a positive value and the remaining ones negative values, the magnitudes of which reflect how strongly one expects the target region to be dissociated from the remaining regions. Hence, remote regions are expected to have high negative values, representing a larger penalty for including remote points. This gives the approach a way of handling multiple regions. For an annulus, e.g. in a transaxial heart image, it does not matter what value is assigned to the points of the region circumscribed by the inner boundary of the annulus.

If we once again assume that the pixels are independent, and further assume that the distribution of the underlying noise process is a Gaussian, then all of the above can be represented by an area integral, where $A_p$ is the area bounded by the contour described by the parameter vector $p$:

\[
\ln P(I_s \mid p) \;=\; C_2 + \frac{1}{\sigma_s^2} \iint_{A_p} I_s(x, y)\, dA
\]

where $C_2$ is a constant that does not depend on $p$ and $\sigma_s^2$ is the variance of the underlying noise process.
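A discrete sketch of this area-integral term (our illustration; the $\pm 1$ labelling follows the two-region example above, and the circular trial contour is chosen only to keep the inside test simple):

```python
import numpy as np

h = w = 64
Y, X = np.mgrid[0:h, 0:w]

# Region-classified image: +1 inside a disk-shaped object of radius 12, -1 outside.
obj = np.hypot(X - 32, Y - 32) <= 12.0
Is = np.where(obj, 1.0, -1.0)

def region_term(Is, cx, cy, r):
    """Area integral of I_s over the interior of a circular trial contour,
    approximated as a sum over the pixels inside the circle."""
    inside = np.hypot(X - cx, Y - cy) <= r
    return Is[inside].sum()

matched = region_term(Is, 32, 32, 12)    # contour coincides with the object
too_small = region_term(Is, 32, 32, 6)   # misses part of the object
misplaced = region_term(Is, 20, 20, 12)  # includes many background pixels
# The term is maximized when the contour encloses exactly the +1 region.
```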

Hence, finally, we have

\[
\hat{p} \;=\; \arg\max_{p} \left[ \ln P(p) + \lambda_1 \oint_{C_p} I_g\big(x(t), y(t)\big)\, dt + \lambda_2 \iint_{A_p} I_s(x, y)\, dA \right]
\]

where $\lambda_1$ and $\lambda_2$ are weighting constants which signify the relative importance of the two terms in the above equation. Normally, one should choose them such that the contributions from each of the terms are comparable.
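The combined objective can be sketched as follows (our illustration only: the contour is restricted to a circle parameterized by centre and radius so that both integrals are easy to discretize, and all names and the synthetic inputs are ours):

```python
import numpy as np

h = w = 64
Y, X = np.mgrid[0:h, 0:w]

# Synthetic inputs: a disk object of radius 12 at the image centre.
d_centre = np.hypot(X - 32, Y - 32)
Is = np.where(d_centre <= 12.0, 1.0, -1.0)          # region-classified image
Ig = (np.abs(d_centre - 12.0) < 1.0).astype(float)  # gradient ridge at the edge

def M(cx, cy, r, lam1=1.0, lam2=1.0):
    """Objective: lam1 * (line integral of Ig along the circle)
    + lam2 * (area integral of Is over its interior); uniform prior omitted."""
    t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
    xi = np.clip(np.rint(cx + r * np.cos(t)).astype(int), 0, w - 1)
    yi = np.clip(np.rint(cy + r * np.sin(t)).astype(int), 0, h - 1)
    grad_term = Ig[yi, xi].sum()
    region_term = Is[np.hypot(X - cx, Y - cy) <= r].sum()
    return lam1 * grad_term + lam2 * region_term

# A crude search over the radius recovers the true object size.
best_r = max(range(4, 24), key=lambda r: M(32, 32, r))
```

Both terms peak at the true boundary, so their weighted sum does as well; in the paper the optimization is over the full contour parameter vector rather than this toy circle.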

Of the last two terms, one is an area integral and the other is a line integral. In general, computing a line integral is much less expensive than computing an area integral. Thus we would save a lot of computation, especially when carrying out an iterative optimization procedure, if we could convert the area integral to a line integral, which we have to compute anyway, since the second term, present even in the original boundary finding method, involves computing a line integral. This conversion can be obtained from Green's theorem, as shown below. It also lets us eliminate further processing of the region classified image, which typically involves detecting the binary boundary map.


Hence, using Green's theorem, with $C_p$ traversed counterclockwise,

\[
\iint_{A_p} I_s(x, y)\, dA \;=\; \oint_{C_p} \left( \int_0^{x(t)} I_s\big(m, y(t)\big)\, dm \right) \frac{dy}{dt}\, dt
\]

since choosing $Q(x, y) = \int_0^x I_s(m, y)\, dm$ and $P = 0$ in $\oint_{C} (P\, dx + Q\, dy) = \iint_{A} \big( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \big)\, dA$ gives $\frac{\partial Q}{\partial x} = I_s$.
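A quick numerical check of this area-to-line conversion (our illustration, using a smooth stand-in for the region image and the unit square as the region):

```python
import numpy as np

# Stand-in integrand and its x-antiderivative Q(x, y) = integral_0^x I_s(m, y) dm,
# so that dQ/dx = I_s and Green's theorem equates the area and line integrals.
def I_s(x, y):
    return x * y

def Q(x, y):
    return 0.5 * x**2 * y

# Area integral of I_s over the unit square (midpoint rule on a fine grid).
n = 400
xs = (np.arange(n) + 0.5) / n
Xg, Yg = np.meshgrid(xs, xs)
area_integral = I_s(Xg, Yg).sum() / n**2

# Line integral of Q dy, counterclockwise around the square: dy = 0 on the
# bottom and top edges and Q = 0 on the left edge (x = 0), so only the
# right edge (x = 1, y from 0 to 1) contributes.
y = np.linspace(0.0, 1.0, 1001)
Qy = Q(1.0, y)
line_integral = (0.5 * (Qy[:-1] + Qy[1:]) * np.diff(y)).sum()
# Both approximate 1/4, confirming the conversion.
```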
Thus, in this section we have presented a boundary finding procedure that introduces a new term which incorporates information that we might obtain from region based segmentation. Further, using Green's theorem, we reduce the whole problem to computing line integrals only, rather than both line and area integrals.


Mon Mar 21 19:45:28 EST 1994