
Reasoning with Image Data

The previous sections described how individual structures are represented within the model, and how relationships are represented both between these structures and between them and objects extracted from the input image. This section details the mechanisms that allow multiple objects, extracted from the input image, to be incorporated into the model framework, and the way in which the labelling of these objects is kept consistent. Consistency of labelling is achieved by a "viewpoint" mechanism which allows the modelling of hypothetical alternatives and is similar to the multiple contexts of assumption-based truth maintenance systems (ATMS) [21].

A viewpoint can be thought of as a repository of specific assertions and facts. The data within a viewpoint is specific to that viewpoint and may contradict data in other existing viewpoints. This allows multiple (possibly conflicting) hypotheses to exist simultaneously, but independently of each other, in separate viewpoints within the model environment. Within a viewpoint, new facts can be deduced (via rule application) which exist only in that viewpoint. From the facts and assertions within a viewpoint, new assertions can be made, which leads to a new viewpoint being created and connected to the original one. The viewpoint network is a directed acyclic graph. Any facts which are true in one viewpoint are inherited along the directed arcs connecting that viewpoint to another. The viewpoint at the top of the graph, known as the root viewpoint, holds all the information that is true across all other viewpoints.
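
To make the structure concrete, the following is a minimal sketch, in Python, of a viewpoint as a node in such a directed acyclic graph. The class and slot names (parents, assertions, facts) and the example facts are assumptions made for illustration, not the original implementation.

    class Viewpoint:
        def __init__(self, parents=(), assertions=()):
            self.parents = list(parents)       # arcs towards the root viewpoint
            self.assertions = set(assertions)  # assertions this viewpoint was generated under
            self.facts = set()                 # facts deduced locally by rule application
            self.poisoned = False

        def all_facts(self):
            """Facts true here: local data plus everything inherited along the arcs."""
            inherited = set()
            for parent in self.parents:
                inherited |= parent.all_facts()
            return inherited | self.assertions | self.facts

    # The root viewpoint holds information that is true across all viewpoints.
    root = Viewpoint()
    root.facts.add(("ventricle", "is-a", "model-structure"))

    hypothesis = Viewpoint(parents=[root],
                           assertions={("image-object-12", "INSTANCE-OF", "ventricle")})
    assert ("ventricle", "is-a", "model-structure") in hypothesis.all_facts()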

Pairs of generated viewpoints can merge to form a new viewpoint. This occurs when a rule requires facts held in separate viewpoints to complete its antecedent. The two viewpoints merge, the rule is applied, and the resulting consequent becomes true within the new viewpoint.
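
A sketch of merging, continuing the Viewpoint class above: when a rule's antecedent needs facts held in two separate viewpoints, a child of both is created and the consequent is asserted there. The rule representation (sets of fact triples) is an assumption made for illustration.

    def merge(vp_a, vp_b):
        """Create a new viewpoint inheriting from both parents (if neither is poisoned)."""
        if vp_a.poisoned or vp_b.poisoned:
            return None
        return Viewpoint(parents=[vp_a, vp_b])

    def apply_rule(viewpoint, antecedent, consequent):
        """Assert the consequent locally if every antecedent fact is true in this viewpoint."""
        if antecedent <= viewpoint.all_facts():
            viewpoint.facts.add(consequent)
            return True
        return False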

Viewpoints can thus be generated and merged, and they can also be "poisoned". Poisoning a viewpoint makes it inactive, so it can no longer merge with any other viewpoint. A viewpoint is poisoned by the matching of a specific rule whose purpose is to detect contradictions within it. The construction of these contradiction rules is very important, as they define the circumstances under which independent assertions from separate viewpoints can co-exist in a merged viewpoint. The contradiction rules are applied as soon as they are matched, so as to limit the amount of unnecessary processing done within a viewpoint prior to poisoning. There are two basic conditions under which viewpoints need to be poisoned. Firstly, a viewpoint must be poisoned if one of the assertions under which it was generated is shown to be untrue. Secondly, and more importantly, a viewpoint must be poisoned when the merge of two viewpoints would place two contradictory pieces of information in the merged viewpoint; this leads directly to the poisoning of the merged viewpoint.
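
The following sketches one possible contradiction rule, using the classes above: the same image object asserted as an instance of two different model structures cannot co-exist in one viewpoint, so the merge is poisoned as soon as the rule matches. The specific rule is illustrative; the actual contradiction rules depend on the anatomical model.

    def same_object_two_labels(facts):
        """Detect one image object labelled as an instance of two different structures."""
        labels = {}
        for subj, pred, obj in facts:
            if pred == "INSTANCE-OF":
                if subj in labels and labels[subj] != obj:
                    return True
                labels[subj] = obj
        return False

    def try_merge(vp_a, vp_b, contradiction_rules=(same_object_two_labels,)):
        merged = merge(vp_a, vp_b)
        if merged is None:
            return None
        # Contradiction rules fire as soon as they match, before any further processing.
        if any(rule(merged.all_facts()) for rule in contradiction_rules):
            merged.poisoned = True
            return None
        return merged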

Integrating the Image and Model: It is now necessary to explain how information from an image can be input into the mechanism described above. Information is extracted from the input images using an arbitrary edge detector (in the examples shown in this paper the Canny edge detector was used). The output of the edge detector is then passed to the shape representation module described in section 3.2. This results in a series of discrete objects and a shape description of each object. From this, a series of parameters for each object can easily be found, for example, mean grey level, centre of gravity, principal axis, etc. Spatial adjacencies between objects are also available from the shape representation, i.e. two objects are considered adjacent if they share a common Delaunay triangle side which has been retained as a boundary section.
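
As an illustration of the per-object parameters mentioned above (mean grey level, centre of gravity, principal axis), the following sketch computes them for one segmented region using numpy; the edge detection and Delaunay-based shape description stages are assumed to have been run already and are not reproduced here.

    import numpy as np

    def object_parameters(grey, mask):
        """grey: 2-D image array; mask: boolean array marking one object's pixels."""
        ys, xs = np.nonzero(mask)
        mean_grey = float(grey[mask].mean())
        centroid = (float(xs.mean()), float(ys.mean()))
        # Principal axis: dominant eigenvector of the pixel-coordinate covariance matrix.
        coords = np.stack([xs - xs.mean(), ys - ys.mean()])
        cov = coords @ coords.T / len(xs)
        eigvals, eigvecs = np.linalg.eigh(cov)
        principal_axis = eigvecs[:, np.argmax(eigvals)]
        return {"mean-grey-level": mean_grey,
                "centre-of-gravity": centroid,
                "principal-axis": principal_axis}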

The information about each discrete object is then passed to the model environment where a new frame with a unique identifier is generated to represent the object. The values for the slots are obtained from the image information. The information within the new frame is then forwarded to a matching algorithm along with information from the model about a particular anatomical structure.
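
A minimal sketch of frame generation for one image object, assuming the parameter dictionary of the previous sketch; the slot names and identifier scheme are illustrative rather than those of the original system.

    import itertools

    _frame_ids = itertools.count(1)

    def make_image_object_frame(params, adjacent_ids):
        """Build a frame (here a plain dict) with a unique identifier and image-derived slots."""
        return {
            "id": "image-object-%d" % next(_frame_ids),
            "mean-grey-level": params["mean-grey-level"],
            "centre-of-gravity": params["centre-of-gravity"],
            "principal-axis": params["principal-axis"],
            "adjacent-to": list(adjacent_ids),
        }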

The result of the match is returned to the model environment. If a sufficiently good match is obtained between the object from the image and the model structure, a new viewpoint is generated, with the frame identifying the image object serving as the assertion for this viewpoint. A link is also generated which connects the image object frame directly to the model: an INSTANCE-OF connection between the image object frame and the model object frame to which it has been matched. This link places the matched image object in the inheritance network of the high-level model.
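
The step from a successful match to a new viewpoint and an INSTANCE-OF link could be sketched as follows, reusing the Viewpoint class above; the matching algorithm is treated as a black box, and the threshold value is an invented placeholder rather than a figure from the paper.

    MATCH_THRESHOLD = 0.8   # illustrative value only

    def hypothesise_match(root, frame, model_structure, match_score):
        """If the match is good enough, assert an INSTANCE-OF link in a new viewpoint off the root."""
        if match_score < MATCH_THRESHOLD:
            return None
        link = (frame["id"], "INSTANCE-OF", model_structure)
        return Viewpoint(parents=[root], assertions={link})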

These processes result in a number of hypothesised matches, each generating a new viewpoint from the model (root) viewpoint. They do not preclude an individual image object from being matched to multiple model structures, but each match generates its own independent viewpoint.
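
Continuing the sketch, one image object matched against several candidate structures yields one viewpoint per surviving match; the structure names and scores below are invented for illustration.

    frame = {"id": "image-object-1"}
    candidates = [("lateral-ventricle", 0.91), ("caudate-nucleus", 0.84), ("thalamus", 0.42)]
    hypotheses = [vp for vp in (hypothesise_match(root, frame, name, score)
                                for name, score in candidates)
                  if vp is not None]
    # Two independent viewpoints result; the third match falls below the threshold.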

Once the viewpoints have been generated, the contradiction rules attempt to poison merges between conflicting viewpoints. These processes result in a number of partial solutions in the form of terminal viewpoints (viewpoints which have no other viewpoints generated from them), and the information within these viewpoints is consistent with the high-level model. These partial solutions can now be evaluated individually. This requires closer inspection of the information within the frames describing the image objects and their relations. Any changes to this information are automatically propagated through the viewpoint network. Fig. 2 shows the viewpoint generation process diagrammatically.
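
Collecting the partial solutions could be sketched as below, again assuming the Viewpoint class above and a flat list of every viewpoint created so far; a terminal viewpoint is simply one that is not poisoned and has had no further viewpoint generated from it.

    def terminal_viewpoints(viewpoints):
        """Return the unpoisoned viewpoints from which no other viewpoint was generated."""
        parents = {p for vp in viewpoints for p in vp.parents}
        return [vp for vp in viewpoints if not vp.poisoned and vp not in parents]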




