The links connecting individual frames give the network its intrinsic meaning. These links are a semantic encoding of the acquired neuroanatomical knowledge. There are three types of link between structures:
Although the relative positions of structures "x" and "y" have changed ("right-of" to "anterior-to"), the adjacencies between the structures have remained constant. This observation led to the decision to code adjacency between structures rather than just their relative positions. However, since the relative position can often be specified, the adjacency links are named in accordance with the expected relative positions. This use of expected positions means that hypotheses about particular objects extracted from the image can be made on the basis of their expected relative positions coded in the model and then verified using more complex criteria. Three pairs of links are used in the model to code adjacency: 1) anterior-to and posterior-to; 2) superior-to and inferior-to; and 3) left-of and right-of. If one link of a pair is specified then its complement is automatically inferred, i.e. if object-a is left-of object-b then it is true that object-b is right-of object-a.
Inferring whether a particular object has a particular spatial relationship to another object can be done in one of two ways. Firstly, the information can be extracted directly from the links in the model. For example, the model states via an explicit link that the "left lateral ventricle" is left-of the "left caudate nucleus", and this information can be read off directly. Secondly, if a process needs to calculate whether an object has a specific spatial relationship to another, this can be done by procedures attached to slots. For example, if a process needs to know whether an object "A" extracted from the input images is left-of object "B" (also extracted from the input), then a procedure can be called to calculate the result of the query. These attached procedures (one for each spatial relationship) are LISP functions which call C procedures and return a value of true or false. The procedures can be of arbitrary complexity and can use any information available from the frames for the two objects in question. In the examples shown later in this paper these procedures are simple ones which use only the calculated positions of the two objects. However, they could be extended to use more complex geometrical reasoning strategies.

Part Hierarchy: The second co-existing graph within the model, and a difference between this model of the brain and many others, is the network of "part-hierarchies". This graph specifies anatomical features in terms of their parts and sub-parts, so that features with a number of individual parts can be expressed in terms of their smaller constituents. Many anatomical features form complex three-dimensional shapes within the brain. These complex shapes are often difficult to extract as a single entity, and sometimes the whole structure is not present in any set of images. However, these complex structures do have anatomically significant constituent parts which are easier to recognise.
Expressing the part relationships can simplify the whole recognition task. For example, the task "find the ventricular system" can be decomposed into the sub-tasks of finding the elements of the ventricular system, these sub-tasks decomposed further, and so on. Even if a complete solution cannot be obtained, specific partial solutions can.
Each part/sub-part of a structure is represented by a single frame in the same way as a complete anatomical structure. Each frame has slots for position, shape, spatial adjacency, etc. The part hierarchy makes expressing relationships between structures much easier: spatial adjacencies need only be stated between features stored at the same level of the part-hierarchy, and spatial relationships between features at different levels can then be inferred as required. Certain other types of information, such as tissue type, can also be inferred, although such properties are not inherited automatically in the part hierarchy.
The coding of relations between structures in the model has been carefully designed. There is a frame for each anatomical occurrence of a particular structure; thus, there are separate frames for the left-caudate-nucleus and the right-caudate-nucleus. The part hierarchy is the primary relation, and all structures can be accessed through this pathway.
Inheritance Network: The final co-existing graph within the model is an inheritance network. This network allows frames to inherit values for slots in the absence of values attached directly to the frame in question.
Two types of relation define the inheritance network, namely the "IS-A" relation and the "INSTANCE-OF" relation. These two relations are used to distinguish between class membership within the model and the labelling of entities extracted from the input images. A clear distinction needs to be made between these two types of inheritance so that there is a well-defined boundary between the long-term knowledge coded in the model and the image-dependent information. This becomes particularly important when a large number of potential objects have been detected in the input image.
The "IS-A" relation is used to connect a model entity to the more general class of model entity to which it belongs. For example, "left-frontal-lobe IS-A lobe" and "left-pre-central-gyrus IS-A gyrus" both define structures to be members of a more generic class of structures. Information which is general over the whole class is stored in the more general frame and inherited by the more specific occurrences of the class.
The second relation used to define the inheritance network is the "INSTANCE-OF" relation. This relation provides a label for a feature extracted from the input image, the label being a link to a specific model entity. For example "obj1 INSTANCE-OF left-frontal-lobe" and "obj2 INSTANCE-OF right-frontal-lobe" both provide labellings for two objects extracted from the input data (obj1 and obj2) and consequently a link into the knowledge coded within the model. Inheritance may occur through any sequence of IS-A or INSTANCE-OF links. So, in the examples above, "obj1" would inherit the properties of "left-frontal-lobe" via the INSTANCE-OF relation (except where explicitly over-ridden) but it would also inherit the properties of "lobe" through the IS-A link relating "left-frontal-lobe" to the more generic class "lobe".