◾ Weights on the graph arcs stored in SDT. The ontology engineers must adjust
the weights and thus need to know how to assign meaningful weights based
on ontological commitments.
◾ The value of λ in GRASIM. Our ontology graph is directed; a lexon has a
forward and a backward direction, and λ balances the scores calculated from
the two directions (see the sketch after this list). We observe that this value
does not greatly affect the final similarity score if the weights on the arcs in
the two directions are well balanced, but it affects the score significantly if
they are not.
◾ Structure of the ontology. As GRASIM uses shortest-path values to calculate
similarity scores, the scores are more likely to increase when more arcs (relations
between concepts) are introduced.
◾ Two annotation sets. The annotation sets, which form two subgraphs of the
ontology graph, also affect the final similarity scores. When the two sets almost
fully overlap, the similarity score is very high. When they are completely
disjoint, the score depends heavily on the shortest distance between the two
subgraphs: if the shortest paths between their nodes are short, the score is
high; if they are long, the score is low.
◾ Expertise level. Suppose we have expert A and expert B. Expert A is the
domain expert who provides descriptions for the source materials used to cre-
ate the ontology and annotation sets. Expert B is the evaluator who provides
the expected similarity scores. Differences between expert A's and expert B's
understandings of the competency objects will affect the evaluation results.
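The interplay of λ, the arc weights, and the annotation sets can be made concrete with a small sketch. The Python code below is illustrative only and rests on assumptions: it treats GRASIM-style matching as averaging the shortest-path distances from one annotation set to the other in each direction of the weighted ontology graph, balancing the two directional scores with λ, and converting the combined distance into a similarity in (0, 1]; the actual weighting and aggregation used by GRASIM may differ.

import heapq

def shortest_path(graph, source, target):
    # Dijkstra over a weighted directed graph given as
    # {node: [(neighbor, weight), ...]}; returns inf if target is unreachable.
    dist = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(queue, (nd, neighbor))
    return float("inf")

def directional_score(graph, annotation_a, annotation_b):
    # Average distance from each concept in one annotation set to the
    # closest concept in the other; 0 when the sets fully overlap.
    distances = [
        min(shortest_path(graph, a, b) for b in annotation_b)
        for a in annotation_a
    ]
    return sum(distances) / len(distances)

def grasim_similarity(forward_graph, backward_graph, ann_a, ann_b, lam=0.5):
    # Balance the forward and backward distances with lambda and map the
    # combined distance into a similarity score in (0, 1].
    forward = directional_score(forward_graph, ann_a, ann_b)
    backward = directional_score(backward_graph, ann_a, ann_b)
    combined = lam * forward + (1 - lam) * backward
    return 1.0 / (1.0 + combined)

Under this formulation the observations above follow directly: overlapping annotation sets yield zero distance and a similarity close to 1, additional arcs can only shorten paths and therefore raise scores, and λ only matters when the forward and backward arc weights diverge.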
If all the above factors are analyzed carefully, the evaluation will be reliable and
GRASIM will work properly. How to adjust these factors in a continuously evolv-
ing evaluation environment is an interesting direction for future work.
11.5 Discussion
Knowledge engineers (including ontology engineers) are responsible for analyzing
the raw materials provided by end users, helping them to formalize ontologies, and
configuring the parameters in the matcher using SDTs. End users (including testers
and evaluators) are considered nontechnical domain experts who are responsible
for providing domain knowledge in documents. They also help test and evaluate
matches.
Knowledge engineers annotate the learning materials and company values. In
particular, they ask the company trainers (as end users) to provide textual descrip-
tions of the learning component materials, and the HR manager to provide textual
descriptions of the company values. After the materials are annotated within the
domain ontologies and the decision rules are properly represented in SDTs, tests are
run; the results are illustrated in the next subsection.
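As a hypothetical illustration of what such annotations might look like once the trainers' and HR manager's descriptions have been processed (the material names, value names, and concept labels below are invented, not taken from the case study), each textual description is reduced to a set of ontology concepts, and every material/value pair of annotation sets is then handed to the matcher for scoring:

# Hypothetical annotation sets: each textual description is mapped to the
# set of ontology concepts it was annotated with.
learning_material_annotations = {
    "negotiation_course": {"Communication", "Persuasion", "ConflictResolution"},
    "safety_training": {"RiskAwareness", "Compliance"},
}
company_value_annotations = {
    "customer_focus": {"Communication", "Empathy"},
    "integrity": {"Compliance", "Transparency"},
}

# In practice each pair of annotation sets would be scored by the matcher,
# e.g. with a GRASIM-style similarity as sketched earlier; here we only list
# the shared concepts for each pair.
for material, material_concepts in learning_material_annotations.items():
    for value, value_concepts in company_value_annotations.items():
        overlap = material_concepts & value_concepts
        print(material, value, "shared concepts:", sorted(overlap))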