314 ◾  Yan Tang, Robert Meersman, and Jan Vanthienen
[Figure 11.4 here: bar chart "Similarity Result (part 2)", scores 0–0.8. Learning materials on the axis: Problem solving and decision making; Flexibility in changing circumstances; Communication; Understanding the BT market; Self management and professionality; Identifying customer needs; Technical organization of the customer site; Influencing the customers' expectation; Identifying customers' needs and build a customer selling...; Problem solving and decision making in groups; Decision making: Implementation and evaluation; Attendance management guidance for line managers; Decision making (HARVARD); 3 Day MBA; Cross-selling in a customer service call; BT Valuing ability; Keep it simple; Bright ideas; The six levels of listening; Communications skills web seminar; Communications in major deals; Communicating for results; Improving your cross-cultural communications; Understanding your customer; ITIL V3-Continual service improvement fundamentals; ITIL V3-Service operation processes; ITIL V3-Service operation principles and functions; ITIL V3-Service transition processes and principles; ITIL V3-ITIL and the service lifecycle; ITIL V3-Service strategy fundamentals; ITIL V3-Service strategy processes; ITIL V3-Service design fundamentals; ITIL V3-Service design processes. Series: Heart, Bottom line, Drive for results, Customer connected.]

Figure 11.4 Similarity scores between the listed learning materials and the heart, bottom line, drive for results, and customer-connected company values. Scores are calculated with Dijkstra's algorithm using SDT and GRASIM.
Adding Semantics to Decision Tables ◾  315
Table11.8 Lexon Table: Cross-Selling in Customer Service Call
Head term Role Co-role Tail term
Company has characteristic
of
is characteristic
of
flexibility
Flexibility is a Is key factor
key factor is a Is factor
customer call
centre
is part of has part company
CSA is a Is person
Person handle is handled by call
Person help is helped by customer
Person provide is provided by solution
Solution has characteristic
of
is characteristic
of
efficiency
Solution has characteristic
of
is characteristic
of
effectiveness
Person cross-sell is crossed-sold
by
product
Person prepare for is prepared by cross-selling call
Cross-selling call is a Is call
Table11.9 Lexon Table: Straightforward
Head term Role Co-role Tail term
Person has characteristics is characteristics of simplicity
Person has characteristics is characteristics of clarity
Employee has characteristics is characteristics of simplicity
Employee has characteristics is characteristics of clarity
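The lexon tables above translate directly into code. A minimal Python sketch (the rows are abridged from Table 11.8; how GRASIM actually stores arcs internally is not specified here, but each lexon contributes one arc per direction, as described later in this chapter):

```python
# A lexon is a 4-tuple: (head term, role, co-role, tail term).
# Rows abridged from Table 11.8.
lexons = [
    ("Company", "has characteristic of", "is characteristic of", "flexibility"),
    ("Flexibility", "is a", "is", "key factor"),
    ("Person", "handle", "is handled by", "call"),
    ("Person", "cross-sell", "is cross-sold by", "product"),
]

def lexons_to_arcs(lexons):
    """Each lexon yields two directed arcs in the ontology graph:
    head --role--> tail (forward) and tail --co-role--> head (backward)."""
    arcs = []
    for head, role, co_role, tail in lexons:
        arcs.append((head, role, tail))      # forward direction
        arcs.append((tail, co_role, head))   # backward direction
    return arcs
```

For instance, the lexon (Person, handle, is handled by, call) yields both the forward arc (Person, handle, call) and the backward arc (call, is handled by, Person).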
316 ◾  Yan Tang, and Robert Meersman, and Jan Vanthienen
ELSE Low Boundary Bias = Similarity Score – Low Boundary
IF (Similarity Score < High Boundary)
THEN High Boundary Bias = High Boundary – Similarity Score
ELSE High Boundary Bias = Similarity Score – High Boundary
For instance, if the similarity score for relevance level 4 is 0.45, the low boundary bias is 0.6 – 0.45 = 0.15 and the high boundary bias is 0.8 – 0.45 = 0.35. The bias is the smaller of 0.15 and 0.35: 0.15. We say a similarity score is satisfied if its bias is smaller than the calibration. A score is not really satisfied if its bias is smaller than twice the calibration; it falls in the neighboring similarity score ranges. If a similarity score does not match any of the above requirements, we say that it is completely unsatisfied.
The total numbers of completely satisfied, satisfied, not really satisfied, and completely unsatisfied scores are 32, 81, 55, and 33, respectively. Figure 11.5 shows the evaluation results.
Based on our observations, the factors that affect the satisfaction rate are as follows:
Table11.10 Interpretation of Relevance Scores Provided by
Users
Relevance Level Low Boundary (>) High Boundary (≤)
1 0 0.146
2 0.146 0.292
3 0.292 0.438
4 0.438 0.584
5 0.584 1
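Table 11.10 can be encoded directly; a minimal Python sketch (the function name is illustrative — each level covers the half-open interval (low, high], with levels 1–4 being 0.146 wide and level 5 covering (0.584, 1]):

```python
def relevance_level(score):
    """Map a similarity score in (0, 1] to a relevance level per Table 11.10."""
    highs = [0.146, 0.292, 0.438, 0.584, 1.0]
    for level, high in enumerate(highs, start=1):
        if score <= high:
            return level
    raise ValueError("similarity score must lie in (0, 1]")
```

For example, a score of 0.45 falls in (0.438, 0.584] and maps to relevance level 4.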
[Figure 11.5 here: pie chart "GRASIM similarity scores vs. user expected scores (3), λ = 0.5": Completely satisfied 16%, Satisfied 40%, Not really satisfied 27%, Completely unsatisfied 17%.]
Figure 11.5 GRASIM evaluation results.
- Weights on the graph arcs stored in SDT. The ontology engineers must adjust the weights and thus need to know how to assign meaningful weights based on ontological commitments.
- The value of λ in GRASIM. Our ontology graph is directed. A lexon has forward and backward directions. The λ value balances the results calculated from the two directions. We observe that this value does not greatly affect the final similarity score if the weights on the arcs in the two directions are well balanced; it affects the score significantly if they are not.
- Structure of the ontology. As GRASIM uses shortest path values to calculate similarity scores, the scores will more likely increase if more arcs (relations between concepts) are introduced.
- Two annotation sets. The annotation sets also affect the final similarity scores; they are two subgraphs in the ontology graph. When these two annotation sets almost fully overlap, the similarity score is very high. When they are completely disparate, the similarity score depends heavily on the shortest distance between the two graphs. If the shortest paths to all the nodes from these two subgraphs are small, the score is high; if they are large, the similarity score is low.
- Expertise level. Suppose we have expert A and expert B. Expert A is the domain expert who provides descriptions for the source materials used to create the ontology and annotation sets. Expert B is the evaluator who provides the expected similarity scores. Differences between expert A's and expert B's understandings of the competency objects will affect the evaluation results.
If all the above factors are well analyzed, a very good evaluation will result and GRASIM will work properly. How to adjust these factors in a continuously evolving evaluation environment presents an interesting future project.
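The role of λ and of the shortest paths between the two annotation subgraphs can be illustrated with a toy sketch. The graph, weights, and similarity normalization below are invented for illustration; GRASIM's actual similarity formula is defined earlier in the chapter:

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source in a weighted directed graph
    given as {node: [(neighbor, weight), ...]}."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def directional_score(graph, set_a, set_b):
    # Average shortest distance from each node of annotation set A
    # to the nearest node of annotation set B, squashed into [0, 1].
    total = 0.0
    for a in set_a:
        dist = dijkstra(graph, a)
        total += min(dist.get(b, float("inf")) for b in set_b)
    return 1.0 / (1.0 + total / len(set_a))

def balanced_similarity(fwd, bwd, set_a, set_b, lam=0.5):
    # lam (λ) weighs the forward-direction score against the
    # backward-direction score of the lexon graph.
    return (lam * directional_score(fwd, set_a, set_b)
            + (1 - lam) * directional_score(bwd, set_b, set_a))
```

Fully overlapping annotation sets yield a score of 1.0; disparate sets score lower as the shortest paths between the two subgraphs grow, and λ shifts the balance when the two directions carry different weights.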
11.5 Discussion
Knowledge engineers (including ontology engineers) are responsible for analyzing the raw materials provided by end users, helping them to formalize ontologies, and configuring the parameters in the matcher using SDTs. End users (including testers and evaluators) are considered nontechnical domain experts who are responsible for providing domain knowledge in documents. They also help test and evaluate matches.

Knowledge engineers annotate the learning materials and company values. In particular, they ask the company trainers (as end users) to provide textual descriptions for learning component materials, and the HR manager to provide textual descriptions for company values. After the materials are annotated within the domain ontologies and the decision rules are properly presented in SDTs, tests are run; the results are illustrated in the next subsection.
11.5.1 Applying Ontology Engineering
Technologies to Decision Tables
Many researchers have dealt with semantics using decision tables. Figure 11.6 shows Alltheweb* statistics on the numbers of online documents that contain discussions of both semantics and decision tables. In 2000, the number of online documents on semantics and decision tables was 30,000; it increased to 563,000 in 2008, more than 18 times the 2000 total. Apparently, decision tables and semantics continue to attract more research and public attention. We collected statistical data from Altavista† and Springer publications‡ as well and drew similar conclusions.
Statistical data is collected based on keywords. The technologies supporting semantics in old publications are not the ones used for OE, despite the notion of semantics emerging earlier than the use of OE to model semantics in computer science. Spoken language (ca. 700,000 BCE) was one of the first key developments in the history of semantics. According to McComb (2004), spoken language, written language (ca. 20,000 BCE), the Golden Age of ancient Greece (ca. 400 BCE), the enlightenment (ca. 1700), pragmatism (ca. 1870), linguistic advances (1930), and artificial intelligence (1960) are the milestones in the history of semantics.
* http://www.alltheweb.com/—this search engine is owned by Yahoo! Inc.
† Altavista is a Google-like Internet search engine. Users can use http://www.altavista.com/ to search online documents.
‡ Springer Verlag is an international publisher based in Heidelberg, Germany. It publishes scientific texts, academic reference books, conference proceedings, and peer-reviewed journals in many scientific fields, e.g., computer science, mathematics, and medicine.
[Figure 11.6 here: bar chart "Alltheweb Online Document Search (data collected on 03 March 2009)", y-axis 0–600,000 documents, x-axis years 2000–2008.]
Figure 11.6 Number of online documents discussing decision tables along with semantics, 2000–2009.