
Nature and Scope of Ontologies

Humera Fiaz (U0671678)

A dissertation submitted to the University Of Huddersfield in accordance with the requirements of the degree BSc Computing Science

University Of Huddersfield

April 2010

Abstract


The purpose of this project was to create an ontology representing the domain of courses in the Department of Informatics. The expressive and formal semantics of an ontology increase the value of existing data and allow data to be shared and reused across applications.

The ontology was developed by adopting the UPON methodology and was created using the Protégé tool.

Table of Contents

Abstract
List of Tables
List of Figures
1. Introduction
2. Background Research
2.1 History
2.2 Definition
2.3 Methodologies
2.3.1 Cyc Project
2.3.2 Uschold and King
2.3.3 Gruninger and Fox
2.3.4 KACTUS
2.3.5 SENSUS
2.3.6 METHONTOLOGY
2.3.7 On-To-Knowledge
2.3.8 CO4 (Cooperative construction of Consensual Knowledge Base)
2.3.9 UPON (Unified Process for Ontology)
2.4 Tools
2.4.1 Ontolingua Server
2.4.2 Ontosaurus
2.4.3 Tadzebao and WebOnto
2.4.4 Protégé
2.4.5 WebODE
2.4.6 OntoEdit
2.4.7 OILEd
2.4.8 DUET
3. Methodology
3.1 Choice of methodology to be adopted
3.1.1 Degree of dependency of the ontology, with the application using it
3.1.2 Differences between methodology and IEEE 1074-1995
3.2 Choice of Tool to create ontology
4. Requirements Workflow
4.1 Domain of Interest
4.2 Business Purpose
4.3 Writing Storyboards
4.4 Application Lexicon
4.5 Competency Questions
4.6 Modeling the related use cases
4.6.1 Use case for searching the ontology
4.6.2 Use case for adding and removing elements from the ontology
5. Analysis Workflow
5.1 Acquiring domain resources to build a domain lexicon
5.2 Building the reference lexicon
5.3 Modeling the application scenario using UML diagrams
5.3.1 Activity Diagram to search the ontology
5.3.2 Activity Diagram of removing elements from the ontology
5.3.3 Activity Diagram of adding elements in the ontology
5.3.4 Class Diagram of DoI Courses Ontology
5.4 Building the reference glossary
6. Design Workflow
6.1 Modeling concepts
6.1.1 Business Actor
6.1.2 Business Object
6.1.3 Business Process
6.1.4 Message
6.1.5 Attribute
6.2 Modeling concept hierarchies & domain specific relationships
7. Implementation Workflow
8. Test Workflow
8.1 Syntactic Quality
8.2 Semantic Quality
8.3 Pragmatic Quality
8.4 Social Quality
9. Evaluation of Product
10. Evaluation of Project
11. Conclusion
12. Bibliography
13. Glossary
DoI – Department of Informatics

List of Tables

Table 1 Provision of methodology processes in the IEEE 1074-1995 standard
Table 2 Application Lexicon for DoI Courses Ontology
Table 3 Potential concepts and relations of DoI Courses Ontology
Table 4 Reference Lexicon of DoI Courses Ontology
Table 5 Reference glossary of DoI Courses Ontology
Table 6 Business Processes of DoI Courses Ontology
Table 7 Attributes of DoI Courses Ontology

List of Figures

Figure 1 The UPON framework (De Nicola et al, 2009)
Figure 3 Use case diagram for searching the DoI Courses Ontology
Figure 4 Use case diagram for adding and removing elements from the DoI Courses Ontology
Figure 5 Activity diagram of searching DoI Courses Ontology
Figure 6 Activity diagram of removing elements from the DoI Courses Ontology
Figure 7 Activity diagram of adding elements to the DoI Courses Ontology
Figure 8 Class diagram of DoI Courses Ontology
Figure 9 Class Diagram showing relationships in DoI Courses Ontology
Figure 10 DoI Courses Ontology Diagram
Figure 11 DoI Courses Ontology Attributes
Figure 12 DoI Courses Ontology Object Properties
Figure 13 DoI Courses Ontology Instances
Figure 14 DoI Courses Ontology Properties
Figure 15 An OWL fragment of DoI Courses Ontology

1. Introduction

The courses available in the Department of Informatics (DoI), in the School of Computing and Engineering, can currently be found in the University of Huddersfield prospectus. This is available online on the University of Huddersfield website and as a hard copy. This is an inadequate means of storing and managing data effectively, as it does not allow data to be shared and reused across applications.

This problem can be overcome by using an ontology, as this allows data to be exported, queried and unified across systems. An ontology has benefits over a relational database as it preserves the semantics of the data and reduces the barriers to data exchange and integration.

The finished product will include the ontology of courses available in the DoI and a report detailing how ontologies are created. The product will provide an improved system that will benefit lecturers, students, administrative staff and department heads in the University of Huddersfield. It will also provide benefits to potential users who want to learn how to create an ontology or reuse the finished product.

The objectives of this project include researching ontology methodologies and tools for building ontologies, and adopting a methodology and tool for the development of the Department of Informatics Courses Ontology.

2. Background Research

2.1 History

Ontologies are data structures borne out of a branch of philosophy known as metaphysics. In philosophy, ontology deals with the nature of reality and the essence of things. Computer scientists adopted the term in the early 1980s to refer to a theory of a modeled world. Whilst philosophers debated possible methods for building ontologies, computer scientists were building large and robust ontologies with little debate over how they were built.

Researchers recognised that capturing knowledge played an important role in building large and powerful systems. They argued that they could create new ontologies as computational models. In the early 1990s, definitions of ontology as a term in computer science emerged.

2.2 Definition

An ontology can be described as a formal representation of knowledge within a domain. It describes a set of concepts and the relationships between these concepts. McCarthy first recognised the overlap between work done in philosophical ontology and the activity of building logical theories in Artificial Intelligence. He established that to build an ontology we must first "list everything" (McCarthy, 1980). Later, Gruber described an ontology as "a formal, explicit specification of a shared conceptualization" (Gruber, 1993), while Guarino argued that "an ontology is a logical theory accounting for the intended meaning of a formal vocabulary" (Guarino, 1998).

Since ontologies are used in many different disciplines, such as e-commerce, natural language processing, knowledge management and the semantic web, Uschold and Jasper provided a new definition of the word ontology to popularise it across these disciplines: "An ontology may take a variety of forms, but it will necessarily include a vocabulary of terms and have some specification of their meaning. This includes definitions and an indication of how concepts are inter-related which collectively impose a structure on the domain and constrains the possible interpretations of terms" (Uschold and Jasper, 1999).

Ontologies allow us to clarify the structure of knowledge. An ontology forms the heart of any system of knowledge representation for a given domain, providing a vocabulary for representing that knowledge.

To create an ontology, a series of methodologies have been developed. These methodologies provide guidelines for building new ontologies, enriching existing ones and acquiring knowledge for a given task. In the next sections I present these methodologies and the tools available for building ontologies.

2.3 Methodologies

The Cyc Project (Lenat and Guha, 1990) was one of the first methodologies, developed as part of the Microelectronics and Computer Technology Corporation. Some years later, Uschold and King developed the Enterprise ontology (1995) at the University of Edinburgh. Uschold and Gruninger (1996) proposed a methodology based on the TOVE project, which was developed at the University of Toronto.

A further approach was adopted by Amaya Bernaras and her colleagues as part of the Esprit KACTUS project (KACTUS, 1996). The METHONTOLOGY methodology appeared around the same time and was extended in further papers (Lopez et al, 1999). The SENSUS ontology was proposed in 1997 (Swartout et al, 1997).

Some years later, the On-To-Knowledge methodology appeared (Staab et al, 2001). The most recent methodology developed is UPON, which is a software engineering approach to ontology building (De Nicola et al, 2009).

2.3.1 Cyc Project

The Cyc project was started in 1984 by Douglas Lenat. The project comprised a knowledge base containing over one million human-defined assertions, or rules. These rules are written in the language CycL, which is based on predicate calculus. The knowledge base is divided into collections of concepts and facts from an area of knowledge, known as microtheories (Mt), organised in a hierarchy. It has three phases, as follows (Lenat and Guha, 1990):

1. Manual coding of articles and pieces of knowledge - this is carried out by hand where knowledge is acquired in 3 ways:

Encoding the knowledge required to understand books and newspapers – searching and representing the underlying knowledge assumed readers already possess

Examination of articles that are unbelievable – study the rationale that makes some articles unbelievable

Identification of questions that "anyone" should be able to answer by having just read the text

2. Knowledge coding aided by tools using the knowledge already stored in Cyc - a search is conducted to acquire new knowledge using tools that analyse natural language.

3. Knowledge codification performed by tools using knowledge already stored in Cyc - most of the work in this phase is delegated to tools, whilst users explain the difficult parts of the text.

Two activities are performed in all of the 3 previous phases:

Development of a knowledge representation and top level ontology containing the most abstract concepts

Representation of the knowledge of different domains using such primitives.

2.3.2 Uschold and King

Uschold and King (1995) proposed some guidelines based on their experience of creating an ontology as part of the Enterprise project. This was developed by the Artificial Intelligence Applications Institute at the University of Edinburgh with its partners IBM, Lloyd's Register, Logica UK Limited and Unilever. They proposed the following phases:

1. Identify the purpose and scope - decide what domain the ontology will cover, the intended use of the ontology and determine who will use and maintain the ontology.

2. Build the ontology - this is broken into 3 activities:

Ontology Capture

identify key concepts and relationships in the domain

produce textual definitions for such concepts and relationships

identify terms that refer to such concepts and relationships

Coding – this involves the following 2 tasks:

Committing to basic terms that will be used to specify the ontology

Writing the code

Integrating existing ontologies – refers to how and whether to use ontologies that already exist.

Textual definitions are built by referring to notions of class and relations. Three strategies are proposed for identifying the main concepts in the ontology.

Top-down development - general concepts in the domain are defined followed by subsequent specialisation of the concepts.

A bottom-up development – most specific concepts are defined and then generalised into more general concepts.

A combination development - combines the top-down and bottom-up approaches. In this process the most important concepts are identified first and then generalised and specialised into other concepts.

3. Evaluate and Document - guidelines for documenting the ontology are established

2.3.3 Gruninger and Fox

Gruninger and Fox (1995) propose a methodology that is inspired by the development of knowledge-based systems using first order logic. It is based on the experience of the TOVE project, which was developed at the University of Toronto. This is a formal methodology that can be used as a guide to transform informal scenarios into computable models. The phases involved in this methodology are as follows:

1. Capture of motivating scenarios - motivating scenarios are story problems or examples which have not been addressed by existing ontologies. This phase provides a set of possible solutions, which provide informal semantics for the objects and relations.

2. Formulation of informal competency questions - informal competency questions are scenarios from phase 1 in the form of questions. These questions are stratified and the response to one question can be used to answer more generic questions.

3. Specification of the terminology - the following steps will be undertaken:

Getting informal terminology – terms are extracted from competency questions and used to specify terminology in a formal language

Specification of formal terminology – terminology is specified using an ontology language

Formulation of formal competency question using terms from the ontology – competency questions are defined formally

Specification of axioms and definitions for the terminology – axioms specify the definition of terms and constraints providing the representation of formal competency questions. These define the semantics, or meaning of terms

Establish conditions for characterising the completeness of the ontology – define conditions in which the answers to the questions are complete

2.3.4 KACTUS

A further approach has been proposed by Amaya Bernaras and her colleagues within the Esprit KACTUS project (KACTUS, 1996). In this approach, ontology development is conditioned by application development. Each time an application is built, the ontology representing the knowledge required for the application is built, either by reusing other ontologies or by integrating the new ontology into existing ones. The following steps are taken:

1. Specification of the application - provides a context and view that the application tries to model.

2. Preliminary design based on ontological categories - involves searching ontologies developed for other applications, which are refined and extended for use in the new application.

3. Ontology refinement and structuring - ensure that the modules are not dependent on each other and are as coherent as possible.

2.3.5 SENSUS

This method proposes linking domain-specific terms to the huge SENSUS ontology and pruning those terms that are irrelevant to the new ontology we wish to build. The result of this process is the skeleton of the new ontology (Swartout et al, 1997). The following steps should be taken:

1. Identify seed terms - key terms that are relevant to a particular domain are identified.

2. Manually link the seed terms to SENSUS - terms are linked manually to a broad coverage ontology.

3. Add paths to the root - all concepts from the seed terms to the root of SENSUS are included.

4. Add new domain terms - add terms that could be relevant and that have not yet appeared. Processes 2 and 3 are repeated until no term is missing.

5. Add complete subtrees - for those nodes that have a large number of paths passing through them, the entire subtree under the node is added.

2.3.6 METHONTOLOGY

This was developed within the Artificial Intelligence Laboratory at the Polytechnic University of Madrid (Lopez et al, 1999). The framework enables ontologies to be constructed at the knowledge level and includes the following phases:

1. Ontology Development Process - activities to be accomplished when building ontologies. These include:

Project Management Activities

Planning – identifies the tasks to be performed, how long they will take and what resources will be required for completion

Control – guarantees that planned tasks are completed as intended

Quality Assurance – assures that each product output is satisfactory

Development-Oriented Activities

Specification – states the purpose of the ontology, the intended uses and who the intended users are.

Conceptualisation – structures the domain knowledge as models at the knowledge level.

Formalisation – transforms the conceptual model into a formal model

Implementation – computable models are built in a computational language

Maintenance – updates and corrects the ontology

Support Activities

These are performed in parallel with development-oriented activities. They include:

Knowledge Acquisition – knowledge of a given domain is attained

Evaluation – technical judgments of the ontology, software environments and documentation are made during each phase and between phases of the life cycle

Integration – integrating existing ontologies.

Documentation – details of completed phases and products generated

Configuration Management – all versions of documentation, software and ontology code are recorded.

2. Ontology Life Cycle - identifies stages of ontology and describes activities at each stage and how they are related.

2.3.7 On-To-Knowledge

The aim of this project was to apply ontologies to electronically available information in order to improve the quality of knowledge management in large distributed organisations (Staab et al, 2001). This approach takes into account how the ontology will be used in further applications; it is therefore application dependent. The following phases are proposed in this methodology:

1. Feasibility Study

2. Kickoff - the result of this phase is a requirements specification that describes the following:

Domain and goal of the ontology

Design guidelines

Available knowledge sources

Potential users and use cases, as well as applications supported by the ontology

Competency questions can be used to elaborate the requirements specification document. Developers should look for potentially reusable ontologies that have already been developed. In this phase a draft version of the ontology is built.

3. Refinement - in this phase a mature and application-oriented ontology is built. This phase is divided into 2 activities.

Knowledge elicitation with domain experts – the first draft obtained in phase 2 is refined by means of interaction with experts in the domain, during which axioms are identified and modeled. Terms and concepts are then mapped.

Formalisation – the ontology is implemented using an ontology language; the language is selected according to the specific requirements of the application.

4. Evaluation - during this phase two activities are carried out.

Checking the requirements and competency questions – check whether the ontology satisfies the requirements and "can answer" the competency questions.

Testing the ontology in the target application environment

5. Process - need to clarify who is responsible for maintenance and how this should be carried out

2.3.8 CO4 (Cooperative construction of Consensual Knowledge Base)

The CO4 system represents formal knowledge by means of objects and tasks, and comprises a set of knowledge bases (KBs). Knowledge bases are organised in a tree whose leaves are user KBs and whose intermediate nodes are called group KBs. A protocol for integrating knowledge helps to enforce the consistency of the content of the KBs.

Before knowledge is introduced into a consensual KB, the community must accept it. This requires submitting the knowledge to the base; participants then review the knowledge and amend or accept it accordingly. This ensures that the knowledge stored in a consensual KB can be safely and confidently used by anyone. Knowledge consensus is achieved by the exchange of messages between users (Euzenat, 1996).

2.3.9 UPON (Unified Process for Ontology)

UPON is based on a software engineering process, the Unified Process (UP). It is intended mainly for large-scale domain ontologies but also provides useful guidelines for small ontologies. UPON consists of cycles, phases, iterations and workflows, and is distinguished by its use-case-driven, iterative and incremental nature (De Nicola et al, 2009). The following steps are proposed in this methodology:

1. Inception - capturing requirements and performing some conceptual analysis

2. Elaboration - analysis is performed and fundamental concepts are identified

3. Construction - additional analysis aimed at identifying concepts overlooked in the previous phases to be further added to the ontology.

4. Transition - testing is performed and the final ontology is released. In parallel, the material necessary to start a new cycle, which will produce the next version of the ontology, is collected.

At each iteration different workflows come into play, and a richer and more complete version of the ontology is produced.

1. Requirements Workflow - specifying the semantic needs and user view of the knowledge to be encoded in the ontology.

2. Analysis workflow - refinement and structuring of the ontology requirements identified in previous workflow.

3. Design workflow - refinement of entities, actors and processes identified in the analysis workflow, including identification of their relationships.

4. Implementation workflow - selecting and encoding the ontology in a formal language. As a result of standardisation efforts OWL is currently the main candidate for encoding the ontology to be used on the semantic web.

5. Test workflow - this workflow is to verify the semantic and pragmatic quality of the ontology.

The diagram (figure 1) illustrates the UPON methodology.

Figure 1. The UPON framework (De Nicola et al, 2009)

2.4 Tools

Tools for building ontologies are aimed at providing support for the ontology development process and ontology reuse. In this section I will present the main tools available.

2.4.1 Ontolingua Server

The Ontolingua Server was developed in the Knowledge Systems Laboratory at Stanford University at the beginning of the 1990s. It was created to be an effective, easy-to-use tool for creating, evaluating, accessing and maintaining reusable ontologies (Farquhar et al, 1996).

A library of ontologies can be accessed via the server. It also allows new libraries to be created and existing ontologies to be modified. There are several modes of interaction with the server.

Remote distributed groups – ontologies can be maintained, built and browsed using web browsers. Shared sessions allow multiple users to work simultaneously on an ontology

Remote applications – ontologies stored at the server can be queried and modified over the Internet.

Translation into a specific format – ontologies can be translated into formats used by ten different applications, including CLIPS, LOOM and Prolog.

2.4.2 Ontosaurus

Ontosaurus was built at the Information Sciences Institute at the University of Southern California by Ramesh Patil and Tom Russ in 1996. It was developed for browsing and editing ontologies and knowledge bases by means of the web. Ontosaurus works with Loom knowledge bases, but other knowledge representations can also be supported (Swartout et al, 2001).

2.4.3 Tadzebao and WebOnto

The Knowledge Media Institute at the Open University developed Tadzebao and WebOnto in 1997 (Domingue, 1998). These two systems were developed to address the deficiencies of earlier approaches of the Ontolingua Server and Ontosaurus.

Tadzebao is a tool which supports synchronous and asynchronous discussions on ontologies. Asynchronous discussions allow an ontology design team that is spread across different time zones to interact.

WebOnto was designed to complement Tadzebao and is a tool for collaboratively browsing and editing ontologies. WebOnto uses OCML, an operational knowledge modeling language to represent expressions.

The Ontolingua Server, Ontosaurus, Tadzebao and WebOnto were oriented towards research activities and were not built as isolated tools. They were created to ease the browsing and editing of ontologies in their specific languages (Ontolingua, LOOM and OCML respectively). In recent years new generations of ontology engineering environments have been developed. The models underlying these environments are language independent. They include Protégé, WebODE and OntoEdit.

2.4.4 Protégé

Protégé was developed by Stanford Medical Informatics at Stanford University. It is a free, open source platform that provides a suite of tools to create ontologies using various formats (Noy et al, 2001). It can be customised to provide support for creating knowledge models and entering data. Protégé can be extended using plug-ins and an API (Application Programming Interface) for building knowledge-based tools.

Protégé supports 2 ways of modeling ontologies:

Protégé – Frames editor – users are able to build and populate ontologies which should conform to the Open Knowledge Base Connectivity (OKBC) protocol

Protégé – OWL editor – enables users to build ontologies using the Web Ontology Language (OWL)

2.4.5 WebODE

WebODE is a tool for modeling knowledge using ontologies. It was developed in the Artificial Intelligence Lab at the Technical University of Madrid (Arpírez et al, 2002). It is based on the METHONTOLOGY methodology and is the counterpart of ODE (Ontology Design Environment).

The WebODE implementation is carried out with technologies such as Java, RMI, CORBA and XML. It is built using a three-tier model:

User Interface – interoperates with other applications via a web browser and is implemented using HTML, CSS (Cascading Style Sheets) and XML (eXtensible Markup Language)

Business Logic – consisting of 2 layers:

Sub Logic – direct access by means of an API

Presentation – responsible for generating content in the user's browser and handling user requests from the client

Data – ontologies are stored in a relational database. This database is accessed using JDBC (Java Database Connectivity).

WebODE ontologies are built at the conceptual level and later automatically translated into different languages such as Ontolingua, Prolog, Loom, CARIN and FLogic. Ontologies can be exported using XML and imported using XML and WebODE servers.

2.4.6 OntoEdit

OntoEdit was developed by AIFB at the University of Karlsruhe as an ontological engineering environment. It supports the development and maintenance of ontologies by utilising graphical menus (Sure et al, 2002).

The conceptual model of an ontology is internally stored using a powerful ontology model which can be mapped onto different representation languages. Transformation modules allow the ontology to be translated from its XML based format to more specific formats.

OntoEdit uses a flexible and expandable plug in framework that allows the functionalities of OntoEdit to be easily extended.

With the emergence of the Semantic Web, the development of DAML+OIL and RDF(S) ontologies has grown rapidly. There are several tools that create DAML+OIL ontologies; these include OILEd and DUET.

2.4.7 OILEd

OILEd was developed by Sean Bechhofer at the University of Manchester (Bechhofer et al, 2001). It is a freeware ontology editor for building ontologies using the Ontology Interchange Language (OIL).

OILEd allows the consistency of ontologies to be checked using the FaCT (Fast Classification of Terminologies) reasoner. It allows ontologies to be exported in a number of formats, including OIL-RDF (Resource Description Framework) and DAML (DARPA Agent Markup Language). It is best suited to learning how to build ontologies, which can then be checked and enriched using the FaCT reasoner, rather than to the development of large-scale ontologies.

2.4.8 DUET

DUET is being developed by the AT&T Government Solutions Advanced Systems Group (Kogut et al, 2002). It offers a UML visualisation and authoring environment for DAML+OIL ontologies. It has been integrated as a plug-in for the Rational Rose suite. DUET represents only the UML static constructs available on class diagrams. Valid UML diagrams will produce valid DAML+OIL ontologies and vice versa. It supports multi-user capabilities, and multiple ontologies may be imported for comparison and merging.

3. Methodology

3.1 Choice of methodology to be adopted

To make an appropriate choice of methodology to be adopted in the development of the DoI Courses Ontology, I compared the following methodologies: Cyc, Uschold and King, Gruninger and Fox, KACTUS, SENSUS, METHONTOLOGY, On-To-Knowledge and UPON, using the criteria:

Degree of dependency of the ontology, with the application using it

Differences between methodology and IEEE 1074 - 1995

3.1.1 Degree of dependency of the ontology, with the application using it

Considering this criterion the methodologies can be classified into the following types:

Application dependent – ontologies are built on the basis of the applications that use them

Application semi-dependent – possible scenarios of ontology use are identified in the specification stage

Application independent – the process is totally independent of the users of the ontology in applications.

Using this criterion, the KACTUS project and the On-To-Knowledge methodology are application dependent, since the ontologies are built on the basis of a given application. The Gruninger and Fox and SENSUS methodologies are application semi-dependent. Cyc, Uschold and King, METHONTOLOGY and UPON are application independent, since the process of ontology development is totally independent of the users of the ontology.

3.1.2 Differences between methodology and IEEE 1074-1995

Fernandez (2002) presents a framework based on the IEEE 1074-1995 standard. The standard describes the software development process, the activities to be carried out and the techniques that can be used to develop software. The framework evaluates different ontology building processes. It distinguishes the following 3 processes:

Project Management Process – concerns the creation of a project management framework for the entire ontology life cycle. This includes project initiation, monitoring and control.

Ontology Development Process – this is divided into the following 3 processes

Pre development – includes environment and feasibility study

Development – includes requirements, design and implementation

Post development – includes installation, operation, maintenance and retirement of an ontology

Integral Process – these processes are needed to successfully complete software project activities. They include knowledge acquisition, evaluation, configuration management, documentation and training.

The support each methodology provides to the processes in the IEEE 1074-1995 standard is shown in table 1.

From Table 1 it can be seen that most methodologies support the development process whilst only On-To-Knowledge and UPON provide full and partial support, respectively, to the pre-development process. Gruninger and Fox and METHONTOLOGY provide partial support to project management activities whereas On-To-Knowledge provides full support. Uschold and King, METHONTOLOGY and UPON also provide full support to Integral processes.

Table 1 Provision of methodology processes in the IEEE 1074-1995 standard

Methodology | Project Management Process | Pre-development | Development | Post-development | Integral Process
Cyc | - | - | + | - | -
Uschold & King | - | - | P | - | +
Gruninger & Fox | P | - | + | - | -
KACTUS | - | - | + | - | -
SENSUS | - | - | + | - | -
METHONTOLOGY | P | - | + | - | +
On-To-Knowledge | + | + | + | - | -
UPON | - | P | + | - | +

(Pre-development, Development and Post-development are the three stages of the Ontology Development Process.)

- unsupported, + supported, P partially supported

When comparing these methodologies to the IEEE standard, none are fully mature, although the On-To-Knowledge methodology is the most mature, fully supporting 3 of the 5 processes. It is followed by UPON and METHONTOLOGY, which each fully support two processes and partially support one.

I will be adopting the UPON methodology to develop the DoI Courses Ontology, as it uses the UML modeling language, which I became familiar with during my first and second years of study. This methodology also uses an iterative approach, which will allow me to refine the ontology to produce a more accurate model of the domain of courses available in the Department of Informatics.

3.2 Choice of Tool to create ontology

I will use Protégé 2000 to create this ontology as it is a free, open source platform that provides tools to create ontologies.

4. Requirements Workflow

The first step in the UPON methodology is the requirements workflow. In this workflow the semantic needs and the user view of the knowledge to be encoded in the ontology are identified. It incorporates the following steps:

Determine the domain of interest – focus on the appropriate knowledge of the model

Define the business purpose – establish the reason for having an ontology and identify its:

Intended users

Intended uses

Writing storyboards – sketches outlining the sequence of activities that take place in a particular scenario

Create an application lexicon – created by collecting terminology from application documents

Identify competency questions – questions an ontology must be able to answer

Modeling the related use cases – competency questions are addressed using use case models. Use cases correspond to paths through the ontology and aim to answer competency questions.

4.1 Domain of Interest

The ontology is modeled for the domain DoI courses which is part of the School of Computing and Engineering in the University of Huddersfield.

4.2 Business Purpose

The purpose of developing the DoI Courses Ontology is to provide a model of the courses domain within the DoI that could be used across the University of Huddersfield. The ontology serves as a base for searching course specifications.

4.3 Writing Storyboards

In the DoI Courses Ontology the sequences of activities that take place are simple scenarios, therefore no storyboards are required.

4.4 Application Lexicon

The application lexicon was created by searching course specification documents that were available online. The terminology I collected is shown in Table 2.

Table 2 Application Lexicon for DoI Courses Ontology

degree course

description

Course code

level

credit points

compulsory course

optional course

learning outcomes

degree awarded

duration

number of applicants

entry requirements

selection criteria

course content

undergraduate degrees

post graduate degrees

school

department

selection policy

qualification

module/unit

credits

contact details

subject areas

core requirements

4.5 Competency Questions

I have identified a series of competency questions which my ontology must be able to answer. These questions will be used in the test workflow to evaluate the ontology. A sketch of how one of these questions could be posed as a query against the finished ontology follows the list.

What is the name of the course?

Is it a full time or part time course?

Is the course to be studied in Huddersfield, Oldham or Barnsley?

How many places are available on the course?

What degree classification will I obtain when I complete the course?

What modules will I study in each year?

Are the modules core or option modules?
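The project itself answers these questions within Protégé rather than in code. Purely as an illustration, the sketch below shows how one of the questions could be posed as a SPARQL query using the Python rdflib library; the file name, namespace, and the class and property names (UndergraduateCourse, isStudiedIn) are assumptions for the example, with isStudiedIn anticipating the potential relations identified later in the analysis workflow.

```python
# Illustrative sketch only: not part of the project's Protégé-based workflow.
# Assumes rdflib is installed and that the ontology has been exported from
# Protégé as RDF/XML; the file name and all names below are hypothetical.
from rdflib import Graph

g = Graph()
g.parse("doi_courses.owl", format="xml")  # hypothetical export of the ontology

# Competency question: "Is the course to be studied in Huddersfield, Oldham or Barnsley?"
query = """
PREFIX doi: <http://www.example.org/doi-courses#>
SELECT ?course ?place WHERE {
    ?course a doi:UndergraduateCourse ;
            doi:isStudiedIn ?place .
}
"""
for row in g.query(query):
    print(f"{row.course} is studied in {row.place}")
```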

4.6 Modeling the related use cases

I have modeled use cases which will allow users to search the ontology to find answers to the competency questions mentioned above. I have also modeled use cases to add and remove elements in the ontology.

4.6.1 Use case for searching the ontology

The use case diagram (Figure 3) depicts how the ontology can be searched. Lecturers, Students, Administrative staff and department heads can search the ontology by course name, module name, degree classification and place of study.

Figure 3 Use case diagram for searching the DoI Courses Ontology

4.6.2 Use case for adding and removing elements from the ontology

The use case diagram (Figure 4) depicts how elements can be added or removed from the ontology. Lecturers, Administrative staff and department heads can add or remove courses, modules, degree classifications and places of study.

Figure 4 Use case diagram for adding and removing elements from the DoI Courses Ontology

5. Analysis Workflow

In the analysis workflow the requirements specified in the requirements workflow are refined and structured. This is done by completing the following steps:

Acquiring domain resources to build a domain lexicon – the domain lexicon is built by gathering terminology from documents such as reports, standards and glossaries

Building the reference lexicon – built by merging the application lexicon and domain lexicon

Modeling the application scenario using UML diagrams – adding activity and class diagrams to the use case diagrams created in the requirements workflow

Building the reference glossary – informal definitions are added to terms in the reference lexicon

5.1 Acquiring domain resources to build a domain lexicon

The domain lexicon is shown below. It has been created by collecting terminology from the course specification documents which were available online on the University of Huddersfield website.

Course

Module

Classification

Length of study

Full time study

Part time study

Number of places

Place of study

Core modules

Option modules

This was then used to identify the potential concepts and relations (Table 3) that will be used to create the DoI ontology.

Table 3 Potential concepts and relations of DoI Courses Ontology

Potential Concepts: Name of course; Course classification; Length of study; Places available; Place of study; Full/Part time study; Core/Option module

Potential Relations: hasA; isStudiedIn; includes; willObtain

5.2 Building the reference lexicon

The reference lexicon is shown below (Table 4); it was created by merging the application and domain lexicons.

Table 4 Reference Lexicon of DoI Courses Ontology

degree course

undergraduate degree

course classification

postgraduate degree

duration

place of study

places available

contact details

full-time/part time study

core/option module

5.3 Modeling the application scenario using UML diagrams

The following activity diagrams model the activities involved in searching, adding and removing elements in the ontology.

5.3.1 Activity Diagram to search the ontology

Figure 5 Activity diagram of searching DoI Courses Ontology

5.3.2 Activity Diagram of removing elements from the ontology

Figure 6 Activity diagram of removing elements from the DoI Courses Ontology

5.3.3 Activity Diagram of adding elements in the ontology

Figure 7 Activity diagram of adding elements to the DoI Courses Ontology

5.3.4 Class Diagram of DoI Courses Ontology

The following class diagram (Figure 8) shows the classes that will be required in the DoI Courses Ontology. This includes the abstract classes DegreeCourse and Module. It also includes the attributes of each class.

Figure 8 Class diagram of DoI Courses Ontology

5.4 Building the reference glossary

The reference glossary (Table 5) has been created by adding definitions to terms from the reference lexicon.

Table 5 Reference glossary of DoI Courses Ontology

degree course – name of a degree course

course classification – classification of degree that will be obtained on completion of a course

duration – time taken to complete a given course

places available – number of places available on a course

undergraduate degrees – courses available on an undergraduate degree

postgraduate degrees – courses available on a post-graduate degree

place of study – location where the degree is to be studied

contact details – contact details of the organisation where the course will be attended

module – name of a module

core/option module – specifying whether a module is mandatory (core) or optional

full-time/part time study – specifying whether the course can be studied full or part time

6. Design Workflow

The aim of this workflow is to provide a structure to the set of entries in the reference glossary. This is done with attributes and axioms. The steps involved in this workflow are as follows:

Modeling concepts – each concept is characterised into one of the following primary categories

Business actor – able to activate, perform or monitor a business process

Business object – entity on which a business process operates

Business process – business activity aimed at fulfilling a business goal

Once the primary categories have been identified they are enriched by introducing complementary categories

Message – information exchanged during an interaction

Attribute – information structuring of a concept

Modeling concept hierarchies and domain specific relationships – in this phase concepts are organised into a hierarchy and formal relations are introduced. This is done by adopting a combination approach (Uschold and King, 1995)

6.1 Modeling concepts

The concepts identified in the reference glossary are categorised into either a business actor, business object or business process.

6.1.1 Business Actor

In the DoI Ontology the following actors have been identified; Lecturer, student, administrative staff and department head.

6.1.2 Business Object

In the DoI Ontology the following objects have been identified; Undergraduate course, Postgraduate course, Year of study, Degree classification, Place of study and Module.

6.1.3 Business Process

In the DoI Ontology 12 processes have been identified (Table 6).

Table 6 Business Processes of DoI Courses Ontology

Search for a course

Add a course

Remove a course

Search a module

Add a module

Remove a module

Search a classification

Add a classification

Remove a classification

Search a place of study

Add a place of study

Remove a place of study

6.1.4 Message

In the DoI Ontology there are no interactions between processes therefore this categorisation is not required for this ontology.

6.1.5 Attribute

In the DoI Ontology 22 attributes have been identified (Table 7).

Table 7 Attributes of DoI Courses Ontology

Place of Study: Name of organisation; First line of address; Second line of address; City; Postcode; Phone number

Course: Name of course; Duration; Places Available; UCAS Code; Sandwich course; No of options taken in Year 1; No of options taken in Year 2; No of options taken in Year 3; No of options taken in Year 4

Classification: Name

Module: Name of module

6.2 Modeling concept hierarchies & domain specific relationships

The following class diagram illustrates the relationships between classes in the ontology. It includes generalisation (i.e. "is a") and aggregation (i.e. "part of") relationships.

The classes DegreeCourse, UndergraduateCourse, PostgraduateCourse and Foundation are linked by generalisation, as UndergraduateCourse, PostgraduateCourse and Foundation are each a type of DegreeCourse. The class Module has an aggregation relationship with the class DegreeCourse, as modules are part of a degree course.

Figure 9 Class Diagram showing relationships in DOI Courses Ontology
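The class diagram itself is not reproduced here, but the mapping it implies can be made concrete. The sketch below is a minimal illustration, not the project's Protégé workflow, of how the generalisation and aggregation relationships described above could be expressed in OWL terms using the Python rdflib library; the namespace is an assumption, and the property name includes is taken from the potential relations in Table 3.

```python
# Minimal sketch (not the project's Protégé workflow): mapping the UML
# generalisation and aggregation relationships onto OWL terms with rdflib.
from rdflib import Graph, Namespace, RDF, RDFS, OWL

DOI = Namespace("http://www.example.org/doi-courses#")  # hypothetical namespace
g = Graph()
g.bind("doi", DOI)

# Generalisation ("is a"): UndergraduateCourse, PostgraduateCourse and Foundation
# are modelled as subclasses of DegreeCourse.
for cls in (DOI.DegreeCourse, DOI.UndergraduateCourse,
            DOI.PostgraduateCourse, DOI.Foundation, DOI.Module):
    g.add((cls, RDF.type, OWL.Class))
for sub in (DOI.UndergraduateCourse, DOI.PostgraduateCourse, DOI.Foundation):
    g.add((sub, RDFS.subClassOf, DOI.DegreeCourse))

# Aggregation ("part of"): OWL has no built-in part-of construct, so the
# relationship is expressed as an object property linking DegreeCourse to Module.
g.add((DOI.includes, RDF.type, OWL.ObjectProperty))
g.add((DOI.includes, RDFS.domain, DOI.DegreeCourse))
g.add((DOI.includes, RDFS.range, DOI.Module))
```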

7. Implementation Workflow

In the implementation workflow the ontology was encoded using the Protégé tool. Using Protégé I created the following classes: DegreeCourse, Foundation, UndergraduateCourse, PostgraduateCourse, Module, Place of Study and Classification, to create the asserted model of the ontology (Figure 10).

Figure 10 DoI Courses Ontology Diagram

Attributes were created in the Data Properties tab (Figure 11) and relationships in the Object Properties tab (Figure 12).

Figure 11 DoI Courses Ontology Attributes

Figure 12 DoI Courses Ontology Object Properties

Instances of degree courses and modules were created in the Individuals tab (Figure 13). Each of the instances was then assigned properties (Figure 14). A type was assigned in the Description window, which was one of the following classes: DegreeCourse, Foundation, UndergraduateCourse, PostgraduateCourse, Module, Place of Study or Classification. Attributes were assigned under Data Property Assertions and relationships under Object Property Assertions. This produced the DoI Courses Ontology in OWL (Figure 15).

Figure 13 DoI Courses Ontology Instances

Figure 14 DoI Courses Ontology Properties

Figure 15 An OWL fragment of DoI Courses Ontology
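The OWL fragment of Figure 15, produced by Protégé, is not reproduced here. Purely for illustration, the sketch below shows roughly how an equivalent fragment, one instance carrying a data property assertion and an object property assertion, could be produced programmatically with the Python rdflib library and serialised to RDF/XML, the syntax Protégé uses when saving OWL files. The instance, property names and values are hypothetical examples rather than data from the actual ontology.

```python
# Purely illustrative: the project's classes, properties and instances were built
# interactively in Protégé (Figures 10-14). All names and values below are
# hypothetical examples, not data taken from the actual DoI Courses Ontology.
from rdflib import Graph, Literal, Namespace, RDF, OWL, XSD

DOI = Namespace("http://www.example.org/doi-courses#")  # hypothetical namespace
g = Graph()
g.bind("doi", DOI)

# One data property (attribute) and one object property (relationship)
g.add((DOI.duration, RDF.type, OWL.DatatypeProperty))
g.add((DOI.includes, RDF.type, OWL.ObjectProperty))

# An instance of UndergraduateCourse with a data property assertion and an
# object property assertion, mirroring the Protégé Individuals tab
course = DOI.BScComputingScience
g.add((course, RDF.type, DOI.UndergraduateCourse))
g.add((course, DOI.duration, Literal(3, datatype=XSD.integer)))
g.add((course, DOI.includes, DOI.SoftwareDesignModule))

# Serialise to RDF/XML, the format Protégé uses for OWL files
print(g.serialize(format="xml"))
```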

8. Test Workflow

In this workflow the ontology is evaluated against the following characteristics:

Syntactic quality – measures the way the ontology is written

Semantic quality – ensures there are no contradictory concepts

Pragmatic quality – measures the usefulness of the ontology for users

Social quality – measures the number of ontologies that link to it and the number of times it is accessed

8.1 Syntactic Quality

The syntactic quality is checked in the implementation workflow and is guaranteed by the OWL coding produced in Protégé. The sketch below shows one simple check that could also be run outside Protégé.
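A minimal sketch, assuming the Python rdflib library and a hypothetical file name; it simply confirms that the exported OWL file parses as well-formed RDF/XML and is not part of the project itself.

```python
# Minimal syntax check, assuming rdflib and a hypothetical exported file name.
from rdflib import Graph

try:
    g = Graph()
    g.parse("doi_courses.owl", format="xml")  # raises if the RDF/XML is malformed
    print(f"Parsed successfully: {len(g)} triples")
except Exception as err:
    print(f"Syntax problem in the OWL file: {err}")
```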

8.2 Semantic Quality

The semantic quality of the ontology can be verified by checking its consistency, either by means of a reasoner or by knowledge experts; a sketch of a reasoner-based check follows the list below. Consistency requires the following:

Absence of contradiction

Correct use of modeling constructs
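In this project such a consistency check would be run from within Protégé by invoking a reasoner. As a hedged illustration only, the sketch below shows how the same check could be run from Python using the owlready2 package (an assumption, not a tool used in the project); sync_reasoner() invokes the bundled HermiT reasoner, which needs a Java runtime, and an inconsistent ontology raises an exception. The file path is hypothetical.

```python
# Sketch only: assumes the owlready2 package and a hypothetical file path.
# sync_reasoner() runs the bundled HermiT reasoner (a Java runtime is required);
# an inconsistent ontology causes an exception to be raised.
from owlready2 import get_ontology, sync_reasoner

onto = get_ontology("file:///path/to/doi_courses.owl").load()
try:
    with onto:
        sync_reasoner()  # classify the ontology and check consistency
    print("Ontology is consistent")
except Exception as err:
    print(f"Inconsistency detected: {err}")
```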

8.3 Pragmatic Quality

Pragmatic quality is related to the following features:

Fidelity – measures if the ontology fulfills the claims it makes

Relevance – verifies the implementation of the ontology's requirements. This can be achieved by testing whether it is possible to answer the competency questions identified in the requirements workflow

Completeness – verifies that the ontology satisfies the requirements and constraints of the problem. This can be done by checking if the goals have been fulfilled and the objectives defined in the requirements workflow have been reached.

8.4 Social Quality

The social quality of the ontology is checked only after publication by interaction with different teams.

9. Evaluation of Product

10. Evaluation of Project

11. Conclusion


