
COMPUTING CONVERSATIONS

Ian Horrocks: Standardizing OWL

Charles Severance

Ian Horrocks describes the early days of the Web Ontology Language (OWL) and the effort it took to standardize it and other languages.

Early research into artificial intelligence essentially boiled down to capturing "knowledge" and making it available to software in a format that would allow that software to behave more intelligently. For quite a long time, the syntax and format used to record or enter this knowledge into files was tied to the very programs that would read and use it. Research groups would typically develop tools and define a format to feed knowledge into them. The hope was to build reusable reasoning tools that could function across many domains of knowledge by simply loading different knowledge sets into those systems.

In the late 1990s and early 2000s, AI researchers realized that to maximize the usefulness of their work in an increasingly networked environment, they needed to standardize their ontology/knowledge representation languages and move the focus from "yet another syntax to represent knowledge" to software that could evolve and use the knowledge to exchange data between different applications.

I recently interviewed Oxford University's Ian Horrocks about how the various ontology efforts in the late 1990s were brought together, standardized, and normalized to produce the Web Ontology Language (OWL). You can view our full conversation at www.computer.org/computingconversations.

EARLY DAYS

Initially, those working in the AI field came from a very narrow area that was simply trying to capture and codify knowledge:

My background had been in medical informatics and developing what we now call "ontology languages" and reasoning systems, although we weren't necessarily calling them ontologies back then.

The first step toward developing a standard ontology language was a series of small meetings in which people simply shared their best ideas and learned about other approaches to knowledge representation:

In 1999, I went to an ontology-sharing day in Kaiserslautern and met people like Frank van Harmelen and Dieter Fensel who were also working in this area. I managed to convince them that description logic would be a good starting point. It's a type of logic whose rationale is to formalize what we now call ontology languages. Compared to frame languages, description logic had more expressive power and a very clear formal semantics because it was basically a fragment of first-order logic.
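To make that last point concrete, here is a small illustration; the axiom and its vocabulary are hypothetical, chosen for this column rather than taken from the interview. The description logic axiom

\[
\mathit{Parent} \sqsubseteq \exists\, \mathit{hasChild}.\mathit{Person}
\]

says that every parent has at least one child who is a person, and it corresponds directly to the first-order sentence

\[
\forall x\, \bigl( \mathit{Parent}(x) \rightarrow \exists y\, ( \mathit{hasChild}(x, y) \wedge \mathit{Person}(y) ) \bigr).
\]

That systematic translation into first-order logic is what gives description logic, and later OWL DL, its precise model-theoretic semantics.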

The small group developed a description logic-based syntax and language that they called OIL (Ontology Interchange Language) and published their approach ("OIL: An Ontology Infrastructure for the Semantic Web," IEEE Intelligent Systems, March/April 2001, pp. 38-45). As these small collaborative efforts were more broadly shared and published, more researchers became interested in the shared objective of growing the overall field through the use of standard ways to represent knowledge:

We met people in the US like Peter Patel-Schneider, Pat Hayes, Jim Hendler, and others working on the DAML [DARPA Agent Markup Language] program. We all decided that we were more or less trying to do the same thing, so why not pool our resources? Accordingly, we formed the rather grandly named "Joint US/EU ad hoc Agent Markup Language Committee" and produced DAML+OIL. It wasn't really much different from the OIL specification. The idea was to draw more stakeholders into the process so we could develop DAML+OIL into a standard. This was where the OWL effort and working group started.



GROWING UP AND OUT

With EU and US researchers finding that their approaches and goals were well aligned, they set about working with the W3C (World Wide Web Consortium) to produce a formal standard based on the DAML+OIL approach. They initially expected that with DAML+OIL well developed, it would be rather simple to wrap up the standardization effort in a short amount of time.

But as the group grew, more people learned about the work, became interested in the resulting specification, and wanted to participate:

A whole new bunch of people joined the party, in particular people from the Web community such as Dan Connolly and Sandro Hawke. They had a whole load of concerns of their own and things that were important to them, such as integration and compatibility with RDF [Resource Description Framework], general Web infrastructure, and existing standards.

The Web Ontology (WebOnt) Working Group within the W3C formed in November 2001, just two years after the ontology-sharing day back in 1999. But in that brief period, many new people had come to the table, and these different stakeholders had needs to be addressed in the resulting specification:

The process of evolving DAML+OIL into OWL took longer than we thought and involved a bigger change than we thought. It took a couple of years in the end, and probably 10 years off my life.

It was interesting, and I learned a lot as the language evolved. It was mainly the syntax and the relationship with RDF that changed. The underlying logic didn't change very much, nor did the semantics of OWL DL because it just flows from the logic.

Even though the WebOnt Working Group needed to address the needs of myriad stakeholders, by December 2003, the standard was complete and published:

There were huge arguments, but in the end, we managed to reach a compromise that everybody could sign off on, some with more grumbling than others.

With a standard language to represent knowledge, the field started to come together and produce tools and data that could be shared across research projects and commercial efforts:

We had a standard KR [knowledge representation] language that was supported by lots of different groups building infrastructure, and suddenly applications people started to feel more comfortable. It had always been an issue that you were using a system from the University of X and then the relevant research project ended or the research group got bored and went off and did something else, leaving you with no support. But with OWL, you could use a standard language, with tons of people developing an ever-growing array of infrastructure to support building, deploying, and maintaining ontologies.

As the field went from building one-off toolsets tied to specialized KR formats to an increasingly solid and reusable infrastructure, a wide range of researchers in computer science and other scientific fields could focus on solving the more interesting problems. Today, OWL and RDF are integrated into so many toolsets that users barely notice them:

What amazes me today with both OWL and RDF is that you bump into people all the time who are building applications with those tools and infrastructure that you don't know. The stuff just works now, so they can download it off the Web and build it into applications. Users are pretty happy.
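That plug-and-play experience is easy to sketch in a few lines of code. The snippet below is a minimal illustration, assuming the Python rdflib library is installed; the tiny ontology and the example.org names are invented for the example rather than drawn from the interview:

```python
# Minimal sketch of "download it and build it into an application":
# parse a tiny OWL/RDF snippet with rdflib and query it with SPARQL.
# The example.org vocabulary is invented purely for illustration.
from rdflib import Graph

ONTOLOGY = """
@prefix :     <http://example.org/family#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

:Person a owl:Class .
:Parent a owl:Class ; rdfs:subClassOf :Person .

:alice a :Parent .
"""

g = Graph()
g.parse(data=ONTOLOGY, format="turtle")

# Ask which individuals are declared to be Parents.
QUERY = """
PREFIX : <http://example.org/family#>
SELECT ?who WHERE { ?who a :Parent . }
"""
for row in g.query(QUERY):
    print(row.who)  # -> http://example.org/family#alice
```

An OWL reasoner (for example, the owlrl package) could be layered on top to also infer that :alice is a :Person via the subclass axiom, but even the plain parse-and-query loop above shows how little setup the standardized stack now requires.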

The efforts to standardize RDF and OWL have produced unintended benefits for our field. Those who use these technologies daily typically have no idea about the innovators who took the time to reach across research projects to come up with a unified and general-purpose approach that was widely usable:

As a community, we haven’t been

very good about embracing it as a

great success and saying, “We’ve done

a really good job here—we built stuff

that people are using and it works.”

It works off the shelf these days—the

tools are pretty robust, and we should

be proud of that as an achievement of

the community.

For more information about OWL, please see www.w3.org/2001/sw/wiki/OWL.

Charles Severance, Computing Conversations column editor and Computer’s multimedia editor, is a clinical associate professor and teaches in the School of Information at the University of Michigan. Follow him on Twitter @drchuck or contact him at [email protected].

Selected CS articles and columns are available for free at http://ComputingNow.computer.org.