Ontology-Based Multi-Layered Robot Knowledge Framework (OMRKF) for Robot Intelligence

Il Hong Suh, Senior Member, IEEE, Gi Hyun Lim, Wonil Hwang, Hyowon Suh, Member, IEEE, Jung-Hwa Choi, Young-Tack Park, Member, IEEE

Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, Oct 29 - Nov 2, 2007

Abstract—An ontology-based multi-layered robot knowledge framework (OMRKF) is proposed to implement robot intelligence in a robot environment. OMRKF consists of four classes of knowledge (KClass), axioms, and two types of rules. The four KClasses, covering perception, model, activity and context, are organized in a hierarchy of three knowledge levels (KLevel) and three ontology layers (OLayer). The axioms specify the semantics of concepts and the relational constraints between ontological elements in each OLayer. One type of rule captures relationships between concepts in the same KClass but at different KLevels; these rules are used for uni-directional reasoning. The other type of rule captures associations between concepts at different KLevels and in different KClasses, and is used for bi-directional reasoning. These features enable a robot to integrate its knowledge, from sensor data and primitive behaviors up to symbolic data and contextual information, regardless of the class of knowledge. To show the validity of OMRKF, several experimental results are illustrated in which queries are answered by uni-directional as well as bi-directional rules, even with partial and uncertain information.

I. INTRODUCTION

The household service robot has to understand the user's requirements and the environment in which it operates, and then carry out its missions with its primitive behaviors. To do so, a robot needs many kinds of data, from low-level sensor data to high-level symbolic data. High-level perceptual tasks such as context awareness, object recognition and navigation are essential for intelligent robots. A robot must also combine its atomic behaviors, such as goto, turn and avoid-obstacle, to complete a high-level service such as a delivery service. In conventional robot systems, these tasks are developed independently, each with its own algorithms and data structures, which makes it difficult to share or reuse their knowledge. Thus, robot knowledge has to be developed so that it is sharable and can grow whenever and wherever necessary.

Manuscript received August 11, 2007. This work was supported by a grant of the Second Stage of BK21 Research Division for Advanced IT Education Program on Industrial Demand, and by the Intelligent Robotics Development Program, one of the 21st Century Frontier R&D Programs funded by the Korea Ministry of Commerce, Industry and Energy.

I. H. Suh is with the College of Information and Communications, Hanyang University, Korea (phone: 82-2-2220-0392; fax: 82-2-2281-3833; e-mail: [email protected]).

G. H. Lim is with the College of Information and Communications, Hanyang University, Korea (e-mail: [email protected]).

W. Hwang is with the Department of Industrial Engineering, KAIST, Daejeon, Korea (e-mail: [email protected]).

H. Suh is with the Department of Industrial Engineering, KAIST, Daejeon, Korea (e-mail: [email protected]).

J. H. Choi is with the School of Computing, Soongsil University, Seoul, Korea (e-mail: [email protected]).

Y. T. Park is with the School of Computing, Soongsil University, Seoul, Korea (e-mail: [email protected]).

There are many challenges for robots. It is difficult for robots to perform complex tasks with only their primitive behaviors. For example, it is not easy for robots to navigate while avoiding obstacles and finding objects that are partially observable or unobservable. In particular, for the grounding of sensory signals to their corresponding symbolic representations [2], robust object recognition methods [15] are necessary, combining many local features [12] with several other visual features such as shape, color and texture in a probabilistic and/or ontological approach [8].

Ontology [1], [6] has been defined as "something that is similar to a dictionary or glossary, but with greater detail and structure that enables computers to process its content [5]." In this paper, we propose a robot knowledge framework that includes both robot-centered and human-centered ontologies. A robot-centered ontology, sharable and able to be populated among different robots, needs to be developed to enable robots to process common concepts, including facts, knowledge and functions, using their own sensors and behaviors. The robot-centered ontology should be designed from the perspective of robots, because robots perceive the world differently than humans do [9]-[11]. On the other hand, in many applications robots need to interact with humans; for better communication with humans, a human-centered ontology is also required in our system.

In this work, Horn clauses (HC) are used to represent our robot knowledge framework, called the ontology-based multi-layered robot knowledge framework (OMRKF), which expresses various classes of knowledge through uni-directional and/or bi-directional reasoning. Horn clauses enable robots to use expressive inference mechanisms that find hidden knowledge with well-defined rules [3]. Thus, it is possible to clarify missing or uncertain data that may be produced by noisy robot sensors. In our inference engine, evidence within the same knowledge class is found first by applying simple uni-directional rules; if the evidence is insufficient, bi-directional rules are applied to access knowledge in different knowledge classes.

In Section II, the details of OMRKF and OMRKF-based knowledge reasoning mechanisms are discussed. Practical examples showing the validity of OMRKF are provided in Section III. Finally, concluding remarks are given in Section IV.

II. ONTOLOGY-BASED MULTI-LAYERED ROBOT KNOWLEDGE FRAMEWORK (OMRKF)

A cognitive service robot perceives objects with its sensors, models the world in which it exists, plans sequences of tasks, performs tasks with its own behaviors, and perceives again [13]. To comply with such cognitive capabilities, OMRKF is designed to be composed of knowledge boards (KBoards) and rules, where the KBoards comprise four classes of knowledge (KClass): perception, model, activity and context, as shown in Fig. 1. Perception, model and activity are the basic knowledge that a mobile robot has to access for localization and navigation, while context describes characteristic environmental situations around the robot and can provide clues for its action selection mechanism.

Each knowledge class has three knowledge levels (KLevel): high-level (P3, M3, C3 and A3), middle-level (P2, M2, C2 and A2) and low-level knowledge (P1, M1, C1 and A1). The low-level knowledge of the perception and activity classes (numerical descriptor (P1) and behavior (A1)) is robot-specific knowledge tied to the robot's own sensorimotor capabilities. The middle-level knowledge (visual feature (P2) and task (A2)) is robot-common knowledge, an abstraction level that hides the details of particular low-level sensor data and motor commands. The rest, including the model class, the context class and the high-level knowledge of the perception and activity classes, is knowledge common to robots as well as humans. Each knowledge level has three ontology layers (OLayer): the meta ontology layer for generic knowledge, the ontology schema layer for domain knowledge and the ontology instance layer for knowledge instances. The meta ontology serves as a template for the ontology schema layer, and the ontology schema layer is instantiated into the instance layer. We also define axioms and rules that represent relations between knowledge levels and/or ontology layers: axioms specify the semantics of concepts and relational constraints at each OLayer, while rules specify the relationships across different OLayers, KLevels or KClasses.
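The 4 × 3 × 3 organization of KClasses, KLevels and OLayers can be pictured as a plain data structure. The following is only an illustrative Python sketch under our own naming assumptions (the paper itself represents OMRKF in Horn clauses), not the authors' implementation:

```python
# Illustrative sketch of the OMRKF knowledge board (KBoard):
# 4 KClasses x 3 KLevels x 3 OLayers.  Level names follow Fig. 1;
# the dict-of-sets representation is our own assumption.
KCLASSES = {
    "perception": ["P1", "P2", "P3"],  # numerical descriptor, visual feature, visual concept
    "model":      ["M1", "M2", "M3"],  # object feature, object, space
    "context":    ["C1", "C2", "C3"],  # spatial, temporal, high-level context
    "activity":   ["A1", "A2", "A3"],  # behavior, task, service
}
OLAYERS = ["meta", "schema", "instance"]  # meta ontology, ontology schema, ontology instance

def knowledge_board():
    """Build an empty KBoard: one fact store per (KClass, KLevel, OLayer) cell."""
    return {(kclass, klevel, olayer): set()
            for kclass, klevels in KCLASSES.items()
            for klevel in klevels
            for olayer in OLAYERS}

board = knowledge_board()
print(len(board))  # 4 classes x 3 levels x 3 layers = 36 cells
```

A cell such as `("perception", "P2", "schema")` would then hold the domain-level visual feature knowledge described below.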

We define formal representations of OMRKF in Appendix I. The notation and semantics of our framework are based on KAON [7] and on previous work on ontology-based context understanding [9].

Fig. 1. Example of OMRKF. OMRKF has four classes of knowledge: perception, model, context and activity. Each knowledge class has three knowledge levels: high, middle and low. Each knowledge level has three ontology layers: the meta ontology layer (scarlet dotted circle), the ontology schema layer (blue circle) and the ontology instance layer (yellow dashed circle). There are also axioms and rules. Blue dashed lines link ontology layers to ontology instance layers; normal arrows represent the concept hierarchy in description logic, dashed arrows are uni-directional rules, and dash-dotted arrows are bi-directional rules.


A. KClass - Perception

The perception KClass has three KLevels: P1, P2 and P3. P1 is the numerical descriptor level, which includes the numerical descriptors of image processing algorithms such as SIFT [12], hue and Gabor filters; these are produced by the robot's own sensors and data processing algorithms. P2 is the visual feature level, which includes visual features such as SIFT, hue and texture features extracted from the numerical descriptors in P1. P3 is the visual concept level, which is anchored to the visual features in P2 and the object features in the model class. Each visual concept has an algorithm-name property, such as hue extractor or SIFT extractor, which enables the vision module to call the proper algorithm for each visual concept.
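The algorithm-name property amounts to a small dispatch table. The following Python sketch (the extractor bodies are dummies and the names are our assumptions) shows how a vision module could call the proper algorithm for a visual concept:

```python
# Sketch of the algorithm-name property of visual concepts (P3):
# each concept names the extractor that computes its visual feature (P2)
# from numerical descriptors (P1).  The extractor bodies are stand-ins.
EXTRACTORS = {
    "color":   lambda image: "hue_value",         # hue extractor
    "texture": lambda image: "gabor_response",    # Gabor filter
    "sift":    lambda image: "sift_descriptors",  # SIFT extractor
}

def extract(visual_concept, image):
    """Look up a visual concept's extractor property and call it."""
    return EXTRACTORS[visual_concept](image)

print(extract("sift", None))  # "sift_descriptors"
```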

The bottom left of Fig. 1 shows an example of the perception KClass. When object recognition is requested, the acquired image is first segmented roughly into n-by-m blocks according to the points of inflection of the edge histogram. For each block, values of numerical descriptors and visual features are obtained by the recommended visual feature extractors. Suppose a segmented block has been recommended to be processed with the visual concepts color and SIFT, and the corresponding property values obtained are greenish blue and some matched SIFT descriptors. In this case, the computed numerical descriptor values are "0.09" for hue and two 128-byte SIFT descriptors.

B. KClass - Model

The model KClass has three KLevels: M1, M2 and M3. M1 is the object feature level, which includes parts of objects and object visual features such as color, texture and SIFT; this level deliberately overlaps with the visual concept level of the perception class in order to anchor the perception class to the model class. M2 is the object level, which includes object names and their functionality. M3 is the space level, which includes metric, topological and semantic maps. The space level also includes Voronoi nodes that are linked to the objects observed at each node; these nodes are used to plan the sequence of behaviors with which a robot navigates the space.

The top left of Fig. 1 shows an example of the model KClass. In the example, the cup instance has color and SIFT object features, and these object features are matched with visual concept instances of the perception class. For robust symbol grounding [4], [15] from numerical descriptors in the perception class to the object level in the model class, we use not only general object recognition algorithms based on many local features and/or color features, but also high-level knowledge, including space information such as object and robot locations.

C. KClass - Context

The context KClass has three KLevels: C1, C2 and C3. C1 is the spatial context level, with spatial concepts such as on, in, left and right, which are inferred from the instances of the object and space levels in the model class. C2 is the temporal context level, with the temporal concepts defined by Allen [16]; temporal contexts such as before, after, met-by, overlaps and meets can be inferred from the instances of the model class and the spatial context level. C3 is the high-level context; for example, in danger and crowd can be given in C3 for navigation. Context is not a list of objects and their locations; rather, it captures abstract, characteristic situations that can be represented by relationships between objects and object properties. Low-level spatial and temporal contexts are inferred, respectively, from objects and their locations, and from the time intervals over which spatial contexts are instantiated.
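A C1 spatial context such as on can be derived from the geometry of model-class instances. The sketch below is our own simplification: the coordinates, fields and thresholds are invented for illustration and are not taken from the paper:

```python
# Deriving the spatial context "on" (C1) from model-class instances:
# a is "on" b if it is horizontally aligned with b and rests at b's
# top surface.  Geometry and thresholds are illustrative assumptions.
objects = {
    "cup1":   {"x": 1.0, "z": 0.75, "height": 0.10},
    "table1": {"x": 1.0, "z": 0.00, "height": 0.75},
}

def on(a, b, objs, eps=0.05):
    oa, ob = objs[a], objs[b]
    aligned = abs(oa["x"] - ob["x"]) < 0.5
    resting = abs(oa["z"] - (ob["z"] + ob["height"])) < eps
    return aligned and resting

print(on("cup1", "table1", objects))  # True: the cup rests on the table top
```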

The top right of Fig. 1 shows an example of the context KClass. In the example, the spatial contexts on and left are instantiated from the instances of the model class. These spatial contexts have two properties, subjective and objective, whose ranges are given at the object level of the model KClass. For example, the on context has cup as its subjective property and table as its objective property. The high-level context crowd can then be inferred from spatial and/or temporal contexts.

D. KClass - Activity

The activity KClass has three KLevels: A1, A2 and A3. A1 is the behavior level, which includes the atomic functions of a robot, such as goto, turn and extractSIFT. A2 is the task level, in which a task is described as a combination of behaviors; a task is a short-term sequence of behaviors, such as gotoSpace, localization and CFindObject (the task of finding an object in an image). A3 is the service level, which describes long-term goals such as delivery service, navigation, find object and generate context.

Each ontology schema layer of the activity class is instantiated by a planner. When the planner is requested to plan, it first gets all the ontology instances of the space level and the instances of the space-relevant objects. Second, it creates the instances of the task level and their sequences, which are properties of each task-level ontology instance. Last, it creates the instances of the behavior level and their sequences for each task instance. In this paper, we use an abductive event calculus planner for planning and for instantiating the activity KClass [14].

The bottom right of Fig. 1 shows an example of the activity KClass. For the CFindObject task, the extract SIFT and extract hue behaviors are performed; this task is one part of a delivery service whose target-object property is cup.


E. Axioms and Rules

OMRKF includes axioms and rules for inferring useful facts from the ontology schemas and ontology instances occurring at the KClasses and KLevels.

Axioms are generally taken for granted as valid without proof. In OMRKF, axioms are defined in the ontology schema layer and are used to check whether instances have been generated consistently. In Fig. 1, for example, there are axioms stating the inverse relations between left and right and between on and under.
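Such an inverse-relation axiom can directly serve as a consistency check on generated instances. A hedged sketch (the triple encoding and helper are our own):

```python
# Consistency check based on inverse-relation axioms: every asserted
# triple (a, rel, b) must be mirrored by (b, inverse(rel), a).
INVERSES = {"left": "right", "right": "left", "on": "under", "under": "on"}

def consistent(facts):
    return all((b, INVERSES[r], a) in facts
               for (a, r, b) in facts if r in INVERSES)

facts = {("cup1", "on", "table1"), ("table1", "under", "cup1")}
print(consistent(facts))  # True: the "on" fact has its "under" inverse
```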

The general form of a rule is "IF-THEN." Rules are necessary to infer concepts or relations. Here, we define rules over concepts and relations between different ontology layers, different knowledge levels or different knowledge classes. There are two types of rules: rules (RU) for uni-directional reasoning within the same KClass, and rules (RB) for bi-directional reasoning between different KClasses.

F. Query-based Knowledge Reasoning

Query-based reasoning attempts to find evidence for a goal. First, a query-based goal is generated. The goal can be matched against the "THEN" part of a rule when all the evidence needed for the rule's "IF" part is available. If it is not matched, the goal is divided into sub-goals, and further evidence is searched for recursively.

By applying the two types of rules one after another, the search space for inference can be reduced. When a query is posed to a knowledge class, the uni-directional rules of that class are applied first, inferring answers from evidence confined to the class at which the query was posed. If the query cannot be answered by these uni-directional rules, bi-directional rules are applied to find the answer across different knowledge classes. In that case, instances generated by the previously applied uni-directional rules can also be employed as evidence for the bi-directional rules. For example, an object location in the model class can be used not only as a priori knowledge for object recognition but also as evidence for space classification; likewise, contextual information inferred from the model class can simultaneously be used for object recognition in the perception class.
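The two-stage strategy (uni-directional rules first, then bi-directional ones, with derived instances reused as evidence) can be sketched as a small backward chainer. Everything below, including the string encoding of goals, is a simplified stand-in and not the OMRKF engine itself:

```python
# Backward chaining with the two-stage rule strategy: try the
# uni-directional rules first, then the bi-directional ones; derived
# facts are cached so they can serve as evidence for later rules.
def prove(goal, facts, uni_rules, bi_rules):
    for rules in (uni_rules, bi_rules):        # stage 1, then stage 2
        if goal in facts:
            return True
        for head, body in rules:
            if head == goal and all(prove(sub, facts, uni_rules, bi_rules)
                                    for sub in body):
                facts.add(head)                # cache derived evidence
                return True
    return False

facts = {"candidate(cup)", "in(kitchen)", "on(cup, table)"}
uni = [("object(cup)", ["hue(greenish_blue)", "sift_match(5)"])]   # fails here
bi  = [("object(cup)", ["candidate(cup)", "in(kitchen)", "on(cup, table)"])]
print(prove("object(cup)", facts, uni, bi))  # True, via the bi-directional rule
```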

Fig. 3. System configuration of our experiments

Fig. 2. Experimental environment, GUI and 3D eye view: (a) GUI of monitor program; (b) 3D eye view of external camera.


Fig. 4. Snapshots to show experimental results


III. EXPERIMENT

A. Overview

Our proposed framework for robot knowledge is verified through an experiment on a cup delivery service. Fig. 2 shows a screenshot of the monitor program and the 3D eye view from an external video camera. The GUI of the monitor program has four frames: the upper-left frame shows the metric map of the world model, including robot location, object locations and nodes; the lower-left frame shows the input and output images of the vision module; the upper-right frame shows OMRKF instances and log messages; and the lower-right frame contains the command box and control panel.

Fig. 3 shows the system configuration. In our experiments, we use the "Infortainment Robot," one of the Korean pilot platforms for service robots. Since our robot has no arm, the delivery service is substituted by finding an object and generating context about that object. The robot has a stereo camera from which the vision module obtains stereo images, and the vision module calls several vision algorithms such as hue, SIFT and Gabor filters. The reactive manager sends control commands to the motors to perform primitive behaviors and acquires numerical sensor data from sensors including the encoder and sonar. User commands are acquired by the input manager, which displays messages and logs in text boxes. Arbitration among modules, and the selection of which module runs, is handled by the task manager. Plans for a requested service are made by the event calculus planner. OMRKF is the robot knowledge platform, represented in Horn clauses, and the knowledge manager receives requests and issues proper queries to OMRKF through ontology APIs, including ontology create, retrieve, and update/delete APIs, as summarized in Appendix II.
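The create/retrieve/update-delete ontology APIs mentioned above (listed in Appendix II of the paper) might look roughly like the following triple-store sketch; the class and method names here are our assumptions, not the actual API:

```python
# Hypothetical shape of the knowledge manager's ontology APIs:
# create, retrieve (with wildcard fields) and delete over triples.
class OntologyStore:
    def __init__(self):
        self.triples = set()

    def create(self, subj, pred, obj):
        self.triples.add((subj, pred, obj))

    def retrieve(self, subj=None, pred=None, obj=None):
        """Return the triples matching every non-None field."""
        pattern = (subj, pred, obj)
        return [t for t in self.triples
                if all(p is None or p == v for p, v in zip(pattern, t))]

    def delete(self, subj, pred, obj):
        self.triples.discard((subj, pred, obj))

kb = OntologyStore()
kb.create("cup_001", "locatedIn", "kitchen_001")
print(kb.retrieve(pred="locatedIn"))  # [('cup_001', 'locatedIn', 'kitchen_001')]
```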

Fig. 4 includes snapshots of the experiments. In Fig. 4(a), the robot takes an order, "Find cup," at the location in the living room marked in Fig. 2(a). The robot is assumed to have a map of the experimental space as an instance of the ontology schema of the space level (M3), and the ontology instances of the objects (M2) in the space are also assumed to have been previously instantiated, as shown in Fig. 5. There are kitchen_001 and livingroom_001 at the space level, and cup_001 and Table_001 are located in kitchen_001. To find the cup, it is inferred that the robot has to move from the living room to the kitchen via node_006. Our abductive event calculus planner expands the delivery service (A3) into a sequence of middle-level activity instances (A2): (1) gotoSpace(kitchen), (2) cFindObject(cup), (3) generateContext(cup), (4) gotoSpace(livingroom), and (5) report(context). Each task (instance of A2) is in turn planned as a sequence of primitive behaviors (A1). Fig. 6 shows the activity ontology and the ontology instances for the delivery service: the high-level service is composed of gotoSpace, localization and cFindObject, and those tasks are composed of robot primitive behaviors such as goto, turn, extractSIFT and extractHue. In Fig. 4(b), the robot moves to the kitchen, the place inferred from the ontology instances of the perception KClass; here, a perception instance and its property are segment3: cup and has_obj(kitchen, cup), respectively. While the robot navigates, ontology instances of the object level (M2) in the model KClass can be used as landmarks: if the robot moves to node_006, refrigerator_001, which is linked to node_006, can serve as a landmark for localization, so the robot tries to find refrigerator_001 near node_006. Ontology instances of the object level (M2) and space level (M3) can also serve as evidence in bi-directional reasoning for object recognition.

In Fig. 4(c), when the robot tries to find the cup in the current visual scene, both types of reasoning are used; the details of the inference for object recognition follow in Section III.B. In Fig. 4(d), the robot generates spatial context near the cup so that it can report the list of objects observed by the vision module. In Fig. 4(e), the robot comes back to the

Fig. 5. Snapshot representing the model ontology and ontology instances obtained from our experiments (spaces: kitchen, living room; objects: sofa, table, cup, etc.).

Fig. 6. Snapshot representing the activity ontology and the ontology instance of our experimental cup delivery service (target object: Cup_001).


initial location and reports the spatial context near the "cup." Note that all object recognition and localization tasks are performed as background tasks.

B. Examples of Knowledge Reasoning

Objects can be recognized by using uni-directional and/or bi-directional reasoning rules. When the Find cup task is requested, the vision module first asks the knowledge manager for the visual concept of cup and its extractor algorithms from the OMRKF instances. A greenish blue hue and more than five identical SIFT features (P2) are reported as the required properties of the visual concept (P3) of cup. The hue extractor and SIFT extractor are then executed to identify these visual features of the cup in the candidate segment blocks of the current visual scene. If those visual features are identified in a segment, the segment is registered as cup by the cup-match rule, one of the uni-directional reasoning rules, as shown in Fig. 7.

If those visual features are only partially identified, the vision module asks the knowledge manager for more relevant rules. In the lower part of Fig. 8, there is a bi-directional rule (RB) whose conditions are given in terms of space (M3) and context (C1): "a cup is usually located on a table in a kitchen." Since the visual features of the current scene are a blue hue value and four SIFT matches, the current scene is considered to contain not cup but candidate(cup). The knowledge manager therefore invokes bi-directional rules to infer table as the object most relevant to cup, and requests the vision module to find a table in the current visual scene; table recognition then proceeds by uni-directional rules, in a similar way to identifying the cup. Location evidence such as kitchen and on the table then lets the system recognize the cup through the bi-directional rules. This whole process shows that high-level knowledge helps to enhance low-level image processing for object recognition.
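The fallback just described can be condensed into a toy decision procedure. The thresholds and field names below are invented for illustration and do not reproduce the actual rules of Table I:

```python
# Toy version of the cup-recognition fallback: full visual evidence
# fires the uni-directional rule; partial visual evidence plus location
# evidence (kitchen, on a table) fires the bi-directional rule.
def recognize(segment):
    full_visual = (segment["hue"] == "greenish_blue"
                   and segment["sift_matches"] >= 5)
    if full_visual:
        return "cup"                                   # uni-directional rule
    partial = segment["sift_matches"] >= 1
    if partial and segment["space"] == "kitchen" and segment["on"] == "table":
        return "cup"                                   # bi-directional rule
    return "candidate(cup)" if partial else None

seg = {"hue": "blue", "sift_matches": 4, "space": "kitchen", "on": "table"}
print(recognize(seg))  # "cup": location evidence resolves the partial match
```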

In Fig. 9, if three recognized objects, wall3, table and cup, are present, then kitchen may be inferred by the uni-directional rule (RU) meaning "if at least three kitchen-relevant objects are recognized, then the space is kitchen," given the space-relevant objects in the ontology schema layer, such as has_obj(kitchen, wall), has_obj(kitchen, table) and has_obj(kitchen, cup).
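That counting rule can be sketched directly. The has_obj schema entries follow the text; the threshold parameter and helper function are our simplification (instances such as wall3 are assumed to have been mapped to their type, wall):

```python
# Space identification: if at least `threshold` recognized object types
# are relevant to a space (per has_obj schema facts), infer that space.
HAS_OBJ = {"kitchen": {"wall", "table", "cup", "refrigerator"}}

def infer_space(recognized, schema=HAS_OBJ, threshold=3):
    for space, relevant in schema.items():
        if len(set(recognized) & relevant) >= threshold:
            return space
    return None

print(infer_space(["wall", "table", "cup"]))  # "kitchen"
```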

Table I describes the Prolog-based representations of the ontology schema, ontology instances and rules used in our experiments for uni-directional and/or bi-directional reasoning.

Fig. 7. Uni-directional reasoning for object recognition. The three left parts are sequence diagrams and the three right parts are OMRKF, which includes the operational ontology instances, ontology layers and rules used for the reasoning experiments. The detailed ontology and rules are described in Table I. Dark arrows indicate the flow of the operational sequence and blank arrows indicate queries and answers to/from OMRKF.

Fig. 9. Reasoning process of space identification

Fig. 8. Bi-directional reasoning for object recognition. (a) "cup" is not exactly matched against the visual features of the "cup" instance by uni-directional rules in a visual image. (b) "cup" is inferred by using a bi-directional rule together with the ontology and ontology instances of other knowledge levels.



From these experiments, it can be observed that OMRKF lets a robot carry out its mission in spite of uncertain and partial information.

IV. CONCLUSION
We proposed an ontology-based multi-layered robot knowledge framework (OMRKF) for household service robots, in which the ontology is sharable and can grow as new knowledge is populated. OMRKF enables a robot to robustly recognize objects and to navigate successfully while inferring localization-related knowledge, in spite of hidden and partial data caused by noisy sensors. Moreover, OMRKF lets robots answer queries by applying simple uni-directional rules within the same knowledge class and bi-directional rules across different knowledge classes.


TABLE I
HORN CLAUSE LOGIC-BASED REPRESENTATION OF EXEMPLAR ONTOLOGY AND RULES FOR OUR OMRKF

(a) Uni-directional reasoning for object recognition (matched case)

Instance layer (M1 and M2):
  type(cup_001, cup).
  has_vc(cup_001, hue, greenish_blue).
  has_vc(cup_001, SIFT, 5).

Rule RU::visual concept:
  IF visual_feature_instance X has extractor D
  AND X has numerical_descriptor V
  AND visual_concept C has range from L to H
  AND L < V < H
  THEN X has visual_concept C.

Ontology layer (P1 and P2):
  segment3(hue_val, 0.09).
  segment3(SIFT_match, 5).
  segment3(hue, greenish_blue).
  segment3(SIFT, SIFT_match).

Inferred (M2):
  type(segment3, cup).

Rule RU::object match:
  IF object_instance U has visual concept instances V1 ... VN
  AND has_vc(Obj_X, V1) ... AND has_vc(Obj_X, VN)
  AND all Obj_X-relevant visual concepts exist in V1 ... VN
  THEN type(U, Obj_X).

Ontology layer (P2 and P3):
  has_extractor(hue, hue_extract).
  has_extractor(SIFT, sift_extract).
  has_extractor(shape, shape_extract).
  range(hue, greenish_blue, 0.0, 0.1).
  range(SIFT_match, cup, 5, 1000).

(b) Uni-directional reasoning for object recognition (not matched case)

Instance layer (P1):
  segment4(hue_val, 0.12).
  segment4(SIFT_match, 4).

Rule RU::visual concept: (as in (a))

Inferred (P2):
  segment4(hue, blue).
  segment4(SIFT, SIFT_candidate).

Ontology layer (P1 and P2):
  range(hue, blue, 0.1, 0.2).
  range(SIFT_candidate, 3, 4).

Inferred (M2):
  type(segment4, candidate(cup)).

Rule RU::object candidate:
  IF object_instance U has visual concept instances V1 ... VN
  AND has_vc(Obj_X, V1) ... AND has_vc(Obj_X, VN)
  AND all Obj_X-relevant visual concepts exist as candidates in V1 ... VN
  THEN candidate(U, Obj_X).

Ontology layer (M1):
  has_vc(cup, hue, blue).
  has_vc(cup, SIFT, 4).

(c) Bi-directional reasoning for object recognition (matched case)

Instance layer (M2):
  type(segment4, candidate(cup)).
  type(segment5, table_001).

Rule RB::in:
  IF X is an object AND Y is a space AND the location of X is included in Y
  THEN X is in Y.

Inferred (M3):
  type(space, kitchen_001).

Context (C1):
  on(cup_001, table_001).

Inferred (M2):
  type(segment4, cup).

Rule RB::object recognition:
  IF candidate(U, Obj_X) AND has_obj(Space, Obj_X)
  AND on(Obj_X, Obj_2) AND has_obj(Space, Obj_2)
  THEN type(U, Obj_X).

Ontology layer (M3):
  has_obj(kitchen, cup).
  has_obj(kitchen, table).

(d) Autonomous reasoning for space classification

Instance layer (M2):
  type(segment2, wall3).
  type(segment4, cup_001).
  type(segment5, table1_001).

Inferred (M3):
  type(space, kitchen).

Rule RU::space recognition:
  IF space_instance S AND more than 3 object instances satisfy has_obj(Space, Obj)
  THEN type(S, Space).
  IF space_instance S AND more than 2 object instances satisfy has_obj(Space, Obj)
  THEN candidate(S, Space).

Ontology layer (M3):
  has_obj(kitchen, table).
  has_obj(kitchen, chair).
  has_obj(kitchen, refrigerator).
  has_obj(kitchen, cup).
  has_obj(kitchen, wall).




APPENDIX II
STANDARD ONTOLOGY API

Ontology creation API:
  createOntologyInstance(+ClassName, -InstanceId)
  setPropertyValue(+InstanceId, +PropertyName, +Value)

Ontology retrieval API:
  getOntologyInstances(+ClassName, -InstanceList)
  getProperties(+[ClassName|InstanceId], -PropertyList)
  getPropertyValues(+InstanceId, +PropertyName, -Value)

Ontology delete/update API:
  retractOntologyInstance(+InstanceId)
  retractPropertyValue(+InstanceId, +PropertyName, +PropertyValue)
  update(+InstanceId, +PropertyName, +PropertyValue)
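These calls follow the Prolog convention where + marks input arguments and - marks output arguments. The following is a hypothetical in-memory mock in Python showing how the creation, retrieval and deletion calls might behave; the signatures mirror the table above, but the storage scheme and identifier format are illustrative assumptions only:

```python
# Hypothetical in-memory mock of the standard ontology API from Appendix II.
import itertools

_instances = {}            # InstanceId -> ClassName
_properties = {}           # InstanceId -> {PropertyName: [Value, ...]}
_counter = itertools.count(1)

def createOntologyInstance(class_name):
    """createOntologyInstance(+ClassName, -InstanceId)"""
    instance_id = f"{class_name}_{next(_counter):03d}"
    _instances[instance_id] = class_name
    _properties[instance_id] = {}
    return instance_id

def setPropertyValue(instance_id, prop, value):
    """setPropertyValue(+InstanceId, +PropertyName, +Value)"""
    _properties[instance_id].setdefault(prop, []).append(value)

def getOntologyInstances(class_name):
    """getOntologyInstances(+ClassName, -InstanceList)"""
    return [i for i, c in _instances.items() if c == class_name]

def getPropertyValues(instance_id, prop):
    """getPropertyValues(+InstanceId, +PropertyName, -Value)"""
    return _properties[instance_id].get(prop, [])

def retractOntologyInstance(instance_id):
    """retractOntologyInstance(+InstanceId)"""
    _instances.pop(instance_id, None)
    _properties.pop(instance_id, None)

# populate an instance the way the vision module might register a cup
cup = createOntologyInstance("cup")           # e.g. "cup_001"
setPropertyValue(cup, "hue", "greenish_blue")
print(getOntologyInstances("cup"), getPropertyValues(cup, "hue"))
```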

APPENDIX I
FORMAL MODEL FOR OMRKF

Definition 1. Ontology-based Multi-layered Robot Knowledge Framework:
OMRKF := (KBoards, R0)
such that KBoards is a set of knowledge boards and R0 is a finite set of rules.

Definition 2. The set of knowledge boards of OMRKF consists of 4 knowledge classes:
KBoards := {KClassi | 1 ≤ i ≤ 4}
We define a knowledge class KClassi for i ∈ N (the set of natural numbers), 1 ≤ i ≤ 4. KClass1 is the class of knowledge for perception (P), KClass2 is the class of knowledge for the model (M), KClass3 is the class of knowledge for activity (A), and KClass4 is the class of knowledge for context (C).

Definition 3. Each knowledge class of OMRKF consists of 3 knowledge levels:
KClassi := {KLevelij | 1 ≤ i ≤ 4, 1 ≤ j ≤ 3}
We define a knowledge level KLevelij for i, j ∈ N, 1 ≤ i ≤ 4, 1 ≤ j ≤ 3. KLeveli1 is the knowledge level for low-level knowledge (P1, M1, A1, C1), KLeveli2 for middle-level knowledge (P2, M2, A2, C2), and KLeveli3 for high-level knowledge (P3, M3, A3, C3).

Definition 4. Each knowledge level of OMRKF consists of 3 ontology layers:
KLevelij := {OLayerijk | 1 ≤ i ≤ 4, 1 ≤ j ≤ 3, 1 ≤ k ≤ 3}
We define an ontology layer OLayerijk for i, j, k ∈ N, 1 ≤ i ≤ 4, 1 ≤ j ≤ 3, 1 ≤ k ≤ 3. OLayerij1 is the meta-ontology layer (Pj1, Mj1, Aj1, Cj1), OLayerij2 is the ontology schema layer (Pj2, Mj2, Aj2, Cj2), and OLayerij3 is the ontology instance layer (Pj3, Mj3, Aj3, Cj3).
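Definitions 2-4 describe a fixed 4 × 3 × 3 nesting of knowledge classes, levels and ontology layers. This can be rendered as a nested structure (an illustrative Python sketch; only the names are taken from the definitions, the dictionary layout itself is a hypothetical choice):

```python
# Illustrative rendering of Definitions 2-4: 4 knowledge classes, each with
# 3 knowledge levels, each with 3 ontology layers.

KCLASSES = ["perception", "model", "activity", "context"]   # KClass1..4 (P, M, A, C)
KLEVELS = ["low", "middle", "high"]                          # KLevel_i1..i3
OLAYERS = ["meta-ontology", "ontology schema", "ontology instance"]  # OLayer_ij1..ij3

# KBoards[class][level][layer] addresses one OLayer_ijk
KBoards = {
    kclass: {klevel: {olayer: {} for olayer in OLAYERS} for klevel in KLEVELS}
    for kclass in KCLASSES
}

# 4 classes x 3 levels x 3 layers = 36 ontology layers in total
total = sum(len(levels[l]) for levels in KBoards.values() for l in levels)
print(total)  # prints 36
```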

Definition 5. The ijk-th ontology layer in OMRKF is a 6-tuple:
OLayerijk := (Cpijk, Rijk, Relijk, HCijk, HRijk, A0ijk)
For 1 ≤ i ≤ 4, 1 ≤ j ≤ 3, 1 ≤ k ≤ 3: Cpijk is a set of concepts in OLayerijk, Rijk is a set of relations in OLayerijk, Relijk is a set of relation functions in OLayerijk, HCijk is a set of concept hierarchies in OLayerijk, HRijk is a set of relation hierarchies in OLayerijk, and A0ijk is a set of axioms.

Definition 6. The set of axioms of each OLayer is a set of sentences Λ which follows the representation of a logical language. Λ is represented by the structures of OLayerijk (Pijk, Mijk, Aijk, Cijk), i.e., by the elements of Cpijk, Rijk and Relijk of OLayerijk, for 1 ≤ i ≤ 4, 1 ≤ j ≤ 3, 1 ≤ k ≤ 3, all of which belong to the same OLayerijk. A sentence of Λ specifies the meaning of the elements by describing the relationships among the elements within an OLayerijk. No sentence in Λ can be entailed by the other sentences in Λ.

Definition 7. An axiom set is a 3-tuple:
A0 = (AI, Λ, α)
(i) AI is a set of axiom identifiers, (ii) Λ is a set of logical sentences, and (iii) α is an axiom mapping function: α: AI → Λ.

Definition 8. The rule structure for the knowledge boards consists of 2 sets of rules:
R0 = (RU, RB)
RU is the set of uni-directional rules between hierarchical layers of the KLevelij within the same KClassi; RB is the set of bi-directional rules between hierarchical layers of the KLevelij across different KClassi.

Definition 9. The set of layer rules is a set of sentences Ŋ which follows the representation of a logical language. Ŋ is represented by the 3-tuples of OLayerijk (the elements of Cpijk, Rijk and Relijk of OLayerijk). A sentence of Ŋ represents a relationship among the three elements of OLayerijk (Cpijk, Rijk and Relijk) and is used to entail another concept or relation. A rule must include at least two elements, one from one OLayerijk and the other from another OLayerijk.

Definition 10. The rules between hierarchical layers form a 3-tuple:
RU = (RIU, ŊU, βU)
(i) RIU is a set of layer rule identifiers, (ii) ŊU is a set of logical sentences for layer rules, and (iii) βU is a layer rule mapping function: βU: RIU → ŊU.

Definition 11. The rules between levels of knowledge form a 3-tuple:
RB = (RIB, ŊB, βB)
(i) RIB is a set of association rule identifiers, (ii) ŊB is a set of logical sentences for association rules, and (iii) βB is an association rule mapping function: βB: RIB → ŊB.

Definition 12. A meta-rule is a rule template of the form
P1 ∧ P2 ∧ … ∧ Pm ∧ U ⇒ Q1 ∧ Q2 ∧ … ∧ Qn
where Pi (for i = 1, …, m) and Qi (for i = 1, …, n) are concepts or relations defined in the meta-ontology layer (OLayerij1 where i = 1, 2, 3, 4 and j = 1, 2, 3), and U pres

Definition 13. The structure of a knowledge query:
KQuery := (KBoardsij3, RAij) (1 ≤ i ≤ 4, 1 ≤ j ≤ 3)
A KQuery is a subset of the instances of KBoardsij3; RAij is a set of rules for association between the ontology instances of the ontology instance layer OLayerij3.
