
    Autonomous Agents in Collaborative Virtual Environments

    Dr. Stefan Noll

    Christian Paul

    Ralph Peters

    Norbert Schiffner

    Fraunhofer-Institute for Computer Graphics

    Rundeturmstraße 6

    D-64283 Darmstadt, Germany

    +49 6151 155 209

    {noll, paul, peters, schiffne}@igd.fhg.de

    ABSTRACT

    Our world is now entering an age where the current

    understanding of telecommunications and graphics

    computing will be constantly challenged. The universal

    advancement of graphics technology, new business

    models, and the continuing upgrade of global

    infrastructure are transforming the solitary, platform-

    centric 3D computing model. With the availability of

    global information highways, intercontinental collaboration

    using 3D graphics will become part of our daily work

    routine.

    Our research efforts have been concentrated on

    determining how the distributed workplace can be

    transformed into a shared virtual environment. Interaction

    among people and processes in this virtual world has to be

    provided and improved. To enhance the usability and

    functionality of our collaborative virtual environment we

    integrated software agents into it. These agents support the

    user as well as the designer and the interaction with

    objects in the virtual world. In this paper, we describe the

    basic needs for combining agents and virtual worlds as

    well as techniques to enhance VR environments.

    Keywords

    Collaborative virtual workspace, cooperation, software

    agents, integration, usability, architecture.

    1. INTRODUCTION

    The combination of 3D graphics, spatial audio, object

    interaction, and haptic feedback in a distributed virtual environment constitutes a multidimensional form of

    telecommunication. This framework addresses the three

    main perception senses of human beings simultaneously.

    These advanced forms of telecommunication make it

    possible to transform workplaces into collaborative virtual

    workspaces (CVW). Scenarios can be tailored to the

    particular needs of an application or user. Possible

    applications range from simple distributed multimedia

    visualizations of scientific data in standard 3D data

    formats to distributed VR environments for teleconferences

    and simulators [1], which demand advanced computer

    technology.

    In this paper, we introduce the use of agents in

    combination with virtual realities as a future

    communication system. By doing so, two major fields of

    Computer Science are tied together: agents, which stem

    from the field of Artificial Intelligence (AI), and 3D-VR

    systems as part of Computer Graphics. The main goal is to

    enhance usability and realism of virtual environments by

    combining 3D objects with intelligent agents, and to

    explore new aspects evolving from this combination.

    As a basis for our work introduced here, we use our

    collaborative virtual workspace. Using this framework we

    built the Virtual Emergency Task Force (VETAF) and

    Virtual Showroom scenarios [4].

    Concepts of software agents, their usability and

    employment in virtual environments, exemplary

    implementations of agents, our scenarios, and related developments are explained in this article.

    In a virtual reality, objects and avatars are connected to

    agents, which exhibit a behavior towards other agents and users.

    Agents act autonomously and improve interaction between

    objects. The following short description of the applied

    agent technology gives an introduction into this topic.

    2. SOFTWARE AGENTS

    Software agents are highly relevant and applicable for use in real-world domains. Their intelligent behavior

    enables them to automate and delegate cognitive tasks that

    were not feasible for machines in the past. Agents support

    each user individually during a session. They act as a representative of their 'employer' in the task they are

    assigned to.

    Agent technology can be classified along several dimensions: mobility, whether agents are deliberative or reactive, their appearance, and their roles. Three major attributes may

    describe the behavior of software agents in general (for a

    more complete definition of the term agent please refer to

    [10]

    and [9]):

    Autonomy: The ability to take initiative on what the agent believes is in the user's interest. It fulfills its tasks


    based on internal states, rules, and goals, and does not

    need any guidance by a human.

    Cooperation: The agent is able to engage in complex communication with other agents to obtain information or help from others. The agent society cooperates to accomplish their owners' goals.

    Learning: In order to be smart, agents have to adapt to their environment. They need to learn how to react or

    interact with the system, users, and other agents.

    One major issue of agent technology is cooperation

    between agents. Independent, heterogeneous agents therefore need a flexible means of communicating with each other in order to adapt to their environment. This

    communication facility has to fulfill two major tasks: on

    the one hand, various kinds of information must be

    transported reliably; on the other hand, the content of

    every message has to be understandable by every agent.

    However, the agent does not have to be able to interpret

    the object included in the message. Agent Communication

    Languages (ACL) solve these problems by using

    communication objects with a specified structure. ACL

    messages are well defined and can be processed without

    necessarily knowing about the embedded object of a

    message (content). The most common ACL is the

    Knowledge Query and Manipulation Language (KQML).

    KQML is an agent communication language developed by

    the ARPA-supported Knowledge Sharing Effort [2]. It is a message format and message-handling protocol to support runtime knowledge sharing among agents. It provides high-level access to information and handles low-level communication tasks such as automatic error checking.
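    As an illustration, a KQML message wraps its content in a performative with standardized parameters. The following sketch shows what a query from a guide agent to an information agent could look like; the agent names, ontology, and content expression are hypothetical, and only the parameter structure is taken from KQML:

    (ask-one
      :sender     GuideAgent
      :receiver   InformationAgent
      :reply-with room-query-1
      :language   KIF
      :ontology   cvw-environment
      :content    (location "Meeting Room"))

    The receiving agent can route and answer this message by looking only at the parameters; interpreting the :content expression itself is optional.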

    3. AVATARS & AGENTS IN WORKSPACES

    The main issue addressed by virtual environments is social and workspace awareness in CVWs. Avatars address

    this problem by representing users in virtual environments.

    Agents can enhance usability, convenience, and realism

    when the virtual human is present in the collaborative

    virtual workspace.

    3.1. Avatars in CVWs

    Avatars are controversial creatures on the cutting edge of

    user interface design. They provide new ways for people to interact with their computers and with other users on a

    network. The driving force behind avatars is the ongoing

    search for an interface that's easier and more comfortable

    to use, especially for the millions of people who are non-

    computer experts. The earliest computer users were

    engineers and programmers who were fairly comfortable

    with command-line prompts. Today, most users are

    professionals who are fairly comfortable with graphical

    desktops. But a metaphor based on files and folders means

    nothing to a five-year-old, and the abstractions of menus

    and icons are difficult for even some adults to grasp.

    Avatars won't necessarily replace menus, icons, and other

    elements of GUI computing. Instead, they'll play the role

    of helpful assistants or guides. Proponents think avatars

    are tailor-made for the growing virtual communities of on-

    line services and networks.

    Avatars are an important metaphor in CVWs: users can determine with whom they are sharing information and in which particular object the other person is interested.

    Positions can show whether users are following the discussion and whether they are looking at the same object. They also reveal whether users want to talk to somebody or are moving around restlessly. The appearance of an avatar shows the status of a participant in the world: it can express a user's rank in a group, indicate whether the user is present at work, and reflect the user's individual taste.

    Realism in participant representation involves two

    elements: believable appearance and the capability of

    movement.

    This becomes ever more important in a CVW, since the

    participants' positions are used for communication. As an

    example, the distance between avatars controls the volume of the participants' audio. Each participant's local environment stores the whole scene description and uses its own avatar to move around the scene; rendering takes place from the participant's own viewpoint. This avatar concept

    in CVW has crucial functions in multi-user virtual

    environments:

    Perception (see if anyone is around)

    Localization (see where the other person is)

    Identification (recognize the other person)

    Visualization of the other person's interest focus (see where the person's attention is directed)

    Using abstract virtual figures for avatar representation

    (Figure 1) fulfills these functions. Our avatar uses a live

    video stream and a business card identifying the user it

    represents. The video screen indicates the direction in

    which the user is currently looking. Each user is assigned

    a unique colour; ears and feet of the avatar as well as the

    selection pointer are all in this colour. To point to an

    interesting position or perform an action the user can use a

    distributed pointer, located on top of the avatar.

    In a collaborative virtual environment, the system has to

    know which objects are in a scene, how to represent them,

    and where they move in case they are not static. Any

    communication between objects, whether they are under the control of a user or of the underlying program, is difficult to predict and also difficult to implement, as objects should fit into different scenarios. One possibility to solve

    the problem is the use of agents. Each object or object

    group in the virtual collaborative environment is connected

    with its agent (Figure 2). Agents act according to their


    functionality and are able to enhance interaction between

    objects and the virtual environment.

    Figure 1: Avatar

    Avatars, the graphical representations of participants, are

    also objects. Therefore, agents can act on a higher

    abstraction level, like human assistants. This includes

    repetitive tasks (for example finding a way through the

    scenario and locating a room within it), remembering

    certain facts, which the user forgot, and recapitulating

    complex sequences in an intelligent way (who is in a room and which action she or he is performing at the moment).

    Intelligent agents can learn and even make suggestions to

    the user.

    3.2. Agent-Object Pairs in CVWs

    While evaluating the integration of agents into collaborative virtual workspaces, several fundamental

    functions and tasks have been identified. The central idea

    is building Agent-Object pairs: Every graphical object in

    the virtual environment that represents a certain universal

    service owns an underlying agent. The following actions

    are regarded as essential functions for these pairs:

    Creation: When a new object is brought into the scene, the agent creates the graphical representation and

    distributes the necessary data to every participant of the

    environment.

    Administration: Every action and event concerning the object is dispatched and then, given certain premises, distributed or reacted upon by the agent.

    Modification: The user can change the graphical representation of objects (and avatars) in form and color. Every object controls these attributes itself; the agent is responsible for keeping modifications consistent. All new appearances are broadcast via the multicast network.

    Control: The object itself has no information about its inner state apart from its appearance; the agent has full control over its functionality and position in the environment, being responsible for moving around,

    contacting other agents or users, perceiving the

    environment, and gathering useful information (in the

    sense of fulfilling a certain task).

    Erasure: Similar to creation, the agent deletes its graphical representations in all environments and the

    underlying functionality.
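    A minimal Java sketch of how such an agent-object pair could be expressed; the interface and method names are hypothetical and merely mirror the lifecycle functions listed above:

    // Hypothetical contract for the agent half of an agent-object pair.
    // Each method corresponds to one of the lifecycle functions above.
    public interface ObjectAgent {
        void create();                       // build and distribute the graphical representation
        void handleEvent(String event);      // administration: dispatch and react to events
        void modify(String shape, int rgb);  // keep user modifications consistent, then broadcast them
        void control();                      // move around, contact other agents, gather information
        void erase();                        // remove representation and functionality everywhere
    }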

    While developing agents for multi-user environments, the

    following problems were encountered and had to be coped

    with depending on the particular type of application and

    agent:

    One or many underlying agents for the objects: The programmer has to evaluate whether it is appropriate to use one

    agent for all object instances, or if it is more useful to

    provide an underlying agent for every occurrence of the

    graphical representation. For example, guides in the

    environment (see below) should have their own underlying agent but share one information agent. This

    agent is responsible for providing information about the

    environment because it is useful to have one database

    about all rooms and participants taking part in the

    scenario.

    Handling objects: Cloning and multiplication of objects in virtual environments can lead to

    inconsistencies. The underlying agents have to keep

    track of these user actions and should be able to replicate

    themselves (in case every object is controlled by its own

    agent), or adapt to the new instance of the generated

    graphical object and provide the functionality controlling it.

    Mobile agents: If the graphical object is attached to a mobile agent, its functionality is accessible by one user at

    a time. Scheduling of many requests of using the object

    has to be controlled and dealt with by the agent. For

    example, if you implement the guide agents as mobile

    agents, you might have to contact a receptionist and ask

    for assistance. The receptionist will contact the mobile

    guide agents and request information about availability.

    (For more information about mobile agents see [9].)


    The possibilities of using agent technology to assist the

    user, populate the environment, and add functionality to

    virtual environments are numerous. In the following, we

    describe the integration of agents into virtual environments

    and explain the use of guides in the scenario.

    4. INTEGRATING AGENTS INTO CVWs

    In the previous chapters we described software agents in general and the theoretical conditions for the integration of agents into CVWs. This chapter explains how we put our

    thoughts into practice.

    Figure 2: CVW and Agent Setup

    Figure 2 shows the general setup, which is split into two

    parts: the multicast net and the agent net. The multicast

    net (solid ring) consists of a multicast backbone (MBone,

    which is explained in the 'Applications' chapter). It

    connects the CVWs and is the basis for transmitting

    continuous audio and video feeds as well as all changes

    that occur in the virtual environments (i.e. who is present, what each avatar looks like, and the current position of every participant). The agent net (striped lines)

    interconnects all agents by using the communication

    facilities of ASAP. This partition into two independent nets results from different communication and scalability requirements. While agents rely on reliable delivery of their messages, video and audio feeds do not

    have such requirements; they need high-speed connections

    to allow real-time transmissions. Agents come to full effect

    when they form agent societies, consisting of a large number of agents acting in and therefore supporting a

    comparably small number of virtual environments.

    So-called CVWAgents link the two subnets together. They

    accept messages from either side and enable the agents to

    control objects in the environments, as well as transferring

    user inputs from the CVWs to the corresponding agents.

    The CVWAgents have a twofold functionality: on the one

    side, they are participants of the workspace by sending and

    receiving multicast messages. On the other, they are agents

    being able to understand KQML messages. Their main

    task is to listen on both nets, filter the information relevant for the other side, and translate these messages appropriately. For example, when a position

    message on the multicast net is recognized by the

    CVWAgent to be relevant for an agent, it translates this

    message to KQML and sends it to the relevant agent using

    the agent network. Figure 3 displays the central position of the CVWAgent.

    Figure 3: The Role of the CVWAgent

    Another example is the interaction between a user and the

    agent society: Using the avatar's graphical user interface

    (GUI), the participant issues a message requesting a

    special service. Assuming that a button was pressed, the

    CVW core will issue a message containing this event on

    the multicast net. The CVWAgent converts this message to

    KQML and sends it to the appropriate agent using ASAP

    and its agent network (for more information on ASAP,

    please refer to Chapter 5.1.) This initiates the execution of

    the desired service.
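    A rough Java sketch of this translation step; the class, the event fields, and the KQML content are invented for illustration and do not reproduce the actual CVW or ASAP code:

    // Hypothetical sketch: translate a CVW multicast event into a KQML message string.
    public class CvwEventTranslator {

        /** Builds a KQML 'tell' performative announcing a GUI event to the target agent. */
        public static String toKqml(String sender, String receiver, String eventName) {
            return "(tell"
                 + " :sender " + sender
                 + " :receiver " + receiver
                 + " :language CVW-Event"
                 + " :content (" + eventName + "))";
        }

        public static void main(String[] args) {
            // e.g. the user pressed the "request tour" button on the guide's GUI
            String message = toKqml("CVWAgent", "GuideAgent", "button-request-tour");
            System.out.println(message);  // the CVWAgent would send this over the ASAP agent net
        }
    }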

    An Example: The GuideAgent

    To show the usability of agents in virtual environments, we built a set of agents: DoorAgents restrict access to

    rooms and ask for identification or payment, and

    UserAgents provide information about their employers,

    handle access codes and payments whenever they are

    entitled to [7]. This chapter considers the GuideAgent. Its

    main task is to walk around the environment and provide

    information about objects, participants and the

    environment.

    The guide fulfills all properties of the above mentioned

    agent-object pair: it has its own distinct representation by

    the use of a special avatar and an underlying agent to

    control this graphical representation. The GuideAgent not only sends out position messages to move its

    avatar around the scenario but also manages an area of

    interest. If another participant appears within this area, the

    GuideAgent identifies him and offers help. If the user

    refuses this offer, the guide will move on. If help is

    needed, the guide suggests many different services:

    Navigation aid: Novices in particular usually have difficulties using the different movement modes (i.e.

    walk and fly) to move their avatar. The guide can

    therefore provide a lesson on how to navigate through the

    environment.



    Giving a tour: As virtual environments usually consist of many rooms and places, the guide can supply a tour

    and explain the most important features. The

    GuideAgent therefore contacts an InformationAgent,

    which has all relevant sites, their positions within the environment, and the related facts in its database. The user can follow the guide on his own or choose the option that the guide drives the user's graphical representation. The tour can be stopped at

    any time.

    Information about participants: Participants can additionally be represented by UserAgents.

    These agents have more information about their owner,

    for example their real name, company name, real-world location,

    and interests. The GuideAgent keeps track of visitors to

    the workspace and offers the possibility to get a directory

    of participants and their current position in the

    environment. Users can obtain information about other participants by contacting the guide, which asks the individual UserAgents about their employers to receive the desired facts.

    Another strength of the GuideAgent is its capability to

    model the user's behavior and preferences. This refers to

    keeping track of preferred means of interaction as well as

    recognizing when help is needed. Multiple interaction modes are supported.
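    As a rough illustration of the area-of-interest behavior described above, the following Java sketch checks whether a participant has entered the guide's interest radius; the positions, the radius, and the method names are invented for this example:

    // Hypothetical sketch of the GuideAgent's area-of-interest check.
    public class AreaOfInterest {
        private final double radius;     // assumed interest radius in scene units
        private double guideX, guideZ;   // current guide position on the ground plane

        public AreaOfInterest(double radius) {
            this.radius = radius;
        }

        public void moveGuide(double x, double z) {
            guideX = x;
            guideZ = z;
        }

        /** True if a participant at (x, z) is close enough for the guide to offer help. */
        public boolean shouldOfferHelp(double x, double z) {
            double dx = x - guideX;
            double dz = z - guideZ;
            return Math.sqrt(dx * dx + dz * dz) <= radius;
        }
    }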

    The field of applications is very wide: not only static

    information about objects can be administrated, but also

    dynamic and constantly changing situations (for example

    people participating in the environment) can be handled. An example application for the first scenario is the

    purpose of a guide agent in a virtual museum: it knows all

    the information and facts about the displayed pieces of art

    and recognizes objects of higher interest on its own when

    the user stays longer or requests more information about a

    certain exhibit. These preferred objects are incorporated in

    the user profile and the guide can adjust its route

    accordingly. An example for the dynamic situation is a

    company: When you wait for a visitor to contact you, you

    are bound to your desk, which means you cannot prepare the next demo or gather all other persons involved in a meeting. Instead, an agent can be responsible for guiding the visitor from the receptionist to the meeting room to save

    time. If more people are involved, then this comes to full

    effect as the agents take over the task of scheduling as well

    as gathering and guiding people to a certain place.

    Convenient ways of sharing information are essential among agents and, at the same time, the major advantage of using them instead of fixed pieces of software. Not only is the introduction of new agent types and services easier, but their cleverness and adaptability also offer new means of

    using agent technology in virtual worlds to enhance

    usability and comfort.

    5. APPLICATIONS

    The integration of agents into collaborative virtual

    environments provides an interesting field of research in

    computer science. This chapter describes the two

    underlying applications, which were brought together by the development of agents (see Chapter 4).

    5.1. ASAP

    To simplify our work we use A Simple Agent Platform

    (ASAP) to build our agents. ASAP provides agent

    templates to enable the programmer to develop software

    agents easily. During runtime agents use the capabilities of

    ASAP: A facilitator, being part of the agent society itself,

    offers information about services of other agents. Different

    conditioners inform agents about system dependent events

    or changes in the state of the computer. Integrated

    networking allows communication over different kinds of

    networks.

    ASAP is an agent platform developed by the International Computer Science Institute (ICSI) and Fraunhofer-

    Institute for Computer Graphics (FhG-IGD) and is written

    entirely in the Java language. ASAP is a framework that helps developers build new agents in an easy and

    uncomplicated way by providing agent templates. The

    execution of these new agents takes place in ASAP's

    runtime environment. The user gets a convenient way to keep track of all agent actions, as this environment

    provides a graphical user interface (GUI).

    Figure 4: Overview of all components in ASAP

    All components of ASAP communicate with each other through the use of events. This process is based on a

    general broadcast of all messages and events to ensure that

    all other agents are notified. Figure 4 shows the general

    overview of all components in ASAP.



    The ASAP Core Module represents one agent society. The

    Agent Controller is the runtime environment providing the

    general user interface to monitor the local agents. In

    addition, the controller offers the possibility to access

    system resources. Conditioners alert an agent when a

    specified event occurs (for example a time event in the case of a scheduling agent), or give information about

    system resources (e.g. disc space, system load). External

    Commands provide the interface to non-agent programs,

    which then can be used by the agent or its user.
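    A minimal Java sketch of the conditioner idea, using an invented callback interface rather than the real ASAP classes: a conditioner watches for a condition (here, a timer) and notifies an interested agent through an event:

    import java.util.Timer;
    import java.util.TimerTask;

    // Hypothetical sketch: a time conditioner fires an event that a scheduling agent reacts to.
    public class TimeConditionerDemo {

        /** Invented callback interface; ASAP's real event mechanism is not shown here. */
        interface EventListener {
            void onEvent(String event);
        }

        public static void main(String[] args) throws InterruptedException {
            EventListener schedulingAgent =
                event -> System.out.println("Agent received event: " + event);

            // The "conditioner": fires a time event after five seconds.
            new Timer(true).schedule(new TimerTask() {
                @Override
                public void run() {
                    schedulingAgent.onEvent("time-event");
                }
            }, 5_000);

            Thread.sleep(6_000);  // keep the demo alive long enough to receive the event
        }
    }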

    To contact other agent societies, no matter if they are other

    ASAP Core Modules or a third-party platform, ASAP uses

    the idea of integrated networking. Different types of

    Connections are responsible for a reliable message

    transmission. Networks in this case are standard telephone

    lines, ISDN connections, ATM links, and the TCP/IP-

    based Internet. Due to the open structure of ASAP new

    Connection Types can easily be implemented and used upon request. During runtime, the Network object

    autonomously chooses one available connection type to

    contact other agents.

    Agents know each other only by their name, not by

    address. The Agent Name Service converts identification

    strings (names) to physical addresses, which reflect the

    actual network identifier (e.g. address, port).
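    A toy Java sketch of such a name service, assuming a simple in-memory mapping from agent names to socket addresses (the host name and port are invented; the real ASAP implementation is not shown):

    import java.net.InetSocketAddress;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: resolve agent names to physical network addresses,
    // as an Agent Name Service must do before a message can be delivered.
    public class AgentNameService {
        private final Map<String, InetSocketAddress> registry = new HashMap<>();

        public void register(String agentName, String host, int port) {
            registry.put(agentName, new InetSocketAddress(host, port));
        }

        /** Returns the physical address for a name, or null if the agent is unknown. */
        public InetSocketAddress resolve(String agentName) {
            return registry.get(agentName);
        }

        public static void main(String[] args) {
            AgentNameService ans = new AgentNameService();
            ans.register("GuideAgent", "cvw-host.example.org", 4711);
            System.out.println("GuideAgent is at " + ans.resolve("GuideAgent"));
        }
    }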

    5.2. Virtual Emergency Task Force and Virtual Showroom

    In the VETAF scenario, a group of experts located

    throughout the world meets to discuss a global crisis in a

    virtual environment. It is specially designed to support their cooperation, whereas the Virtual Showroom is

    mainly intended to demonstrate how virtual environments

    can be used in advertising and presentation.

    In the scenario, the setting is a virtual room with a 3D-

    model of the object of interest suspended in the middle of

    the room (Figure 5). The 3D-model in the virtual

    environment has different levels of detail and it can be

    edited. It can be replaced by loading

    other objects into the scene (supported formats include

    VRML, 3DS, NFF).

    The environments are enabled to map additional data onto

    the walls. Projection of static data (e.g. blueprints and

    diagrams) and video-recordings are practicable. Video

    streams are broadcast into the virtual environment by

    other participants or rescue teams on location in the case

    of VETAF. Displaying standard Microsoft Windows

    applications (PowerPoint, Word) or distributed

    whiteboards is also being put into practice.

    Figure 5: Virtual Showroom

    All participants can talk to each other with full duplex

    spatial sound. A chat facility makes it possible to connect

    to other participants on low bandwidth networks.

    Technical Realization

    The VCE environment can be separated into three major components: the main VCE (internal communication)

    application, the agent platform, and the multimedia

    delivery platform. We use the MBone tools for audio and

    video communication.

    The main VCE application is realized with WorldToolKit,

    a portable, cross-platform software development system for

    building high-performance, real-time, integrated 3D

    applications. VCE renders the virtual environment and

    handles the input/output devices (e.g. space mouse,

    monitor, shutter glasses). Any transformation or

    movement of entities in the environment is sent as protocol-data-unit (PDU) packets to every other

    participant. Communication between multiple participants

    is based on the IP multicast protocol.
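    A minimal Java sketch of how a position update could be pushed onto an IP multicast group; the packet layout, group address, and port are assumptions, not the actual PDU format used by VCE:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.ByteBuffer;

    // Hypothetical sketch: broadcast an entity-position PDU to all participants via IP multicast.
    public class PositionPduSender {
        public static void main(String[] args) throws Exception {
            InetAddress group = InetAddress.getByName("239.1.2.3");  // assumed multicast group
            int port = 4000;                                         // assumed port

            ByteBuffer pdu = ByteBuffer.allocate(16);
            pdu.putInt(7);          // entity id (assumed field)
            pdu.putFloat(1.5f);     // x position
            pdu.putFloat(0.0f);     // y position
            pdu.putFloat(-3.2f);    // z position

            try (DatagramSocket socket = new DatagramSocket()) {
                DatagramPacket packet =
                    new DatagramPacket(pdu.array(), pdu.position(), group, port);
                socket.send(packet);  // every participant listening on the group receives it
            }
        }
    }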

    The spatial audio server, developed at Fraunhofer IGD in

    Darmstadt, is based on a client-server architecture.

    Different audio sources (e.g. Internet phone, Mediaplayer)

    can be related to objects in the virtual scene. These audio

    sources are connected as clients to the audio server, which

    renders these audio signals depending on the position and

    orientation of each source in the virtual scene. The

    VETAF application transmits this position and orientation

    data via a socket connection to the audio server.

    5.3. Other Agents and Future Work

    The main task during the development of agents for VR

    applications is the exploration of new forms of interaction

    with objects and persons in virtual environments like

    CVW. This environment shows how 3D graphics can be

    used in conference systems in a way that conference visitors do not have to give up convenient usability and accustomed surroundings. In the continuing work, the

    following types of objects will be evaluated:

    Before it comes to interaction, the GuideAgent has to get in touch with the visitors. The agent will be equipped


    with different triggers that let it recognize a user in need of help: people who get lost tend to stand still, turn around to get an overview, or wander around the scenario without any destination (a heuristic sketch follows after this list). The GuideAgent has to

    recognize this multitude of different situations and offer

    help. At the same time, the agent has to consider previous experiences with the individual user in order to distinguish between situations when help is needed and when it is not. The agent must not annoy the visitors by constantly asking whether they need help,

    but at the same time should always be around in case a

    question arises. The users themselves will be able to

    contact the guide by approaching it directly, contacting a service point, or waiting for the agent to recognize the "help situation".

    Simple objects like walls, which receive additional functionality by projecting information and by providing a user interface to web browsers and other common applications, are considered. The underlying agent

    controls the application and ensures the correct service is

    delivered. One example is an advanced search capability

    by contacting other agents in combination with common

    search engines. Another example is the possibility to

    display advertisements that reflect the user's

    preferences. The AdvertAgent therefore contacts the

    UserAgent and presents special offers depending on the

    user's interests.

    In the first implementation step, a DoorAgent controlling access to virtual rooms has been developed (see [7] for more information). It is planned to enhance this agent by applying a keyhole metaphor. In addition,

    the ability to push messages through beneath a door is

    considered. By using this function, a visitor of the

    environment does not have to enter a room for which access is charged in order to contact somebody.

    The most interesting research task concerns general objects which can be brought into the environment. These so-called black boxes are universal tools, which get their functionality and behavior from external programs.

    The main problem here is the generality of the objects. It

    is intended to make them as universal as possible by specifying a general-purpose API so that everybody can build agent-object pairs using ASAP and integrate these pairs into a virtual environment.
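    The following Java sketch illustrates one way the "lost user" trigger mentioned in the first item above could be approximated: a long travelled path combined with a small net displacement over a time window suggests aimless wandering. The window size and thresholds are invented for this example:

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Hypothetical heuristic for the GuideAgent's "lost user" trigger.
    public class LostUserDetector {
        private final Deque<double[]> recent = new ArrayDeque<>();  // recent (x, z) samples
        private static final int WINDOW_SIZE = 50;           // assumed number of position samples
        private static final double MIN_PATH_LENGTH = 30.0;  // assumed scene units
        private static final double MAX_DISPLACEMENT = 5.0;  // assumed scene units

        public void addSample(double x, double z) {
            recent.addLast(new double[] { x, z });
            if (recent.size() > WINDOW_SIZE) {
                recent.removeFirst();
            }
        }

        /** True if the user covered much ground but ended up close to where they started. */
        public boolean looksLost() {
            if (recent.size() < WINDOW_SIZE) {
                return false;
            }
            double path = 0.0;
            double[] prev = null;
            for (double[] p : recent) {
                if (prev != null) {
                    path += Math.hypot(p[0] - prev[0], p[1] - prev[1]);
                }
                prev = p;
            }
            double[] first = recent.peekFirst();
            double[] last = recent.peekLast();
            double displacement = Math.hypot(last[0] - first[0], last[1] - first[1]);
            return path > MIN_PATH_LENGTH && displacement < MAX_DISPLACEMENT;
        }
    }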

    Other possible object-agent classes include cashiers,

    hostesses, reporters, and librarians. New agent concepts such as secure transmission of personal data and intelligence based on neural networks have to be evaluated. Cloning of objects

    and agents is another interesting field of our research.

    6. EVALUATION

    The evaluation was performed by asking 12 computer

    natives and developers about the usability of the collaborative virtual environment (CVE) without and with the aid

    of guide agents.

    6.1. CVE without Agents

    When people enter a virtual world, they usually don't

    know who is present and what to do in the scenario. This

    is the main point that was criticized. Participants demanded the implementation of a help desk or a

    receptionist. They want maps of the building to see

    which rooms may be interesting, where people with the

    same interests gather, and to show them a way for

    orientation purposes. To help them with navigation, it is

    useful to have maps at every junction or central point.

    Signposts need to be present as well. It is important for the

    users to have the possibility to get information at all times,

    as it is the case with directories displayed on walls.

    Another point of interest is timetables, so that people can check where and when their meetings take place. A timetable should also give an overview of the participants, with the possibility to get more information about them on

    demand. Interaction with other persons is also requested.

    This is not only for social purposes but also an additional

    way of getting information.

    People had difficulties with large worlds, as there is no

    possibility to travel large distances in a short time. The scenario lacks teleporters, which would transport participants from one point to a desired destination in no time. (Several persons mentioned the lack of this feature independently.)

    Overall, people tended towards a setting very close to reality, where sales personnel, guides, receptionists, elevators, and

    maps are common and helpful.

    6.2. CVE with Agents

    After integrating agents into our collaborative virtual

    worlds, people are much more satisfied with usability and navigation. They like the constant

    availability of a guide agent and its way of interacting.

    They emphasize that it is very important for the agent to

    be polite and to answer reliably. The hierarchical

    arrangement of information, and therefore fast access to a desired fact, is very important.

    The navigation help of the new agent was highly welcomed. As described in Chapter 4, the agent offers the

    possibility to guide the user to a person or room in the

    scenario. People liked the possibility to leave this guidance

    at any time on their own whenever something interesting

    appears on their way, and to continue later on. Test participants welcomed the implementation of the tour feature, which provides

    detailed information about the setting.

    But there are still problems to be solved: First, the agent

    has to adapt to users and their behavior in order to judge when they need help. Currently, some users are annoyed when the guide repeatedly asks them whether they need assistance. Second,


    interaction with buttons is not enough. People requested a

    more natural interaction possibility such as speech recognition.

    Third, the functionality of the guide agent should be

    extended. Users lack the possibility to send short notices to

    other participants through the guide. Currently, there is no possibility for one user to tell another that he or she is coming to a meeting point and that the other person is supposed to wait there. The tours and tutorials also have to be designed in a more complex way in order to provide

    more information and offer more possibilities to the user.

    Fourth, the teleporter function as an alternative to the

    regular tour has to be implemented.

    7. RELATED WORK

    In the Virtual Polis project (Carnegie Mellon University of

    Pittsburgh, PA, USA), agents are used to populate the

    simulated world with people and animals. The main point

    of interest is a simulation of behavior close to reality [8].

    At the University of Southern California, virtual realities are used to visualize air combat simulations [5]. Every

    helicopter and plane is linked to an agent, which controls

    the correct execution of the individual's mission. Team

    coordination between agents, and recognition and

    correction of occurring errors are the main aims in this

    project. The schematic visualization is used to control and

    monitor the resulting activities of agents.

    8. CONCLUSION

    Fraunhofer IGD is focusing its research efforts on

    determining how collaborative virtual environments can

    help transform the workplace into a shared environment,

    allowing real-time interaction between people regardless of their physical location.

    Current research includes work on a general architecture

    for applications in collaborative virtual environments. This

    architecture supports current applications as well as

    research in the areas of distributed simulation, agents in

    virtual environments, and networking for large-scale

    virtual environments.

    Agents in particular play a significant role in the

    maintenance and processing of large amounts of data. The

    importance of this technology will extend across many

    different application domains. The ongoing work explores

    the application of agents in virtual worlds. The problem domain includes both simple tasks, such as management of

    projection walls, as well as complex processes, such as

    path planning and controlled access to resources. All of

    these tasks can be managed, simplified and made

    accessible to the user by the use of agents.

    In this article we introduced the integration of

    collaborative virtual environments and intelligent agents to

    enhance the usability of 3D user interfaces. At the present

    time, stable prototypes of the virtual environment system

    VCE and the agent platform ASAP exist. The scenario

    VETAF was demonstrated at several trade shows (e.g.

    SIGGRAPH 97 in Los Angeles, ACM 97 in San Jose, and

    the G7 Meeting in Bonn). By integrating both these

    research projects, this work creates a framework for

    extending objects in virtual cooperative environments by

    high-level behaviors. Using the resulting system, different agent behaviors and their utility to the users can be evaluated. Experiments with the already implemented

    agents VRDoorAgent and VRDeputyAgent showed a

    significant simplification of the tasks and increased

    usability for the user. These improvements were made

    possible by the combination of current work in two

    important areas of Computer Science.

    REFERENCES

    1. Brutzman, D. Graphics Internetworking: Bottlenecks and Breakthroughs. Digital Illusions, C. Dodsworth,

    ed., Addison Wesley, Reading, Mass., 1996.

    2. Finin, T., Fritzson, R., McKay, D., and McEntire, R. KQML as an Agent Communication Language. ACM Press, November 1994.

    3. Gilbert, D., and Janca, P. IBM Intelligent Agents. IBM Corporation, Research Triangle Park, NC, USA, 1996.

    4. Gräff, A., Fiebig, T., Schiffner, N., Cross, R., and Macedonia, M. Virtual Emergency Task Force, VETAF

    2047. Computer Graphics TOPICS, 1/97.

    5. Kaminka, G.A., and Tambe, M. Social Comparison for Failure Detection and Recovery. Preproceedings of the

    4th International Workshop on Agent Theories,

    Architectures, and Languages (ATAL97), Providence,

    RI, USA, 1997.

    6. Paul, C., Spriestersbach, A., and Peters, R. Intelligente Agenten für virtuelle Umgebungen (German version

    only). Proceedings of the Workshop on Agents,

    Assistants, Avatars (AAA), Darmstadt, Germany,

    October 1997.

    7. Peters, R., Graeff, A., and Paul, C. Integrating Agents into Virtual Worlds. In Proceedings of the International

    Workshop on New Paradigms in Information

    Visualization and Manipulation. Las Vegas, NV,

    November 1997.

    8. SIMLAB. Virtual Polis. Technical Report of the STUDIO for Creative Inquiry at Carnegie Mellon University, 1996, http://demios.rec.ri.cmu.edu/files/polis/index.html

    9. White, J. Mobile Agents White Paper. General Magic, 1996, http://www.genmagic.com/agents/Whitepaper/whitepaper.html

    10. Wooldridge, M.J., and Jennings, N.R. Intelligent Agents: Theory and Practice. The Knowledge Engineering Review 10 (2), 1995.