computer methods and programs in biomedicine 89 (2008) 248–260

journal homepage: www.intl.elsevierhealth.com/journals/cmpb

Mobile collaborative medical display system

Sanghun Park a,∗, Wontae Kim b, Insung Ihm b

a Department of Multimedia, Dongguk University, Seoul, Republic of Korea
b Department of Computer Science, Sogang University, Seoul, Republic of Korea

∗ Corresponding author. Tel.: +82 2 2260 3765; fax: +82 2 2260 3766. E-mail addresses: [email protected] (S. Park), [email protected] (W. Kim), [email protected] (I. Ihm).

Article history: Received 11 September 2006; received in revised form 28 September 2007; accepted 12 November 2007

Keywords: Medical informatics applications; Mobile computing; Personal digital assistants; Computer supported cooperative work; Distributed user interfaces; Ubiquitous computing

Abstract

Because of recent advances in wireless communication technologies, the world of mobile computing is flourishing with a variety of applications. In this study, we present an integrated architecture for a personal digital assistant (PDA)-based mobile medical display system that supports collaborative work between remote users. We aim to develop a system that enables users in different regions to share a working environment for collaborative visualization with the potential for exploring huge medical datasets. Our system consists of three major components: mobile client, gateway, and parallel rendering server. The mobile client serves as a front end and enables users to choose the visualization and control parameters interactively and cooperatively. The gateway handles requests and responses between mobile clients and the rendering server for efficient communication. Through the gateway, it is possible to share working environments between users, allowing them to work together in computer supported cooperative work (CSCW) mode. Finally, the parallel rendering server is responsible for performing heavy visualization tasks. Our experience indicates that some features currently available to our mobile clients for collaborative scientific visualization are limited due to the poor performance of mobile devices and the low bandwidth of wireless connections. However, as mobile devices and wireless network systems are experiencing considerable elevation in their capabilities, we believe that our methodology will be utilized effectively in building responsive, useful mobile collaborative medical systems in the near future.

© 2007 Elsevier Ireland Ltd. All rights reserved.

0169-2607/$ – see front matter © 2007 Elsevier Ireland Ltd. All rights reserved. doi:10.1016/j.cmpb.2007.11.012

1. Introduction

Since its foundation in the late 1980s, the field of scientific visualization has witnessed tremendous technical achievements that enable scientists and engineers to effectively extract visual and meaningful information from huge, complicated, and abstract datasets. The field has continued to evolve rapidly in response to the arrival of new computing environments. One of the most attractive challenges in the visualization area nowadays is to exploit wireless computing technologies that may help build a ubiquitous, mobile visualization system, thereby supporting collaborative work.

So far, handheld mobile devices such as the personal digital assistant (PDA) and cellular phone have been regarded as inappropriate scientific tools for a number of reasons, including small screen size, poor computational performance, limited wireless communication bandwidth, and reduced interaction capability. However, these mobile devices have undergone considerable elevation in their capabilities. These days, it is not difficult to see a PDA that supports a video graphics array (VGA) resolution of 640 × 480 pixels. The wireless connection bandwidth for mobile devices continues to expand rapidly, which has facilitated the ubiquitous nature of wireless communication. For example, a commercial service for ubiquitous


portable Internet access is planned in Korea that provides mobile users with 1–3 Mbps of bandwidth. Such changes in the mobile computing environment demand a new paradigm for building a mobile visualization system that supports ubiquitous and collaborative work.

One of the many application fields that could benefit from these technological changes is the telemedicine community. In particular, rapidly advancing mobile technology is expected to offer a critical technological base in building a remote medical care and related information management system that supports an anytime, anywhere, and anyone access. In providing such a telemedicine service, the computer supported cooperative work (CSCW) feature, which enables interactive cooperation among remote users, may increase its usability drastically. For example, doctors may be able to treat their remote patients interactively, sharing a tailor-made virtual working environment through a home care telemedicine system.

In an effort to explore these new technologies for medical informatics, we have developed an integrated architecture for a PDA-based mobile display system allowing ubiquitous computing. Our emphasis is on the development of a medical system that enables users in different regions to share a working environment for collaborative visualization with the potential for exploring a huge volume of medical data, such as computed tomography (CT), magnetic resonance imaging (MRI), or positron emission tomography (PET) data.

In this paper, we deal with technical aspects that must be considered in designing a mobile collaborative medical display system. Our system consists of three major components: mobile client, gateway, and parallel rendering server. The mobile client serves as a user interface and enables the user to choose the display and control parameters interactively and collaboratively. The gateway takes care of requests and responses between mobile clients and the rendering server for efficient communication. Through the gateway, it is possible to share working environments between users, allowing them to work together in CSCW mode. Finally, the parallel rendering server is responsible for performing heavy visualization tasks that are beyond the capability of current mobile devices. We explain our system architecture and discuss our experience obtained from its implementation.

The paper is organized as follows. Section 2 reviews some related studies on PDA-based visualization. In Section 3, we explain in detail how our mobile medical system has been designed and implemented on PDAs, and demonstrate that it can be used effectively for building a cooperative mobile visualization environment. In Section 4, we briefly discuss performance issues by presenting some experimental results, and conclude the paper in Section 5.

2. Related studies

To handle the computational limitation of mobile devices and the low bandwidth of data transmission networks, often experienced in the development of mobile visualization systems, several approaches have been proposed. In Refs. [1,2], two solutions for remote visualization systems, involving low-end computing devices, were studied, where centralized hardware

i o m e d i c i n e 8 9 ( 2 0 0 8 ) 248–260 249

resources of high performance were exploited to allow users to visualize large scientific datasets. A general client–server model was utilized to design a remote rendering system [3]. In that work, the techniques of adaptive rendering and data transmission were used to overcome the low performance of the computing hardware and the transmission network. An image-based rendering method was applied to the display of large polygonal models on mobile devices at interactive rates, 5–6 frames/s [4]. Various design principles and methodologies were also studied in several works [5–8], endeavoring to develop effective mobile visualization systems. An interesting approach was recently presented in Ref. [9], where high frame rates are achieved through the help of high-performance parallel rendering based on PC clusters and progressive data transmission based on MPEG streams. While all these mentioned systems were designed to optimize the visualization and remote display computation in the respective computing environment, the concept of collaborative work between remote users was not fully applied to building their remote visualization systems.

OpenGL Vizserver from Silicon Graphics, Inc. (SGI) is a commercial, client–server solution [10]. It is a technical and creative computing system designed to deliver visualization and collaboration functionality to clients, whether on a desktop workstation or a wireless handheld computer. OpenGL Vizserver allows users to remotely view and interact with large datasets from any other system at any location in an organization and to collaborate with multiple colleagues using the same application data. The rendered scenes are transported as streams of compressed pictures from the server to the different clients. Because of its design, OpenGL Vizserver works only with the SGI Onyx family, limiting its applicability in the general computing environment.

Another approach for an integrated remote visualization and collaborative system was attempted in Ref. [11]. For the visualization task, it harnesses remote servers to produce the computation results as video streams, separately for each participating partner. Thus, the mobile devices only need to play MPEG-4-compliant video streams, a capability commonly provided by off-the-shelf PDAs and laptops. For collaborative support, they use a full-featured CSCW system. This approach differs from ours in that they focus on integrating facilities for exploring complex datasets by using professional visualization tools in their knowledge space.

As mentioned before, telemedicine is a promising field to which mobile technology can be applied effectively. Telemedicine scenarios include in-hospital care management, remote teleconsulting, collaborative diagnosis, and emergency situation handling. Personal mobile telemedicine systems using wireless communication links have been studied extensively [12–17].

Other types of mobile-based visualization systems have also been introduced, including a context-adaptive scalable information visualization and management system, which enables mobile computing [18]. The system intends to support the operation and the workflow of wastewater systems.

The IMoVis project demonstrated how mobile devices may be applied in security systems [19], where PDAs are used to visualize intrusion detection data. The mobile augmented reality system was also designed to provide users with

geo-referenced information required to accomplish their environmental management tasks via PDAs [20]. An approach for presenting information on handheld devices that uses information visualization techniques for displaying and navigating structures of complex information spaces has been introduced [21]. A framework for mobile 3D visualization and interaction in an industrial environment was presented in Ref. [22]. This project deals with the seamless navigation and provision of multimodal, context-sensitive, speech-enabled, augmented reality interfaces for mobile maintenance.

Although many similar approaches have been presented in the literature, our system is unique in that it supports collaboration (the ability to communicate and work with remotely located users) as well as a level-of-detail feature for high-resolution volume data.

3. The proposed system

3.1. Design overview

As described in the previous section, several mobile visualization systems have been proposed and implemented successfully. Some systems have shown almost real-time performance in remotely visualizing large volume data; for example, refer to systems developed independently from ours [9]. Essentially, our system pursues goals similar to those of the previous systems, but focuses more on the following characteristics:

• A simple token protocol is designed for session-based CSCW. Through this protocol, it is possible to open multiple independent sessions at a gateway. Users that participate in a common session can share the visualization tasks. The system architecture is designed so that the gateway component that works as a broker between the mobile client and the rendering server is utilized in a way that optimizes the communication performance in a given environment.

• The level-of-detail concept is implemented in the mobile system so that a user can effectively select and explore subvolumes of interest through arbitrarily oriented cutting planes. A local cache structure is designed to minimize the volume extraction and transmission costs so that the display operation can be performed efficiently using the not-so-fast OpenGL ES API on a PDA.

• During data communication through a gateway, we do not consider a lossy compression scheme because the fidelity of medical imaging data is often critical. It is assumed that image quality is a design factor that is more important than the processing time. Hence, such techniques as JPEG and MPEG are not considered in our system.

• Basically, the system is designed to be as independent as possible of high-level toolkits, libraries, or protocols, thus offering a high level of portability and extensibility. The current implementation uses such low-level libraries as OpenGL ES and socket-based TCP/IP. While the parallel rendering server module uses the MPI library to implement a traditional CPU-based ray-casting algorithm, any other parallel rendering method, including GPU-based ray-casting or Chromium-based parallel rendering, can be adopted.

The current system does not provide real-time rendering performance, as it is based on CPU-based ray-casting. However, when the parallel rendering module is replaced by a fast parallel renderer as in Ref. [9], our system will also offer high rendering performance.

Fig. 1 illustrates an overview of our mobile medical display system that consists of three main components: mobile client, gateway, and parallel rendering server. The system, based on a client–server architecture, aims to offer efficient

Fig. 1 – The system architecture: it consists of mobile client, gateway, and parallel rendering server.


and collaborative access to a large set of raw medical volume data, such as found in CT, MRI, and PET, that is stored in remote archives. One of the most important design factors is to provide users with both interactivity and mobility in visualization. To achieve this goal, a parallel rendering server, abstract to users, is harnessed to perform the heavy 3D visualization tasks, allowing a fast response to the user’s request. Thus, the mobile client is free from CPU-intensive computation, focusing the user on user interfacing and collaboration with other clients.

For smooth communication between a client and the parallel rendering server or other clients, we introduce a gateway component that works as a broker. The gateway is primarily responsible for delivering requests collected from clients to the rendering server and multicasting the returned visualization results back to the clients. Furthermore, it controls the collaboration by maintaining contexts shared by clients. The communication between the components is performed using the socket-based TCP/IP protocol, which allows heterogeneous client devices to connect to the system stably and efficiently.

Fig. 2 – Mobile client modules.

Fig. 3 – Snapshot of mobile client’s graphical user interface. Picture (a) shows an example where a nonuniform rational B-spline (NURBS) curve is being designed to set up a camera path that will be applied to the production of an animation movie. In picture (b), a 3D-rendered CT image is being displayed as received by the parallel rendering server. In case the resolution of the created image is higher than that of the PDA’s screen, the user can pan or zoom with a pointing stick.
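The gateway’s broker role described above can be reduced to a simple loop: collect a rendering request from a session, forward it to the rendering server, and multicast the result to every client in that session. The sketch below illustrates this flow under stated assumptions; the class and method names (`Gateway`, `submit_request`) are illustrative, not from the paper, and a plain callback stands in for a socket connection.

```python
# Minimal sketch of the gateway's broker role: forward a client's rendering
# parameters to the (stand-in) rendering server and multicast the result to
# all clients in the same session. Names here are illustrative only.

class Gateway:
    def __init__(self, render_fn):
        self.render_fn = render_fn   # stands in for the parallel rendering server
        self.sessions = {}           # session id -> list of client callbacks

    def join(self, session_id, client_cb):
        # register a client's receive callback under the given session
        self.sessions.setdefault(session_id, []).append(client_cb)

    def submit_request(self, session_id, params):
        # forward the rendering parameters to the server ...
        image = self.render_fn(params)
        # ... and multicast the result to every client in the session
        for cb in self.sessions.get(session_id, []):
            cb(image)
        return image
```

In a real deployment each callback would be a TCP send on a per-client socket; the multicast loop is what lets every participant in a CSCW session see the same rendered image.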


3.1.1. Mobile client
The mobile client serves as an interface to the entire system that abstracts users from the implementation details. Mainly, it handles efficient interaction with users and displays images and movies sent from the rendering cluster. The client component, executed on a PDA, consists of the four submodules as illustrated in Fig. 2: interface module, control module, network module, and rendering module. When a mobile client connects to the system for the first time, the control module collects relevant information through the network module, including the details of available volume datasets. Once the user selects a medical dataset to be processed, it prepares to start a visualization session. The interface module takes care of the user’s actions by detecting input events and calling necessary callback functions. Through this interface, it is possible to perform many functions including: choosing a proper volume dataset, manipulating the selected data interactively, choosing the camera and light parameters, sending the rendering context, issuing a rendering request, and displaying rendered images. On the other hand, the rendering module interactively (at least 10–15 frames/s) displays subvolumes of interest using 2D texture-based rendering methods.
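As a minimal illustration of slice-based display, the sketch below extracts an axis-aligned 2D slice from a volume held as a nested-list 3D array. This is only the simplest case: the actual client supports arbitrarily oriented cutting planes (which require resampling along the plane) and uploads the slices as OpenGL ES textures, neither of which is attempted here. The `(z, y, x)` memory layout is an assumption.

```python
# Sketch: extract a 2D slice from a volume stored as a 3D nested list,
# as a stand-in for the client's 2D texture-based subvolume display.
# Axis convention (an assumption): volume[z][y][x], axis 0=z, 1=y, 2=x.

def extract_slice(volume, axis, index):
    """Return a 2D list holding the slice of `volume` at `index` along `axis`."""
    if axis == 0:                                   # z-slice: one full (y, x) plane
        return [row[:] for row in volume[index]]
    if axis == 1:                                   # y-slice: fixed row of every plane
        return [plane[index][:] for plane in volume]
    return [[row[index] for row in plane] for plane in volume]  # x-slice
```

For example, with a 2 × 2 × 2 volume of voxel values 0–7, `extract_slice(volume, 0, 1)` returns the second z-plane.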

Once transmitted into the mobile client from the volume data repository as requested by the user, the raw CT or MRI data are stored in the form of 3D arrays for effective display that allows the user to scrutinize them with user-controllable cutting planes with arbitrary orientations. To obtain 3D-rendered images corresponding to a specific volume region, the user requests the parallel rendering server, responsible for heavy rendering computations, for the visualization task. In return, the PDA receives the final rendering images or MPEG-encoded movies, and displays them on its screen.

In spite of the recent performance enhancement of mobile devices, it is still critical to make every effort to optimize the



Fig. 4 – Gateway modules.

Fig. 5 – Single gateway versus multiple gateways. When the number of participating clients increases, the chances are that the gateway becomes a bottleneck in the system. By employing multiple gateways, as in (b), instead of using a single gateway, as in (a), it is possible to make the load of the gateway component well balanced.


amount of computation carried out on the client side. In displaying a possibly large volume dataset on a screen of limited resolution, it is not wise to use the volume dataset of full resolution. In our framework, a multiresolution representation of each volume dataset is stored in the volume data repositories, managed by the parallel rendering server, and a set of subblocks with the proper level of detail, selected according to the viewing parameters and available client’s memory, is transferred to the client. The data are reloaded into the client once every time the required level changes through a zooming operation. In this way, the usage of limited memory and wireless network bandwidth is optimized. See Fig. 3 for examples of the user interface where a filtered volume dataset is manipulated and a rendered image is displayed.
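One plausible way to pick the level of detail is to start from the full-resolution subvolume and coarsen it until it fits the client’s memory. The sketch below assumes each level halves every axis (an 8× data reduction per level); the paper does not specify its reduction scheme, so both the halving rule and the function name are assumptions for illustration.

```python
# Sketch of a level-of-detail choice: halve each dimension (one LOD level)
# until the subvolume fits into the client's available memory.
# The per-level halving scheme is an assumption, not the paper's algorithm.

def choose_lod(dims, voxel_bytes, client_mem_bytes):
    """Return (lod_level, reduced_dims) such that the block fits client memory."""
    level = 0
    dx, dy, dz = dims
    while dx * dy * dz * voxel_bytes > client_mem_bytes:
        # each level halves every axis, i.e. roughly an 8x size reduction
        dx, dy, dz = max(1, dx // 2), max(1, dy // 2), max(1, dz // 2)
        level += 1
    return level, (dx, dy, dz)
```

For the Visible Human male CT used later in the paper (512 × 512 × 1440, 2 bytes per voxel), a hypothetical 32 MiB client budget would force two reduction levels, down to 128 × 128 × 360.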


3.1.2. Gateway
The gateway is the central processing component that controls the entire system. The major role of this component is to drive the rendering cluster to perform parallel volume ray-casting with the rendering parameters received from mobile clients, and to multicast the computed images or MPEG movies back to the clients. It also serves as a buffer that holds a set of raw subvolumes of selected level-of-detail values and delivers them to the clients as necessary. When a mobile client defines a camera path for the production of an animation movie (refer to Fig. 3(a)), the gateway issues the corresponding rendering request at each camera point whose interval is controlled by the client. Then it receives multiple rendered images from the parallel rendering server, encodes them in an MPEG movie, and delivers it to the clients. The final task of the gateway is to coordinate the collaborative work between mobile clients, which will be explained in more detail in Section 3.2.

As illustrated in Fig. 4, the gateway is made up of the control module, network module, client management module, session management module, and data management module. The client management module handles each individual client, while the session management module manages active sessions to support the CSCW feature. The data management module plays a key role in retrieving volume data from the rendering server and filtering it so as to fit the volume size into the mobile client memory. When a client selects a region of interest, if the volume size is larger than the PDA’s available memory, then the gateway determines the appropriate level of detail dynamically and reduces the volume data size accordingly.

Importantly, when several mobile clients or sessions attempt to access a gateway at the same time, the communication needs may cause a severe bottleneck (see Fig. 5(a)). To cope with this problem, we designed our system to allow multiple gateways to manage a group of sessions separately, as depicted in Fig. 5(b).
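The gateway’s animation step (sample camera points along the client’s path at a client-controlled interval, issue one rendering request per point, collect the frames for MPEG encoding) can be sketched as follows. Linear interpolation between two endpoints stands in for the NURBS curve evaluation of the real client; all names are illustrative.

```python
# Sketch of the gateway's animation workflow: sample camera positions along
# a path at a client-controlled frame count, issue one rendering request per
# point, and gather the frames for encoding. Linear interpolation is a
# stand-in for the client's NURBS camera path.

def sample_path(p0, p1, n_frames):
    """Linearly interpolate n_frames (>= 2) camera positions from p0 to p1, inclusive."""
    step = [(b - a) / (n_frames - 1) for a, b in zip(p0, p1)]
    return [tuple(a + i * s for a, s in zip(p0, step)) for i in range(n_frames)]

def render_animation(p0, p1, n_frames, render_fn):
    # one rendering request per sampled camera point; the resulting frame
    # list would then be handed to the MPEG encoder
    return [render_fn(cam) for cam in sample_path(p0, p1, n_frames)]
```

The frame count (equivalently, the sampling interval) is the knob the client controls; a larger count trades rendering-server load for smoother animation.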

3.1.3. Parallel rendering server
The parallel rendering server serves as a computational cluster that handles the rendering requests delivered through the gateway, as illustrated in Fig. 6. As it is invisible to the clients, it may run on a supercomputer or fast PC clusters, interacting with the gateway. The server manages the volume data repositories that store the original volume datasets with their multiresolution representations. For 3D volume rendering, our system

Fig. 6 – Parallel rendering server modules.


harnesses a PC cluster that runs a parallel volume ray-casting algorithm designed for distributed memory architectures [23], while any other efficient parallel rendering technique may be used.
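A common way to spread a ray-casting job over cluster nodes is image-space partitioning: each node casts the rays for one contiguous band of image rows. The sketch below shows only that partitioning arithmetic; it is an assumption for illustration, and the cited algorithm [23] may distribute work differently.

```python
# Sketch of image-space work division for parallel ray-casting: split the
# image rows into near-equal contiguous bands, one per cluster node.
# This partitioning scheme is an illustrative assumption, not Ref. [23].

def partition_rows(height, n_nodes):
    """Split `height` image rows into n_nodes contiguous (start, end) bands."""
    base, extra = divmod(height, n_nodes)
    bands, start = [], 0
    for node in range(n_nodes):
        size = base + (1 if node < extra else 0)   # spread the remainder evenly
        bands.append((start, start + size))
        start += size
    return bands
```

For a 640-row image on the 15-node cluster described in Section 4, the first ten nodes would each cast 43 rows and the remaining five nodes 42 rows; the gateway (or a master node) then assembles the bands into the final image.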

3.2. CSCW

To support a complete cooperative environment, visualization systems sometimes have to exchange various imaging datasets and need efficient hardware-accelerated volume rendering techniques for displaying the three-dimensional representation in real time. However, it is difficult to implement such full-featured cooperative visualization techniques for huge and complex medical datasets on mobile devices because of their limited resources.

In our approach, the synchronization problem is solved through the gateway, which always manages any updates to rendering contexts as well as volume data in a session. To describe the usage of the presented system, the following scenario summarizes the cooperation process: all cooperation partners who have access to a specific visualization can join a session through the gateway. When users contact the gateway, they can either select a session to join or create a new session. Depending on the working mode, the collaborators are able to see the same view as each other or an individual view of the visualization. Stored views, in which interesting regions are displayed, can be restored and broadcast immediately. Each collaborator who receives the broadcast message can either update their own current view or maintain multiple views independently. This method allows users in the same session to share medical data, rendering contexts, and rendered images during the visualization task.

In introducing this ability for users to collaborate, it is important to avoid any disorder induced by random, simultaneous issuing of broadcast messages by multiple users. Therefore, we have designed the session management module in such a way that only one user per session is given a token that authorizes its holder to issue a broadcast message. The authority as a host may be turned over to other users in the same session by passing the token.
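The token rule above can be captured in a few lines: only the token holder may broadcast, and the token may be handed over on request. The sketch below is illustrative; the class and method names are not from the paper, and a dictionary of recipient-to-message pairs stands in for actual network delivery.

```python
# Sketch of the per-session token rule: only the token holder may broadcast
# a shared context, and the token can be passed to another member.
# Names are illustrative; delivery is modeled as a dict, not real sockets.

class Session:
    def __init__(self, host):
        self.token_holder = host            # the session's current host

    def broadcast(self, user, message, members):
        if user != self.token_holder:
            raise PermissionError("only the token holder may broadcast")
        # deliver the message to every other member of the session
        return {m: message for m in members if m != user}

    def pass_token(self, holder, new_holder):
        if holder != self.token_holder:
            raise PermissionError("only the current holder may pass the token")
        self.token_holder = new_holder
```

Serializing broadcasts through a single token avoids the “disorder” described above without any distributed locking: the gateway is the sole arbiter of who holds the token at any moment.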

3.3. Scenarios of use

To describe the mechanisms of a progressive display of remotely located large sets of medical data, we now introduce a possible interaction example between a client and a gateway, as illustrated in Fig. 7. After the mobile client process is launched, a user connects to the gateway and sends the hardware configuration of the mobile device. In each request the mobile client sends a message or signal to the gateway. When the client requests an identifier, a unique identifier is returned, stored in the client device, and sent to the gateway in all subsequent requests. This operation enables the gateway to keep track of all sessions. When the user requests a new part of the volume of raw data of interest using the bounding box of the graphical user interface, the mobile client computes the bounding box in volume coordinates. The client sends this information to the gateway, which can retrieve an appropriate subblock from the database. Next, the gateway determines a level-of-detail value, possibly based on the memory size of

Fig. 7 – Example interaction diagram for client–gateway communication.

the mobile client device as well as the volume resolution. As the gateway keeps track of which subblocks have already been sent to the client, it can deduce the range of volume needed to update the display and send them to the client.
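The gateway-side bookkeeping just described (map a requested bounding box to the subblocks it overlaps, then send only the blocks this client does not already hold) can be sketched as follows. Indexing blocks by integer grid coordinates and the uniform cubic block size are assumptions for illustration.

```python
# Sketch of the gateway's subblock bookkeeping: find the blocks overlapped
# by a bounding box, subtract the ones already sent to this client, and
# record the newly sent ones. Grid indexing and block size are assumptions.

def blocks_for_bbox(bbox_min, bbox_max, block_size):
    """All block grid indices overlapped by an axis-aligned bounding box."""
    lo = [m // block_size for m in bbox_min]
    hi = [m // block_size for m in bbox_max]
    return {(x, y, z)
            for x in range(lo[0], hi[0] + 1)
            for y in range(lo[1], hi[1] + 1)
            for z in range(lo[2], hi[2] + 1)}

def blocks_to_send(bbox_min, bbox_max, block_size, already_sent):
    """Only the blocks the client does not hold yet; updates the sent set."""
    needed = blocks_for_bbox(bbox_min, bbox_max, block_size) - already_sent
    already_sent |= needed
    return needed
```

Repeating a request for the same region therefore costs nothing on the wireless link: the set difference is empty, which is exactly the saving the paragraph above describes.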

Fig. 8 illustrates an example interaction sequence that may be observed when a user wants to render volume data of interest. The snapshots in Fig. 9 show the volume rendering process step-by-step, where 2D texture mapping and volume ray-casting methods are used in the mobile client and the parallel rendering server, respectively. Once a client connects to a gateway using their IP address, they select a dataset from the available raw data list (pictures (a–c)). Then the client

determines a set of various rendering parameters through an interactive navigation of the chosen medical data (pictures (d–f)). After the setup of a transfer function for the appropriate classification of materials to be visualized, the client requests

Fig. 8 – Example interaction diagram for client–gateway–server communication.


a parallel rendering server to perform the heavy visualization work. Finally, the rendered image is displayed on the screen (pictures (g–i)).

Finally, Fig. 10 illustrates an example sequence where two participating clients interact with each other for a CSCW purpose. In (a), the client on the left receives a signal from the current host client on the right, and is asked if the client will accept the new context, broadcast by the host. The client on the left may either ignore the message or choose to overwrite his context with the new one, with or without saving the current context for later restoration. On the other hand, (b) shows a situation where the host client is asked by the client on the left to turn over the host token. In (c), after the acquisition of the authorization, the client on the left, a new host, selects a menu item to broadcast his own 3D-rendered image. After the client on the right decides to accept the message, the received image is displayed on the screen as shown in (d).

Fig. 9 – Example sequence of volume rendering. The detailed explanation is described in the text.

4. Implementations and optimization efforts

To demonstrate the effectiveness of the presented prototype, we have implemented the mobile collaborative medical display system as described in the paper. For the mobile client, we used an HP iPaq hx4700 PDA equipped with a 624 MHz Intel PXA270 CPU and a 6.5k-color thin film transistor liquid crystal (TFT LCD) display supporting 480 × 640. The device, running

Microsoft (MS) Windows Mobile 2003 for the Pocket PC oper-ating system, was connected to the gateway via a wirelessLAN or USB desktop synchronization cable. In developing themobile user interface, we used MS Embedded Visual C++ 4.0compiler, Vincent OpenGL ES [24], and MS Winsock.
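The client–gateway exchange illustrated in Fig. 8 is not specified at the wire level in the paper; the following C++ fragment is a purely illustrative sketch of the message types such a protocol might use. Every name here is our assumption, not the system's actual API.

```cpp
#include <cstdint>
#include <string>

// Hypothetical message types implied by the interaction sequence of Fig. 8:
// connect, browse the raw data list, pick a dataset, set parameters, render.
enum class MsgType : uint8_t {
    Connect,          // client -> gateway: open a session
    ListDatasets,     // client -> gateway: ask for the raw data list
    SelectDataset,    // client -> gateway: choose one dataset
    SetRenderParams,  // client -> gateway: viewing/transfer-function setup
    RenderRequest,    // gateway -> server: trigger parallel rendering
    RenderedImage     // server -> gateway -> client: resulting image
};

struct Message {
    MsgType type;
    uint32_t sessionId;   // which client session this belongs to
    std::string payload;  // serialized parameters or image bytes
};

// Only the heavy rendering work is forwarded to the remote server;
// everything else can be answered by the gateway locally.
bool forwardToServer(const Message& m) {
    return m.type == MsgType::RenderRequest;
}
```

The routing predicate reflects the division of labor described in the text: the gateway mediates most traffic itself and contacts the rendering server only when visualization work is actually requested.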

The parallel rendering server was implemented and tested on an IBM PC cluster consisting of 15 nodes of 3.0 GHz Intel Pentium 4 processors and 2 Gb main memory. For communication between the nodes, the MPICH2 message-passing interface was employed [25]. The mobile clients and the parallel rendering server were connected through a gateway, for which an ordinary desktop PC was exploited. For the test volume data, the Visible Human male CT with 512 × 512 × 1440 resolution and an uptake capacity of 720 Mb was used. Fig. 11 shows some examples of remotely visualized images that were created by the parallel rendering server and then displayed on the mobile client. See supplementary animation Movie 1(a) and (b)¹ for a demonstration in which a parallel rendering server with eight nodes was exploited.

¹ The animation movies are available at ftp://grmanet.sogang.ac.kr/pub/ihm/out/{mov1a.avi, mov1b.avi}.

Fig. 10 – Example sequence of CSCW to share rendering contexts and images. See the text for a detailed explanation.

Fig. 11 – Rendering images of Visible Human male CT data on a mobile client. With the assistance of the presented medical display system, it is possible for a mobile user to interactively examine large medical datasets, possibly working with remote collaborators. The CPU-intensive processing, such as 3D visualization of large sets of raw data, is hidden from clients as such heavy computation is carried out by parallel rendering servers. This transparency makes the clients feel as though they are using a high-end computing system. Note that the time required to display a 3D-visualized image after a rendering request depends on a variety of factors including raw data sizes, classifications, rendering parameters, network traffic, and the computation power of rendering servers.

When developing a software system on hardware with limited capacity, it is critical to make every effort at code optimization. In coding our system, we tried several optimization techniques for Intel XScale processor-based PDAs and observed significant performance gains for some. As many mobile systems based on the XScale processor are still not accelerated by a floating-point unit, floating-point operations must be avoided as much as possible. An alternative is to employ a fixed-point number that approximates the floating-point number. Despite the limited range, fixed-point data are often effective enough to code graphics applications on a PDA if the range is selected appropriately.

Fig. 12 – Performance comparisons for a variety of optimization efforts on Intel PXA CPUs. (a) The frame rate per second (FPS) is compared for the three combinations of applying floating-point/fixed-point numbers and general assembler-level code optimization techniques. (b) As the magnitude of volume data loaded into a PDA with 64 Mb of memory increases, the frame rate deteriorates. However, the difference is not substantial so long as the demanded memory size stays in the 2–16 Mb range. (c) As the cutting plane goes farther, indicated by the distances in the horizontal axis, the screen area to be colored gets smaller, enhancing performance. (d) Together with the result in (c), this graph indicates that the pixel fill rate of the current PDAs is still a problem, although the fill process is expected to be assisted by an acceleration chip in the near future.

To enhance the processing speed on the mobile client side, we employed a 32-bit, fixed-point number where 16 bits are allocated to the fraction part. For multiplication and division operations, temporary 64-bit integer variables were used to maintain maximum precision. In displaying the raw volume data interactively, the frequently called trigonometric functions took a substantial portion of the floating-point operations. To handle this, we used a lookup table whose elements were generated for 0–90° at 1° intervals. We also manually applied several assembler-level code optimization techniques to compiler-optimized assembler codes, especially for repeatedly executed parts. Fig. 12(a) indicates that these optimization efforts were rewarding. As implied in the diagrams, the application of both fixed-point numbers and optimization techniques remarkably improved the frame rate, while the code optimization techniques for floating-point numbers produced only a slight performance improvement.
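The 16.16 fixed-point scheme and the 1°-step sine table described above can be sketched as follows. This is a minimal C++ illustration under the stated conventions (helper names are ours); 64-bit temporaries preserve precision in multiplication and division, as the text describes.

```cpp
#include <cmath>
#include <cstdint>

typedef int32_t fixed;              // 16-bit integer part, 16-bit fraction
const fixed FIX_ONE = 1 << 16;      // 1.0 in 16.16 format

inline fixed to_fixed(double v)  { return (fixed)(v * FIX_ONE); }
inline double to_double(fixed v) { return (double)v / FIX_ONE; }

inline fixed fix_mul(fixed a, fixed b) {
    return (fixed)(((int64_t)a * b) >> 16);   // 64-bit temporary
}
inline fixed fix_div(fixed a, fixed b) {
    return (fixed)(((int64_t)a << 16) / b);   // 64-bit temporary
}

// Sine lookup table for 0-90 degrees at 1-degree intervals; the other
// quadrants are folded back onto this range by symmetry.
static fixed sin_table[91];

void init_sin_table() {
    for (int d = 0; d <= 90; ++d)
        sin_table[d] = to_fixed(std::sin(d * 3.14159265358979 / 180.0));
}

fixed fix_sin_deg(int deg) {
    deg = ((deg % 360) + 360) % 360;          // normalize to [0, 360)
    if (deg <= 90)  return sin_table[deg];
    if (deg <= 180) return sin_table[180 - deg];
    if (deg <= 270) return -sin_table[deg - 180];
    return -sin_table[360 - deg];
}
```

With a table of only 91 entries, each trigonometric call collapses to an array lookup, which is why the technique pays off on a CPU without a floating-point unit.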

The performance on the mobile client side is quite sensitive to the demands of various computational resources, and hence the code must be tuned with great care. In an attempt to reveal the dependency, we collected some performance data for several resources. As Fig. 12(b) shows, the size of volume data loaded into the PDA's memory affects the speed of the

interactive display of raw volume data. It is obvious that the smaller volume size results in a higher frame rate, whereas the performance does not vary much so long as the demanded memory stays in the 2–16 Mb range (note that the tested PDA holds 64 Mb of memory).

Currently, the performance is also dependent on the number of pixels that must be projected to compute an image because the PDA is not accelerated by a graphics unit. In the mobile client, the raw volume data are displayed using an interactively controlled cutting plane, where the 2D texture-mapping feature is exploited for computing the resulting image. We varied the distance of the cutting plane from the camera to vary the number of pixels that need to be mapped onto the screen. As expected and demonstrated in Fig. 12(c), the frame rate increases as the cutting plane goes farther, that is, as the number of pixels to be colored decreases. However, this performance variation is expected to disappear once a 3D graphics accelerator chip is used in the near future.

Fig. 12(d) reports how performance is affected by the screen's resolution. To evaluate this, we used another PDA with a smaller resolution of 240 × 320 pixels and measured the frame rate under the same rendering context.

Finally, Fig. 13 shows some statistics that reveal how different block subvolume sizes affect the overall performance. First, a detailed analysis of the computational costs needed to display level-of-detail images on the PDA screen is given in (a) in the figure. In our system, when a mobile client

Fig. 13 – Another performance analysis showing the effect of volume and screen sizes. (a) Level-of-detail visualization: volume resampling and transmission for different sizes of subvolumes; (b) remote parallel ray-casting for different sizes of display windows (S: 240 × 320 pixels, L: 480 × 640 pixels) and subvolumes.


selects a region of interest, the gateway module extracts the necessary subvolumes from disks, performs the resampling process so that the resulting data fit into the PDA's memory (32 Mb in this experiment), and sends the (processed) data to the client through the network. When the selected screen space corresponded to a subvolume of 512 × 512 × 512 voxels, a resampling process was needed to reduce its size to less than 32 Mb. In the other three cases, no resampling was necessary. As seen in the graph, the disk access time was a minor factor in the overall performance. On the other hand, as expected, the size of the subvolume to be transferred from the gateway to the mobile client greatly affected the processing time. While the bandwidth of the tested wireless communication network was low, the data transfer problem is expected to be resolved, as a much faster network will be available in the very near future.
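The resampling decision described above can be illustrated with a small helper. This is our own sketch, not code from the paper; it assumes 2 bytes per voxel (consistent with the 720 Mb figure quoted for the 512 × 512 × 1440 dataset) and simple power-of-two downsampling per axis.

```cpp
#include <cstdint>

// Dimensions of a selected subvolume, in voxels.
struct Dim { int64_t x, y, z; };

// Smallest power-of-two downsampling step so that the resampled
// subvolume fits the client's memory budget (e.g. 32 Mb above).
int downsampleStep(Dim d, int64_t bytesPerVoxel, int64_t budgetBytes) {
    int step = 1;
    while ((d.x / step) * (d.y / step) * (d.z / step) * bytesPerVoxel
           > budgetBytes)
        step *= 2;   // halve each axis until the volume fits
    return step;
}
```

Under these assumptions, only the 512 × 512 × 512 case exceeds a 32 Mb budget and needs one halving step, matching the observation in the text that the other three cases required no resampling.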

The figure in (b), on the other hand, shows how the data size affects the parallel volume rendering in our current system. Again, four cases of volume size from 64 × 64 × 64 to 512 × 512 × 512 voxels were tested against two screen image resolutions of 240 × 320 and 480 × 640 pixels, respectively. In this experiment, a rendering server made of nine PCs was utilized to run our parallel volume ray caster. Except for the case of 512 × 512 × 512 volume data, the entire response times were 2.0–3.3 s for the smaller screen resolution and 4.7–8.8 s for the larger screen resolution. The parallel rendering time is dependent on the overall power of the participating PCs. Furthermore, while our current renderer is CPU-based, recent GPU-based volume renderers provide an almost real-time frame rate for moderate sizes of volume data. Hence, the rendering time is expected to become a minor factor. On the other hand, as the test result indicates, the data transfer times for sending rendered images to the mobile client take a substantial portion of the entire processing time.

Despite the relatively poorer system specification of the tested PDA model (HP iPaq h5550 with a 400 MHz Intel PXA255 CPU and 6.5k-color TFT LCD display), the measured performance improved. In conclusion, the features currently available to mobile clients for scientific visualization are limited due to the poor performance of mobile devices. In particular, the bandwidth of wireless networks remains a major bottleneck in building an effective mobile display system, although it is expected to increase significantly in the very near future. For environments where wireless communication is still slow, it will be important to minimize the delay caused by the tardy communication.

Note that one of the many reasons (besides the controller task as a broker in the CSCW process) why we considered employing the gateway scheme in our architecture was to tune the communication task so that the performance is maximized in a given environment. For example, it might have been possible to encapsulate the work done by the gateway in the rendering server. In that case, however, the remote server must take up all the responsibility for maintaining the system. In particular, all communication messages would need to be delivered to the possibly very remote server system, which entails a severe inefficiency.

On the other hand, the use of a gateway may improve the system performance dramatically. For example, a clinic may install a local gateway that supports a relatively high

speed of wireless communication to and from the mobile clients, i.e., patients in the local area. In this environment, the collaboration between doctors and patients can be performed efficiently through local communication without having to send the message to the rendering server, which is geographically far away. Only the necessary messages, for instance for sending rendering requests or receiving rendering results, are sent through a communication line between the local gateway and the possibly very distant rendering server.

5. Concluding remarks

In this paper, we proposed a client–server-based integrated architecture for mobile collaborative medical data visualization in which PDAs are used as the front end. The system, composed of mobile client, gateway, and parallel rendering server, aims to offer interactivity and mobility in visualizing large medical datasets where remote users are allowed to collaborate in shared contexts. One of the main emphases of this work was to add the feature of collaboration and coordination to the mobile distributed medical system, allowing cooperative work among remote users. Through the implementation of the presented system architecture and the application of various optimization and tuning techniques, we have shown that low-end PDAs can be exploited effectively for such a mobile CSCW system.

As analyzed in the paper, the system performance relies on several design factors. Above all, the features currently available to the mobile client for scientific visualization are limited because of the poor performance of mobile devices. In particular, the low bandwidth of wireless communication networks is a heavy bottleneck in building a real-time or interactive mobile collaborative visualization system. As explained in the previous section, it is critical to design an architecture that fully exploits the advantage of the gateway component to overcome the wireless communication problem. In our implementation, the gateway could handle no more than four or five mobile clients because of the poor performance of the tested wireless network. However, the capacity may vary as the system performance increases. Hence, it will be quite important to carry out a pre-performance analysis when the system configuration is determined in the design step.

While the low wireless communication bandwidth is a current bottleneck, it is getting wider; hence, quite responsive mobile collaborative medical systems will be able to appear in the very near future. We hope that the proposed mobile cooperative scheme will be applied in a wide range of applications. In telemedicine, for example, the expanded system may enable doctors to access 2D images of patients' raw data remotely, to visualize 3D images interactively, and to collaborate with others in making a diagnosis, even when they are thousands of miles away from the patient.

Acknowledgements

This work was supported by the Korea Research Foundation Grant funded by the Korean Government (MOEHRD) (KRF-2006-331-D00497) and the Seoul R&BD Program (10672). This work was also supported by the Brain Korea 21 Project in 2007.

References

[1] S. Stegmaier, M. Magallon, T. Ertl, A generic solution for hardware-accelerated remote visualization, in: Proceedings of Joint EUROGRAPHICS-IEEE TCVG Symposium on Visualization, May 2002, pp. 87–94.

[2] F. Lamberti, C. Zunino, A. Sanna, A. Fiume, M. Maniezzo, An accelerated remote graphics architecture for PDAs, in: Proceedings of Web3D 2003 Symposium, March 2003.

[3] I. Martin, Adaptive rendering of 3D models over networks using multiple modalities, Technical Report RC 21722 (Log 97821), IBM T.J. Watson Research Center, April 2000.

[4] C.-F. Chang, S.-H. Ger, Enhancing 3D graphics on mobile devices by image-based rendering, in: Proceedings of the Third IEEE Pacific Rim Conference on Multimedia, December 2002, pp. 1105–1111.

[5] S. Stegmaier, J. Diepstraten, M. Weiler, T. Ertl, Widening the remote visualization bottleneck, in: Proceedings of the Third International Symposium on Image and Signal Processing and Analysis, September 2003, pp. 174–179.

[6] K. Engel, O. Sommer, T. Ertl, A framework for interactive hardware accelerated remote 3D-visualization, in: Proceedings of Joint EUROGRAPHICS-IEEE TCVG Symposium on Visualization, May 2000, pp. 167–177.

[7] D. Beerman, Event distribution and image delivery for cluster-based remote visualization, in: Proceedings of IEEE Visualization Workshop on Parallel Visualization and Graphics, October 2003.

[8] F. Lamberti, A. Sanna, A solution for displaying medical data models on mobile devices, WSEAS Transactions on Information Science and Applications 2 (2) (2005) 258–264.

[9] F. Lamberti, A. Sanna, A streaming-based solution for remote visualization of 3D graphics on mobile devices, IEEE Trans. Visual. Comput. Graph. 13 (2) (2007) 247–260.

[10] Silicon Graphics Inc., SGI OpenGL Vizserver 3.5: Visualization and collaboration, White Paper, 2005.

[11] F. Gotz, B. Emann, T. Hampel, Using a shared white board for cooperative visualizations, in: Proceedings of HCI International, July 2005.

[12] V. Mea, Agents acting and moving in healthcare scenario—a paradigm for telemedical collaboration, IEEE Trans. Inf. Technol. Biomed. 5 (1) (2001) 10–13.

[13] B. Woodward, R. Istepanian, C. Richards, Design of a telemedicine system using a mobile telephone, IEEE Trans. Inf. Technol. Biomed. 5 (1) (2001) 13–15.

[14] Y.-H. Lin, I.-C. Jan, P. Ko, T.-Y. Chen, J.-M. Wong, G.-J. Jan, A wireless PDA-based physiological monitoring system for patient transport, IEEE Trans. Inf. Technol. Biomed. 8 (4) (2004) 439–447.

[15] M. Bojovic, D. Bojic, Mobile PDR: a mobile medical information system featuring update via Internet, IEEE Trans. Inf. Technol. Biomed. 9 (1) (2005) 1–3.

[16] J. Gallego, A. Hernandez-Solana, M. Canales, J. Lafuente, A. Valdovinos, J. Fernandez-Navajas, Performance analysis of multiplexed medical data transmission for mobile emergency care over the UMTS channel, IEEE Trans. Inf. Technol. Biomed. 9 (1) (2005) 13–22.

[17] J. Rodriguez, A. Goni, A. Illarramendi, Real-time classification of ECGs on a PDA, IEEE Trans. Inf. Technol. Biomed. 9 (1) (2005) 23–34.

[18] J. Ehret, A. Ebert, L. Schuchardt, H. Steinmetz, H. Hagen, Context-adaptive mobile visualization and information management, in: Proceedings of IEEE Visualization 2004, 2004.

[19] A. Sanna, C. Fornaro, IMoVis: a system for mobile visualization of intrusion detection data, Int. J. Inf. Security 12 (2) (2003) 235–249.

[20] J. Danado, E. Dias, T. Romao, N. Correia, A. Trabuco, C. Santos, J. Serpa, M. Costa, A. Camara, Mobile environmental visualization, Cartogr. J. 42 (1) (2005) 61–68.

[21] B. Karstens, M. Kreuseler, H. Schumann, Visualization of complex structures on mobile handhelds, in: Proceedings of International Workshop on Mobile Computing 2003, June 2003.

[22] S. Goose, S. Guven, X. Zhang, S. Sudarsky, N. Navab, Mobile 3D visualization and interaction in an industrial environment, in: Proceedings of HCI International, June 2003.

[23] C. Bajaj, I. Ihm, G. Koo, S. Park, Parallel ray casting of visible human on distributed memory architectures, in: Proceedings of Joint EUROGRAPHICS-IEEE TCVG Symposium on Visualization, May 1999, pp. 269–276.

[24] Vincent, A 3-D rendering library for mobile devices, http://ogl-es.sourceforge.net/index.htm, 2006.

[25] Argonne National Laboratory, MPICH2, http://www-unix.mcs.anl.gov/mpi/mpich, 2006.