
Technology and Innovation, Vol. 14, pp. 1–100, 2012. 1949-8241/12 $90.00 + .00. Printed in the USA. All rights reserved.
DOI: http://dx.doi.org/10.3727/194982412X13500042169090
Copyright 2012 Cognizant Comm. Corp. E-ISSN 1949-825X

www.cognizantcommunication.com

MARVEL IN VIVO WIRELESS VIDEO SYSTEM

A. Alqassis,* T. Ketterl,* C. Castro,*1 R. Gitlin,* S. Ross,† Y. Sun,‡ and A. Rosemurgy§

*Department of Electrical Engineering, University of South Florida, Tampa, FL, USA
†College of Medicine, University of South Florida, Tampa, FL, USA
‡Department of Computer Science and Engineering, University of South Florida, Tampa, FL, USA
§Tampa General Hospital, Tampa, FL, USA

This article describes the design, optimization, and prototype testing of a Miniature Anchored Robotic Videoscope for networked Expedited Laparoscopy (MARVEL), which is a camera module (CM) that features wireless communications and control and is designed to decrease the surgical-tool bottleneck experienced by surgeons in state-of-the-art Laparoscopic Endoscopic Single-Site (LESS) minimally invasive abdominal surgery. Software simulation is utilized to characterize the internal human body (in vivo) wireless channel to optimize the antenna, transceiver architecture, and communication protocols between multiple CMs. A CM research platform has been realized that includes: a near-zero latency wireless video communications link; a pan/tilt camera platform, actuated by two motors, which provides surgeons a full hemisphere field of view inside the abdominal cavity; a small wireless camera; an illumination control system; wirelessly controlled focus; digital zoom; and a wireless human–machine interface (HMI) to control the CM. An in vivo experiment on a porcine subject has been carried out to test the performance of the system and its features, with the exception of the recently added autofocus and digital zoom. MARVEL is a research platform for a broad range of experiments for faculty and students in the Colleges of Engineering and Medicine at USF and at Tampa General Hospital.

Key words: Minimally invasive surgery; In vivo wireless networking; In vivo channel modeling

INTRODUCTION

Minimally invasive surgery (MIS) is an alternative to conventional “open” surgery. MIS minimizes trauma and metabolic disruption to patients by minimizing incisions and points of access to patients’ body cavities. Laparo-Endoscopic Single Site (LESS) surgery is an advance in MIS for digestive disorders and is performed through the umbilicus, which provides access to the abdominal and pelvic cavities, and through multiple small incisions distant to the umbilicus through which trocars (i.e., small hollow valved tubes) are placed to provide access for operating instruments.

Accepted June 5, 2012.
1Current address: RFMD, Inc., Greensboro, NC, USA.
Address correspondence to R. Gitlin, Department of Electrical Engineering, University of South Florida, 4202 E Fowler Ave., ENB 118, Tampa, FL 33620, USA. Tel: (813) 974 1321; Fax: (813) 974 5250; E-mail: [email protected]

Current “state-of-the-art” commercial videoscopes (i.e., laparoscopes, endoscopes) for MIS are encumbered by cabling for power, video, and a xenon light source inside a semiflexible or rigid mechanical rod. Though these videoscopes are quite good in image quality, they are cumbersome and require a point of access into the patient, either through a separate incision or through a port in a multiport access trocar. The light, video image, and power cables of the videoscope clutter and consume space in the operative field, which is sometimes referred to by surgeons as “dueling swords.” A conventional videoscope also requires an assistant in the operating room to hold the scope and redirect it to maintain consistent and stable views of the ongoing operation. Some developing approaches to intracavity visualization bypass the rod–lens approach of conventional videoscopes, but the resulting video platforms still maintain a significant spatial presence within the operating cavity and require multiple points of access (e.g., incisions and/or trocars).

There is significant interest in developing advanced endoscopes for MIS (5). Research at the University of California at Los Angeles (UCLA) (2) includes a surgical “polyvisiometric” camera. This device provides a multiview video platform with simultaneous views of the surgical sites and surrounding anatomy from multiple camera angles. It attaches to the abdominal wall by pressure arms with four cameras at 90°. Other research on endoscopic cameras by Columbia University (6,9) includes two distinct prototypes, one with a single camera and another with a stereoscopic camera, allowing for the possibility of depth perception, but resulting in an increased diameter. The Columbia cameras feature a low-power, high-efficiency pan/tilt mechanism and have pinhole lenses. At the University of Nebraska, Lincoln (4,10) work has been done on a robotic camera system that moves through the abdominal cavity while tethered to a supporting cable for power and video interfacing. UNL has also performed work on cameras attached to the abdominal wall and manipulated via magnetic fields. Recent work at The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy (11) resulted in the design of three 12-mm-diameter modular robotic units: a camera, a retractor, and a manipulator unit, and the assembly and testing of the camera unit.

There is great benefit in developing wirelessly networked systems of embedded endoscopic, and other, units that are able to communicate and transmit data in real time. These devices could also eventually be designed to communicate with other embedded sensors and actuators to enable rapid, correct, and cost-conscious responses in chronic and emergency circumstances and are expected to become part of our future healthcare system, as illustrated in Figure 1. Therefore, it is essential that the efficiency of the internal human body (in vivo) environment as a transmission medium for radio frequency (RF) and microwave communication signals be characterized as thoroughly as possible. Characterization of the in vivo channel is still in its infancy, but obtaining accurate channel models is instrumental to the design of optimum communication systems and protocols for such advanced biomedical applications. In this article we describe advanced software simulation techniques that provide complete in vivo channel characteristics to model how electromagnetic signals propagate through the human body. A significant amount of work has already been done to characterize the performance of communications and channel models for on-body communications; however, the study of in vivo wireless transmission, from inside of the body to external transceivers, is just starting to gain traction. In this study, we use ANSYS HFSS high-frequency electromagnetic field simulation software (1) and a complete human body model with geometric accuracy down to 1 mm that completely characterizes the electrical properties of over 300 organs, bones, and tissues. One application where the use of software tools is especially helpful is in the design of wirelessly capable high-definition (HD) video transmitters. Wireless HD data require more bandwidth than their standard definition (SD) counterpart, and therefore it is essential that the RF transmission medium be thoroughly understood to provide high-quality video imaging to the surgeon.

We also demonstrate the use of a wireless communications and control Miniature Anchored Robotic Videoscope for Expedited Laparoscopy (MARVEL) research platform (3) that is self-contained, internalized, and cable free to provide real-world testing capabilities of networked devices and communication protocols developed by researchers and students at the University of South Florida. The camera module (CM) of the MARVEL system includes: a near-zero latency wireless video communications link; a pan/tilt camera platform, actuated by two motors, that provides surgeons a full hemisphere field of view inside the abdominal cavity; a small wireless camera; an illumination control system; wireless manually controlled autofocus; postprocessed digital zoom; and a wireless human–machine interface (HMI) to control the CM to meet surgical requirements.

This research is the first step in developing semiautonomous wirelessly controlled and networked laparoscopic devices to enable a paradigm shift in minimally invasive surgery and other domains such as Wireless Body Area Networks (7).


Figure 1. Minimally invasive surgery (left), in vivo wireless networking (center), and in vivo multipath (right).

IN VIVO CHANNEL MODEL

Overview

The in vivo channel is very different from the classic multipath communication medium (Fig. 2). Since the electromagnetic wave passes through different media that have vastly different electrical properties, the wave propagation speed is significantly reduced in some organs, which may induce significant time dispersion (propagation delay that varies with frequency) that differs with each organ and body tissue (Fig. 2).

Figure 2. Classical RF channel model (left) and in vivo channel model (right).

The simulated in vivo channel model was derived using the ANSYS HFSS high-frequency electromagnetic field simulator. This software calculates the total RF fields in 3D that are generated from arbitrarily designed and placed radiating antennas in the simulated environment over a wide frequency range. ANSYS also provides a physically and electrically accurate human body model that can be used to model the RF field interactions with human body tissues in HFSS. The model includes over 300 muscles, organs, bones, and other tissues with a geometrical accuracy down to 1 mm. Frequency-dependent material parameters (conductivity and permittivity) for each organ and tissue are included in the model, accurate from 20 Hz to 20 GHz. Figure 3 shows the CAD drawing of the HFSS human body model used to derive the in vivo channel model; a side view of the torso and a cross-sectional top view are shown.

Figure 3. HFSS human body model: (left) torso and external antenna; (right) top view cross cut showing internal organs, muscles, and bones.

Figure 4 shows the attenuation of an RF signal that is transmitted from an antenna inside the human body to an external antenna. The transmitting antenna is located inside the abdomen of the human body model, with a transmission path of 30 cm (9 cm inside of the body) to the external antenna, which is located at the same height in front of the abdomen as the transmitter. The attenuation of a signal through free space is also plotted for comparison. It can be observed that, besides a significant increase in attenuation, the attenuation rate is not constant with frequency, as it is in the free-space case. These results show that signal attenuation through the human body is vastly different from classical models and demonstrate the need for accurate characterization for communication system development.

Figure 4. Simulated RF signal loss versus frequency through the human body and in free space.
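For reference, the free-space curve in Figure 4 follows the standard Friis path-loss relation; a minimal sketch over the 30-cm transmitter-to-receiver separation used in the simulation is given below (the frequency sweep range is chosen only for illustration). The in-body attenuation itself cannot be reproduced this way and must come from a full-wave solver such as HFSS.

```python
import numpy as np

def free_space_loss_db(distance_m, freq_hz):
    """Friis free-space path loss in dB: 20*log10(4*pi*d/lambda)."""
    c = 3e8                                    # speed of light, m/s
    wavelength = c / freq_hz
    return 20.0 * np.log10(4.0 * np.pi * distance_m / wavelength)

# Loss over the 30-cm path used in the simulation, swept over an illustrative band
freqs = np.linspace(0.5e9, 5e9, 10)
for f, loss in zip(freqs, free_space_loss_db(0.30, freqs)):
    print(f"{f/1e9:4.2f} GHz : {loss:5.1f} dB free-space loss")
```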

In Ketterl et al. (8), we presented the channel impulse response (CIR), which was calculated from the attenuation data found in the in vivo simulation experiments. A CIR provides a mathematical description of how a transmitted signal is affected in the time domain as it travels through the channel. The CIR therefore contains all the necessary information to characterize how a communication system will perform when operating in a certain channel. We compared the calculated CIRs of signals traveling through the side, front, and back of the body (Fig. 5) and found that the dispersion through the body is greatest when the signal passes from the inside of the body through the abdomen. The higher dispersion is most likely due to the fact that the RF signal encounters more organs (stomach, intestines, bladder, etc.) as it traverses the abdomen, which present a greater amount of frequency-dependent variations to the signal.
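To illustrate how a CIR is used, the sketch below convolves a test pulse with a hypothetical discrete multipath CIR; the tap delays and gains are invented for illustration and are not the HFSS-derived in vivo CIRs reported in (8).

```python
import numpy as np

# One sample = 1 ns in this sketch; the tap delays/gains below are illustrative only.
n = 200
taps = [(5, 0.50), (18, 0.25), (42, 0.10)]   # (delay in ns, linear gain)

cir = np.zeros(n)
for delay_ns, gain in taps:
    cir[delay_ns] = gain

tx = np.zeros(n)
tx[:10] = 1.0                                # 10-ns rectangular test pulse

# The received waveform is the transmitted waveform convolved with the CIR
rx = np.convolve(tx, cir)[:n]
print("received pulse now spans roughly", taps[-1][0] + 10, "ns")
```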

These results show the importance of developing highly accurate channel models for the human body as an RF transmission medium, which will allow us to refine and utilize these channel models so that external receivers can be optimally placed, and optimal radio transmitters and receivers can subsequently be designed. Furthermore, the results obtained will be used to develop reliable communication algorithms and protocols for networked embedded MARVEL units.

Figure 5. HFSS channel impulse response for the human body for different locations of the receiver.

High-Definition Video Transmission Design and Modeling

An example of how the in vivo channel model can aid in the development of optimized communication devices is the transmission of HD video through the human body. Many challenges arise when attempting to transmit HD video wirelessly in vivo: the output of most HD sensors is digital with data rates above 1 Gbps, which would require large bandwidths to modulate and transmit wirelessly; bit rate reduction through software compression could be used to reduce the required transmission bandwidth, but at the cost of introducing undesired video latency; and biological tissues are very dispersive to RF signals (dielectric and conductive properties vary with frequency), which can cause signal distortion during HD transmission. Initially, HD video transmission will convert the raw digital HD signal to its three analog component signals (Y, Pb, Pr), which are frequency multiplexed onto a single carrier using frequency modulation (FM). The advantage of this approach is that the bandwidth requirements are reasonable (<100 MHz), signal processing is simplified, and the complexity of the transmitter and receiver is minimized.
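To put the “above 1 Gbps” figure in perspective, a back-of-the-envelope estimate for an uncompressed 1080p30 stream is sketched below; the 24 bits-per-pixel assumption is ours, and the exact rate depends on the sensor’s output format.

```python
# Rough raw data rate for uncompressed 1080p30 video (assumed 24 bits per pixel)
width, height, fps, bits_per_pixel = 1920, 1080, 30, 24
raw_bps = width * height * bits_per_pixel * fps
print(f"~{raw_bps / 1e9:.2f} Gbps uncompressed")   # ~1.49 Gbps
```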

The system block diagram of the proposed HD transmission of the frequency-multiplexed analog component signals is shown in Figure 6. Before actual implementation and testing of the proposed concept, software simulations were performed in which the drive signals used in the simulation were captured analog HD outputs (Y, Pb, Pr) from an HD video camera. The analog components were FM modulated to carrier frequencies of 1.9, 1.03, and 1.06 GHz and transmitted on a single channel in the simulation.
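A minimal sketch of the frequency-multiplexing idea follows: each analog component is FM-modulated onto its own carrier and the three modulated waveforms are summed into one transmitted signal. The carrier values, deviation, and test tones below are illustrative stand-ins, not the settings used in the ANSYS simulation.

```python
import numpy as np

fs = 10e9                                   # simulation sample rate (illustrative)
t = np.arange(0, 2e-6, 1/fs)                # 2 µs of signal

def fm_modulate(msg, fc, kf, t, fs):
    """FM-modulate a baseband message onto carrier fc with deviation gain kf (Hz per unit)."""
    phase = 2*np.pi*fc*t + 2*np.pi*kf*np.cumsum(msg)/fs
    return np.cos(phase)

# Stand-ins for the Y, Pb, Pr component signals (test tones, illustrative only)
y  = 0.7*np.sin(2*np.pi*1e6*t)
pb = 0.5*np.sin(2*np.pi*2e6*t)
pr = 0.5*np.sin(2*np.pi*3e6*t)

kf = 4e6                                    # assumed frequency deviation
carriers = [1.00e9, 1.03e9, 1.06e9]         # closely spaced carriers (illustrative values)

# The transmitted waveform is the sum of the three FM-modulated components
tx = sum(fm_modulate(m, fc, kf, t, fs) for m, fc in zip([y, pb, pr], carriers))
```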

Figure 6. Analog HD video transmitter and receiver architecture.

Once a channel model has been derived from simulations, it can be implemented in the system design to model the behavior of the multiplexed analog HD signal after it has been transmitted through the body model, using the system architecture shown in Figure 6. The input and output signals were compared in the software, and it was observed that very little distortion and reduction in signal quality of the received signal occurred after transmission through the human body in simulation. Figure 7 shows a portion of the analog Y component comparing the transmitted and the reconstructed received signals. Very little latency (~0.1 µs) can also be seen in the plot.

Figure 7. Comparison of the input Y component at the transmitter (Tx) and the reconstructed output from simulations performed in ANSYS Designer.

MARVEL CONCEPT AND DESIGN

MARVEL System Overview

The MARVEL System consists of a wireless HMI and a wireless laparoscopic CM attached to the abdominal wall, as shown in Figure 8. An insertion/removal tool (IRT) (not shown) will be used to i) insert the CM through a trocar port, ii) attach the MARVEL CM to the abdominal wall, and iii) remove the CM on completion of the operation. The IRT will i) supply power to the CM during insertion and removal, ii) align the CM so the illumination and video imaging subsystems enter the trocar port first, providing video imaging during the insertion process, iii) secure the needle within the IRT to preclude inadvertent needle contact during insertion and removal, iv) rotate the CM to a position perpendicular to the abdominal wall, and v) push the needle through the abdominal wall, holding the CM in place while the attachment module (AM) secures the CM in place. Power is supplied to the CM by the AM, allowing uninterrupted CM operation.

Figure 8. MARVEL camera module after attachment to abdominal wall.

The surgeon interacts with the system through a standard joystick and a LabVIEW application running on a standard computer. The joystick controls the direction of motion of the CM through the available hemisphere of motion, and the velocity of pan and tilt motion. Every command from the joystick is interpreted into specific control instructions that are pushed out in a datagram. The instructions are then relayed to a 900-MHz ISM-band digital transceiver, where they are wirelessly transmitted to the MARVEL CM. The wireless datagram is processed by the embedded microcontroller (MCU) in the CM and divided into pan, tilt, light intensity, and image sensor control commands. For every command, the MCU drives control signals for all of the internal modules. The wireless video from MARVEL is continuously transmitted through an analog wireless interface to the wireless HMI and displayed with near-zero latency on high-resolution monitors. A block diagram of the MARVEL control system is shown in Figure 9.
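The control path can be pictured as in the sketch below: joystick readings are mapped to pan/tilt/illumination/image-sensor fields, packed into a small datagram for the 900-MHz transceiver, and unpacked by the CM’s MCU into per-module commands. The field layout and names are hypothetical, for illustration only.

```python
import struct

# Hypothetical datagram layout: pan velocity, tilt velocity (signed bytes),
# LED intensity (0-255), image-sensor command code (0-255)
FMT = ">bbBB"

def pack_command(pan_vel, tilt_vel, led_level, sensor_cmd):
    """Build a control datagram for the 900-MHz ISM link (illustrative layout)."""
    return struct.pack(FMT, pan_vel, tilt_vel, led_level, sensor_cmd)

def unpack_command(datagram):
    """What the CM's MCU would do on reception: split into per-module commands."""
    pan_vel, tilt_vel, led_level, sensor_cmd = struct.unpack(FMT, datagram)
    return {"pan": pan_vel, "tilt": tilt_vel, "led": led_level, "sensor": sensor_cmd}

# Example: joystick pushed right at half speed, LED at ~80%, no sensor command
msg = pack_command(64, 0, 204, 0)
print(unpack_command(msg))
```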

MARVEL has been designed to address the following functional requirements:

• 10 × 42-mm camera housing platform
• Wireless actuator control
• Wireless illumination control
• Enhanced view inside abdominal cavity
• Needle power and anchor subsystem
• Wireless and cable-free videoscope
• Wireless HMI
• 1080p high-definition video, 30 fps, near-zero (1 ms) latency displayed video

In order to design a reusable modular research platform where new functionality can be added and tested efficiently, a 3× size model was designed, taking care to ensure that the electronics could fit in a 1× (~10 mm) commercialized CM. The 3× MARVEL CM is 105 mm long with a diameter of 30 mm.

Figure 9. Block diagram of the MARVEL control system.

A MARVEL CM, as illustrated in Figure 10, is composed of five subsystems: the illumination subsystem provides light inside the abdominal cavity; the vision subsystem provides the video resolution and an optimal focus range through the use of a fixed lens combined with a remote, manually controlled focus lens; the wireless communication subsystem handles the control commands and video between the device and the HMI; the embedded control subsystem handles the control decision making for the device; and the attachment needle power subsystem secures the CM to the abdominal wall and powers the CM.

Figure 10. MARVEL: CAD model of the camera module.

The vision subsystem, shown in Figure 11, includes the lens, the lens holder, and the video image sensor. Additionally, the system is optimized with a manual autofocus feature using a voltage-controlled variable lens. This liquid lens uses a unique, patented technology to transform a simple liquid crystal (LC) cell into a variable focus lens (9). This device is a replacement for mechanically moving lens solutions. Lens focus is controlled by applying a small control voltage to dynamically change the refractive index of the material that the light passes through. The block diagram of the vision subsystem control feature is shown in Figure 12.
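One way to picture the focus control is a mapping from a normalized focus command, sent over the HMI link, to a drive voltage within the lens’s allowed range; the voltage limits below are placeholders, not the actual lens specification.

```python
def focus_command_to_voltage(cmd, v_min=2.0, v_max=10.0):
    """Map a normalized focus command in [0, 1] to a liquid-lens drive voltage.

    v_min/v_max are placeholder limits; the real lens has its own rated range.
    """
    cmd = min(max(cmd, 0.0), 1.0)        # clamp the HMI command
    return v_min + cmd * (v_max - v_min)

print(focus_command_to_voltage(0.25))    # -> 4.0 V (illustrative)
```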

The CM vision subsystem also has a digital zoom capability. Zoom is provided through software that postprocesses the transmitted image by expanding the pixel area of view of the original image. This feature will also be remotely controlled by the surgeon using the CM control joystick. Since standard definition quality video is being used in these initial tests, the zooming range will be limited due to losses in picture quality at higher zooms. Therefore, it is critical that high-definition video be implemented in future CM devices.
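The postprocessed digital zoom can be sketched as cropping a centered sub-window of the frame and upscaling it back to full size, as below; nearest-neighbor upscaling is used only for brevity, and the actual software may use a better interpolator.

```python
import numpy as np

def digital_zoom(frame, zoom):
    """Crop the centered 1/zoom window of a (H, W) frame and upscale it back to (H, W).

    Nearest-neighbor upscaling via np.repeat keeps this sketch dependency-free;
    zoom must divide the frame dimensions for this simple version.
    """
    h, w = frame.shape[:2]
    ch, cw = h // zoom, w // zoom                  # size of the cropped window
    y0, x0 = (h - ch) // 2, (w - cw) // 2          # top-left corner of the centered crop
    crop = frame[y0:y0 + ch, x0:x0 + cw]
    return np.repeat(np.repeat(crop, zoom, axis=0), zoom, axis=1)

frame = np.random.default_rng(0).integers(0, 256, size=(480, 640), dtype=np.uint8)
print(digital_zoom(frame, 2).shape)                # (480, 640): 2x zoom, same output size
```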

Interconnection and Wireless Communication Subsystem

The MARVEL CM is comprised of five PCBs in a modular design (Fig. 13). Each PCB has two connectors that allow for easy integration of new and/or additional designs into the system. The interconnection system provides two digital control buses throughout the PCB stack.

Figure 13. Top: CAD PCB description. Bottom left: MARVEL CM PCB research platform electronics stack. Bottom right: Assembled tilt cylinder.

MARVEL uses two different links for wireless communications. The first link is for control; the second is a wireless transmitter that provides an analog video link between the video image sensor and an external receiver with near-zero latency. Since the PAL video signal requires less than 8 MHz of bandwidth, a frequency-modulated (FM) transmitter was designed with a carrier frequency in the 1.1 to 1.3 GHz range and an output power of 10 dBm. The transmit frequency is easily tuned to any point within the chosen frequency range. This ability allows the use of more than one MARVEL CM during a surgical procedure, whereby each CM transmits at a different frequency with enough band separation to avoid interference. A detailed description of the MARVEL research platform can be found in (3).
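Since each CM’s FM video carrier is tunable across 1.1–1.3 GHz and the PAL signal occupies under 8 MHz, a simple frequency plan keeps simultaneous CMs separated by a guard band, as in the sketch below; the 20-MHz spacing is an assumed value for illustration, not a system specification.

```python
def assign_carriers(num_modules, band_start=1.1e9, band_stop=1.3e9, spacing=20e6):
    """Assign one FM video carrier per camera module with a fixed channel spacing.

    spacing is an illustrative guard value; it only needs to exceed the ~8-MHz
    PAL signal bandwidth plus margin while all carriers fit in the tunable band.
    """
    carriers = [band_start + i * spacing for i in range(num_modules)]
    if carriers[-1] > band_stop:
        raise ValueError("not enough band for the requested number of modules")
    return carriers

print(assign_carriers(2))   # e.g. two simultaneous CMs at 1.10 GHz and 1.12 GHz
```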

EXPERIMENT

An in vivo experiment was carried out in the USF Vivarium, and illumination, image quality, wireless communication, CM sealing, and wireless control of the CM were tested. In this initial test, manual autofocus and digital zoom were not implemented but will be tested in the future. Two CMs (Fig. 14) were inserted into a porcine subject and simultaneously tested. A “balloon,” shown on the left side, was used to seal the space between the cylinders.

Figure 11. Vision subsystem CAD model.

Figure 12. Block diagram of the vision subsystem control.

Two CMs were controlled simultaneously by the surgeons, and real-time images of internal organs were transmitted and recorded (Fig. 15). The surgeons intuitively adapted to the positions of each CM and effortlessly switched from one CM to the other, demonstrating that a physical alignment of each CM to the viewing perspective of the surgeon, at least in this case, was unnecessary. Future work with multiple CMs in vivo will include a positioning and alignment procedure to enable pixel edge matching of multiple images, creating a panoramic video view of the surgical site.

Normal physiologic motion of the porcine subject did not affect image stability. Future work will integrate motion detection and image stabilization to compensate for patient motion caused by the surgical team.

CONCLUSION

It has been demonstrated that optimization of networked robotic endoscopic systems requires a broad range of methodologies that, when integrated together, have the potential to significantly advance the field of minimally invasive surgery. The methodologies we demonstrated include software simulation, research platform development, and real-world testing.

Our knowledge of how wireless signals interact with the human body when traveling internally or transmitted from the inside to the outside of the body is still very limited. If reliable communication and data transfer between embedded medical devices, such as camera units, sensors, and actuators, is to be achieved, accurate in vivo channel models will need to be developed. Channel modeling through software simulations is one way of calculating highly accurate channel models, much more accurate than using physical models that do not provide frequency-dependent electrical properties.

With the use of the ANSYS HFSS simulation tool and its complete human body model, we showed the simulated dispersion characteristics as a signal travels from an internal transmitter to an external receiver. We also presented how this software tool can be used to simulate the behavior of high-speed signals, such as analog HD video, and can be used in other RF system simulators to further optimize the transceiver architecture. These results show the importance of developing highly accurate channel models for the human body as an RF transmission medium, which will allow us to refine and utilize these channel models so that external receivers can be optimally placed, and optimal radio transmitters and receivers can subsequently be designed.

Figure 14. Three MARVEL CMs with their respective attachment modules and power cables.

The research platform we developed, a wireless robotic laparoscopic imaging system (MARVEL), has been implemented with the goal of advancing minimally invasive surgery. The complex system design of the MARVEL platform has involved research in the areas of robotics, wireless communications, heat transfer, image processing, illumination engineering, and embedded systems, with the goal of implementing a broad range of functionality required for laparoscopic surgery (remote motion control, wireless video, and illumination) in a single device.

Figure 15. Two MARVEL camera modules are shown (left). Image of porcine intestines taken by a MARVEL CM (right). The surgeons have independent control of each CM.

To demonstrate the real-world capabilities of the research platform, two MARVEL CMs were successfully tested simultaneously in vivo in a porcine subject by a team of surgeons. The two CMs were inserted into the abdominal cavity of the porcine test animal, demonstrating individual control of each CM while simultaneously transmitting good quality images from each CM. Future work to improve the capabilities of the MARVEL modules will include wireless HD image transmission (again, with near-zero latency), autofocus, zoom, and image stabilization.

Furthermore, the robotic system described in this article is a proof-of-concept research platform designed to support a broad range of experiments in a range of domains for faculty and students in the Colleges of Engineering and Medicine and at Tampa General Hospital. This research is the first step in developing semiautonomous wirelessly controlled and networked laparoscopic devices to enable a paradigm shift in minimally invasive surgery and other healthcare domains.


ACKNOWLEDGMENTS: A color version of the images can be found at http://iwinlab.eng.usf.edu. The authors declare no conflict of interest.

REFERENCES

1. ANSYS. ANSYS HFSS. Retrieved December 2011, from http://www.ansoft.com/products/hf/hfss/

2. Berkelman, P.; Cinquin, P.; Troccaz, J.; Ayoubi, J.; Letoublon, C.; Bouchard, F. A compact, compliant laparoscopic endoscope manipulator. In: Proceedings of IEEE International Conference on Robotics and Automation. 2002:1870–1875.

3. Castro, C. A.; Smith, S.; Alqassis, A.; Ketterl, T.; Sun, Y.; Ross, S.; Rosemurgy, A.; Gitlin, R. D. MARVEL: A wireless miniature anchored robotic videoscope for expedited laparoscopy. Proceedings of IEEE International Conference on Robotics and Automation; 2012.

4. Hawks, J.; Rentschler, M.; Redden, L.; Infanger, R.; Dumpert, J.; Farritor, S.; Oleynikov, D.; Platt, S. Towards an in vivo wireless mobile robot for surgical assistance. Stud. Health Technol. Inform. 132:153–158; 2008.

5. Hu, T.; Allen, P. K.; Hogle, N. J.; Fowler, D. L. Insertable surgical imaging device with pan, tilt, zoom and lighting. In: International Conference on Robotics and Automation (ICRA), May 19–23; 2008.

6. Hu, T.; Allen, P. K.; Nadkarni, T.; Hogle, N. J.; Fowler, D. L. Insertable stereoscopic 3D surgical imaging device with pan and tilt. In: Proceedings of IEEE/RAS-EMBS International Conference on Biomedical Robotics and Biomechatronics (BIOROB), Scottsdale, AZ, USA; 2008:311–316.

7. IEEE P802.15 Working Group for Wireless Personal Area Networks (WPANs). Channel model for body area network (BAN). Retrieved November 2011, from http://www.ieee802.org/15/

8. Ketterl, T. P.; Arrobo, G. A.; Sahin, A.; Tillman, T. J.; Arslan, H.; Gitlin, R. D. In vivo wireless communication channels. IEEE Wireless and Microwave Technology Conference WAMICON; 2012.

9. Platt, S. R. Vision and task assistance using modular wireless in vivo surgical robots. IEEE Trans. Biomed. Eng. 56(6):1700–1710; 2009.

10. Tortora, G.; Dimitracopoulos, A.; Valdastri, P.; Menciassi, A.; Dario, P. Design of miniature modular in vivo robots for dedicated tasks in minimally invasive surgery. In: IEEE/ASME International Conference on Advanced Intelligent Mechatronics; 2011.

11. Wang, J. Radiation characteristics of ingestible wireless devices in human intestine following radio frequency exposure at 430, 800, 1200, and 2400 MHz. IEEE Trans. Antenn. Propag. 57(8):2418–2428; 2009.