FUSIMO Project, 270186, 2011‐12‐30 Page 1 of 29
D2.1 Draft – Architecture for hardware, software, and data
Authors: Christian Askeland, Jennifer Bieberstein, Thomas Langø,
Frank Lindseth, Michael Schwenke
Status: Released
Date: 2011‐12‐30
Project co‐funded by the European Commission Service under the 7th Framework Programme
Dissemination Level
PU Public
PP Restricted to other programme participants (including the Commission Services)
RE Restricted to a group specified by the consortium (including the Commission Services) X
CO Confidential, only for members of the consortium (including the Commission Services)
Table of Contents
Version history ................................................................................................................ 3
Overview and scope of this document ............................................................................ 3
Definitions....................................................................................................................... 4
Diagrams ................................................................................................................................. 4
Terminology ............................................................................................................................ 4
Architecture .................................................................................................................... 5
External Hardware connection ................................................................................................. 7
External Module connection .................................................................................................... 8
Hardware modules .................................................................................................................. 8
Data Processors ....................................................................................................................... 9
Use Cases ...................................................................................................................... 10
UC1 Dynamic sonication ........................................................................................................ 10
UC2 Planning of intervention using simulation ....................................................................... 12
UC3 Augmentation of MR thermometry with simulations ...................................................... 12
UC4 Monitoring of intervention progress ............................................................................... 13
Use Case Generic Tasks .......................................................................................................... 13
Sequence Diagrams ....................................................................................................... 17
Data Specification ......................................................................................................... 22
Volunteer data ‐ Generic motion model ................................................................................. 22
Cadaver data ......................................................................................................................... 23
Interface Specification ................................................................................................... 24
Motion Model ....................................................................................................................... 24
FUS Modeling ........................................................................................................................ 26
Functional Model (first draft sketch) ...................................................................................... 28
References .................................................................................................................... 29
Version history

Version  Who                                   When        What
0.01     Christian Askeland                    2011‐06‐10  Initial version based on WP2 Workshop, Bremen, 2011.05.25.
0.02     Christian Askeland, Michael Schwenke  2011‐06‐21  Updates based on mail correspondence
0.03     Christian Askeland                    2011‐06‐22  Sequence diagrams for UC1‐3, merged in Michael’s use cases.
0.10     Christian Askeland                    2011‐06‐29  Final version for MS2.
0.20     Thomas Langø, Frank Lindseth          2011‐12‐08  Changed previous version and integrated data specification part.
0.30     FME, SNF                              2011‐12‐30  Final version as D2.1 (M12)
Overview and scope of this document
This document describes the architecture for hardware, software and data for the FUSIMO
Demonstrator, and is an updated version of the MS2 document “Architecture and software
interfaces”.
It gives the project participants a common ground, and will serve as a basis for the first
implementation. All software components developed in the various work packages will
communicate using the interfaces described here.
The work is part of FUSIMO WP2, task 2.4, as described in Annex I. This version constitutes Deliverable 2.1 (December 2011): Draft ‐ Architecture for hardware, software, and data. It will be used as input particularly for deliverable D2.3, Architecture for hardware, software and data (MS4, M24).
The architecture has been developed from the clinical use cases and application scenarios. These
were transformed into software use cases during the FUSIMO workshop in Bremen in May 2011.
These use cases are described in this document, and sequence diagrams have been developed for
a selection of them. The sequence diagrams are used to detect software modules and
relationships between them. Based on this work, the architecture has been designed.
The document starts with the architecture and an overview of the hardware and software modules. Then the use cases and sequence diagrams are described, a data specification is provided, and finally the software module interfaces are specified.
Definitions
Diagrams
Figure 1. Diagram definitions.
The diagrams used in this document use an informal structure based on UML. They are intended to visualize the general ideas described in the text, not to serve as a formal definition. Figure 1 describes the meaning of the arrows used (data flow, command, dependency, inheritance, aggregation). The arrows show the main pathways; not all paths are shown, in order to avoid clutter.
Terminology

The table below defines terminology used in the document. Some of the concepts will be described in
more detail later.
Name Description
FUS Focused Ultrasound Surgery.
Patient Either a real patient to be treated by FUS (which is out of the scope
of FUSIMO), a volunteer (without a malignant structure), or a Thiel
embalmed cadaver.
MeVisLab The chosen platform for software integration.
www.mevislab.de
MeVisLab Module A piece of software compatible with/contained in MeVisLab.
External Module A software module provided externally (i.e. outside MeVisLab).
DataProcessor A MeVisLab module acting as a wrapper around an External Module.
The module handles all interfacing details, such as language or
machine boundaries.
Controller A MeVisLab module able to perform work on top of DataProcessors
and Hardware. A network of controllers manages the flow of the
FUSIMO demonstrator.
Hardware Interface A MeVisLab module that connects to either a hardware component
(UsProbe, MRI machine etc.) or that is a dummy interface to such
hardware.
PaSM The Patient Specific Model, providing persistent storage for all data.
In MeVisLab, this is implemented within the MAF framework.
MAF MeVis Application Frame
Simulation mode A demonstrator mode where the hardware devices are simulated
instead of connected to physical devices.
Sonication The actual process of introducing US‐energy into the body. The
entire intervention consists of many sonications, each ablating a
subvolume of the specified target volume.
Target Position A position in the static (time‐invariant) model that is scheduled for
sonication. A derivative of this is the Motion‐Corrected (time‐
variant) target position.
Target volume Entire volume to be ablated, i.e. the tumor with safety margins. The
target volume will be ablated in a sequential manner. Each
sonication ablates a subvolume of the target volume; this is called a Sonication Volume.
Sonication Volume The volume being ablated when performing sonication at a given Target Position. The union of the Sonication Volumes makes up the Target Volume. The Sonication Volume is specified by a FUS‐probe‐specific ellipsoid oriented based on the probe position, or by a more precise definition in the future.
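As a small illustration of these two terms, the relationship between Sonication Volumes and the Target Volume could be sketched as follows. This is a simplified sketch only; the class and function names are illustrative and not part of the FUSIMO interfaces:

```python
from dataclasses import dataclass

@dataclass
class SonicationVolume:
    """One sonication, approximated by a probe-specific ellipsoid
    oriented at the (motion-corrected) Target Position."""
    center: tuple  # (x, y, z) in mm
    radii: tuple   # semi-axes (rx, ry, rz) in mm, probe-specific

    def contains(self, point):
        # A point lies inside the ellipsoid when the normalized
        # quadratic form is <= 1.
        return sum(((p - c) / r) ** 2
                   for p, c, r in zip(point, self.center, self.radii)) <= 1.0

def target_covered(target_samples, sonications):
    """The Target Volume (here sampled as points) counts as ablated when
    every sample lies in the union of the Sonication Volumes."""
    return all(any(s.contains(p) for s in sonications) for p in target_samples)
```

For example, two overlapping sonications placed along a line can together cover target samples that neither covers alone.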
Architecture

Figure 2 shows the major components of the Demonstrator. The system is driven by a cluster of Controllers, all hardware is represented as Hardware Interfaces, and algorithmic processing is delegated to DataProcessors. All data used by the system is stored in the PaSM. In the top layer, the GUI and Visualization modules provide user interaction. A central idea is that the PaSM stores all data. This way, the Visualization modules need only connect to the PaSM and visualize whatever needs to be visualized at a particular time.
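This central-storage idea can be sketched minimally as follows. The names are illustrative only; the real PaSM is implemented within the MAF framework:

```python
class PaSM:
    """Sketch of the Patient Specific Model as the single data repository.
    Controllers and hardware interfaces write named data objects into it;
    other modules read from it."""
    def __init__(self):
        self._objects = {}

    def store(self, name, data):
        self._objects[name] = data

    def get(self, name):
        return self._objects[name]

class Visualization:
    """Visualization modules connect only to the PaSM and render whatever
    is requested at a particular time."""
    def __init__(self, pasm):
        self._pasm = pasm

    def render(self, name):
        # A real module would draw the object; here we just describe it.
        return f"rendering {name}: {self._pasm.get(name)}"
```

Because the Visualization module depends only on the PaSM, any module that stores data there (a Controller, a Hardware Interface) automatically makes it available for display.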
Figure 2. Major components of demonstrator.
Component Dependencies:
Figure 3. Dependency diagram for all modules.
Figure 3 shows how each module depends on the others. In this figure, an arrow indicates that one module knows the one it points to. For example, Controller knows PaSM, but PaSM does not know Controller. The dependencies are transitive (e.g. GUI knows Data).

Data (green): Data is a collection of interfaces shared across the system. Examples are an MR Volume or a RealTime Video Stream. They are used as a way to share information.

HW (yellow): Hardware modules are passive. They control the hardware they represent, but do not initiate contact with other parts of the system. They communicate with the rest of the
system through Data interfaces. Exception: Simulated HW may access Data and Data Processors in order to perform the simulation.

PaSM (green): All data objects are stored in the PaSM. It does not itself control any other parts of the system, but acts as a central repository.

Data Processors (orange): Data Processors only have access to Data, given to them by a Controller. DataProcessors can also be allowed to access the PaSM directly.
Controller (dark blue): Has access to: HW, PaSM, DataProcessors. Responsible for setup so
that all modules are able to perform their task with the given constraints. All actions are
initiated by a Controller, and data is distributed from the PaSM to the other modules as
needed. In the diagrams, the blue control arrow from the controller is only shown when
deemed necessary.
GUI (light blue): Has access to everything. The GUI acts as an interface to let the user
configure the controller of a use case, e.g. the user defines parameters for a therapy plan such
as the transducer type as input to the controller responsible for the simulation of a therapy
plan. Otherwise it is passive. The Controller sends data to the GUI that is to be visualized. This
is the reason for the link/arrow from the controller to the GUI in Figure 3.
External Hardware connection
Figure 4. Implementation of a hardware device as either a physical device or a simulation of the device.
Figure 4 shows (using the FUS probe as an example) how each of the hardware interfaces is
implemented. The figure uses UML arrow notation. There are two distinct implementations that
both inherit from the same interface, thus hiding the implementation from the rest of the system.
The two implementations are:
Physical: This is a connection to the real hardware device, and will contain adapter code
that interfaces to the driver, and creates a representation of the device on the local
system. This connection may be established using further mediating software, for example
the ExAblate software developed by FUSIMO.
Simulated: This provides a software model of the corresponding hardware that emulates the functionality of the hardware. This may be achieved using recorded data or simulations.
A discussion of each device can be found in section Hardware modules.
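The pattern of Figure 4 could be sketched as follows. This is a simplified Python sketch (the actual modules live in MeVisLab/C++), and the method names are hypothetical:

```python
from abc import ABC, abstractmethod

class FusProbe(ABC):
    """Common interface; the rest of the system sees only this class and
    never knows whether the probe is physical or simulated."""
    @abstractmethod
    def set_beam_parameters(self, params): ...

    @abstractmethod
    def apply_fus(self): ...

class PhysicalFusProbe(FusProbe):
    """Adapter around the vendor driver (injected; a stub in tests)."""
    def __init__(self, driver):
        self._driver = driver

    def set_beam_parameters(self, params):
        self._driver.upload(params)

    def apply_fus(self):
        self._driver.fire()

class SimulatedFusProbe(FusProbe):
    """Software model of the probe: it records the parameters so that a
    simulation module can use them instead of firing real hardware."""
    def __init__(self):
        self.params = None
        self.sonications = 0

    def set_beam_parameters(self, params):
        self.params = params

    def apply_fus(self):
        self.sonications += 1
```

Since both implementations inherit from the same interface, a Controller written against `FusProbe` runs unchanged in online and in simulation mode.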
External Module connection
Figure 5. Encapsulation of an external module inside a DataProcessor.
Figure 5 shows (using the Motion Modelling as an example) how data processing modules are
integrated into the system. The external Module (shown in red) is created outside of MeVisLab
and can be anything, for example a process accessed using TCP/IP, a MatLab program, or a C++
library written by another (Non‐MeVis) entity. The task of the MeVisLab module (shown in orange)
is to hide the connection details and provide an interface suitable for integration in the
Demonstrator. The MeVisLab module is a data processor in this regard.
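A DataProcessor of this kind might look as follows. This is a sketch: the transport callable stands in for whatever connection (TCP/IP, MatLab, a C++ binding) actually reaches the external module, and all names are illustrative:

```python
class MotionModelProcessor:
    """Sketch of a DataProcessor wrapping an External Module. The transport
    (any callable taking a command and a payload) hides language and machine
    boundaries, so callers never see the connection details."""
    def __init__(self, transport):
        self._call = transport

    def adapt_to_patient(self, mr_volume):
        """Ask the external module to adapt its model to patient data."""
        return self._call("adapt", mr_volume)

    def predict_motion(self, respiratory_state):
        """Ask for a motion prediction for the current respiratory state."""
        return self._call("predict", respiratory_state)

def local_stub(command, payload):
    """Stand-in for the external module, usable in demonstrator mode."""
    return {"command": command, "input": payload, "status": "ok"}
```

Swapping `local_stub` for, say, a TCP/IP client changes nothing for the callers, which is exactly the encapsulation Figure 5 describes.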
Hardware modules

This is a brief description of the hardware devices available to the Demonstrator.
Name Description
Preoperative Imaging This represents image data acquired outside the Demonstrator. The module provides an interface to the file data and is able to load it into the system in a format known to the PaSM.
Simulation: No simulation needed, as the data is generated externally.
MRI Imaging The 4DMRI and thermometry scanner integrated in the ExAblate system.
It is used to intraoperatively monitor tissue ablation.
Simulation: 4DMRI is not available during simulation. Use Preoperative
imaging to simulate thermal map or use real thermometry data.
Thermometry could be simulated by generating a thermal map during
simulated sonication as shown in Figure 10.
US Imaging A 2D/3D/3D‐biplane/4D US imaging probe used to acquire real‐time
image data from the patient prior to and during sonication. The probe
could potentially be held by the robot arm. The absolute position, which
is dependent on the Robot position, is available through the interface.
Simulation: The images will be simulated by playing a video or a timed
image sequence of a previously recorded imaging session.
Robot The INNOMOTION robot could be used to position the US Imaging probe
and the FUS probe relative to the patient. Robot control is considered a
manual task outside the scope of the Demonstrator. The position is
available through the interface.
Simulation: The robot simulation stores a dummy position. A user
interface for setting the position will be available.
FUS Probe The ExAblate FUS Probe used to perform sonication. The probe is
attached to the Robot. The absolute position, which is dependent on the
Robot position, is available through the interface. Setting of beam
parameters is also available.
Simulation: The simulated FUS probe will have an interface for setting
parameters like element phases and power and will provide an interface
to access these parameters. The FUS modeling Data Processor, Figure 10,
uses these parameters for simulation.
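The simulation strategy described for US Imaging above, replaying a timed, previously recorded image sequence, could be sketched as follows (illustrative names; frames here are placeholder strings):

```python
class SimulatedUsImaging:
    """Simulated US Imaging interface: instead of streaming from a physical
    probe, replays a previously recorded, timed image sequence."""
    def __init__(self, recording):
        # recording: list of (timestamp_seconds, frame) pairs, sorted by time
        self._recording = recording

    def frame_at(self, t):
        """Return the most recent recorded frame at playback time t."""
        current = self._recording[0][1]
        for timestamp, frame in self._recording:
            if timestamp <= t:
                current = frame
            else:
                break
        return current
```

A consumer polling `frame_at` at its own rate sees the recording as if it were a live stream, which is all the rest of the Demonstrator needs.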
Data Processors
Name Description
Structure Extraction Segmentation algorithms for extracting organ/vessel/tumor
geometries. This is a toolbox available to the user and controllers.
Structures that lie in or near the beam path will need segmentation,
such as the ribs, major vessels, and critical organs. Existing
algorithms from the MeVisLab platform will be used. Provided by WP
6.
Motion Model Provides a time‐varying model of the organ geometry, based on an internal generic patient model, preoperative MRI data from the patient, and intraoperative US data. Provided by WP3.
Therapy Planning Handles the target volume and the sonication volumes. Generates a
plan (sequence of sonication volumes) based on a target volume and
possible entry points. Provided by WP6.
FUS Modelling Simulations of acoustic beam, heat transfer and thermal distribution
in heterogeneous volume are basic building blocks for FUS treatment
prediction. Provided by WP4.
Functional Organ
Modelling
Modelling of the effects of heating in the body. Modelling of
perfusion of the target organ. Provided by WP5.
Therapy prediction and
monitoring
Simulation of an entire therapy. The processor uses the plan
generated by Therapy Planning as a starting point, and then uses FUS
Modelling and Functional Organ Modelling to predict effects.
Provided by WP6.
Computational service
The FUSIMO partners develop a library for simulation of high intensity focused ultrasound
interventions. The library is written in C++ and contains the core application logic. Around this
library, accessor modules will be implemented.
First of all, the library is made accessible to MeVisLab as the primary development platform for the
FUSIMO demonstrator application. Furthermore, there are several approaches that can be
followed for a computational service in FUSIMO. Performance tests will be conducted to decide
which approach is most suitable for the FUSIMO setting.
Thin client architecture
At the clinical site a thin application is used for visualization and GUI interactions. This thin
application calls computational methods of a computational server. Volume rendering, in this
scenario, will be done on the server, as it otherwise would require large data transports. This
approach is most suitable for large distances between client and server, for example if
communication is done over the internet.
Server based computing
In this setting, the application at the clinical site covers a larger part of the application logic. For
example the planning of the intervention and volume rendering may be done in this application.
The server does the computation of the simulation results. Thus, there is more transfer of large image data than in the thin client architecture, and this architecture is better suited to server and client residing in the same local network.
Web service
The FUSIMO C++ library could furthermore be made accessible as a web service using available
web service frameworks if there are use cases for such a service.
Architecture and hardware
A high‐performance computer located at Fraunhofer MEVIS will be used for computations.
Use Cases

Descriptions of all use cases, broken down into tasks. The use cases serve as a starting point for
designing the system.
The use cases are presented first. They are described using finer-grained tasks. A complete list of tasks is given at the end of the chapter, grouped by their purpose.
UC1 Dynamic sonication

About: Allow for tumor movement during a sonication.
There are two different systems to be taken into account:
The first system is an ideal system that is expected to be available in the near future. It is expected to allow real‐time control of the ultrasound probe. This system will probably be supported in simulation mode only, due to the availability of the probe, but is still considered in the design because it corresponds to the system FUSIMO aims to create.

The second system is the actual hardware system available in FUSIMO. It does not allow real‐time communication with the ultrasound probe and the MR scanner. All settings, such as the target position, need to be set in advance of a sonication. Furthermore, the robot carrying the ultrasound probe does not allow fast movements.

Prerequisite tasks (for both following use cases):
‐ T: Initialize system
‐ Patient lies in MRI
‐ T: Adaption of generic motion model to patient
‐ T: Register motion model to current patient positioning
‐ T: Set ultrasound probe position
‐ T: Set target area/position for a sonication
UC1a: Real‐time target adaptation

Remark: This use case will only be available in simulation mode due to hardware limitations.

Further prerequisite tasks:
‐ T: Acquire planning data

Tasks:
‐ T: Start sonication
‐ Repeat until end of sonication (time step as a parameter):
o T: Obtain current respiratory state from US stream
o T: Real‐time adaption of patient specific motion model to patient’s respiratory status
o T: Move target position one time step forwards based on a motion field
o T: Calculate transducer settings for a given probe and target position
o Set probe settings
o T: Apply ultrasound
o T: Stop sonication if area is ablated
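The UC1a loop above could be sketched as follows (simulation mode only; all collaborators are hypothetical stand-ins for the US stream, motion model, probe and ablation check):

```python
def uc1a_sonication_loop(us_stream, motion_model, probe, area_ablated,
                         max_steps=1000):
    """One sonication in UC1a: repeatedly read the respiratory state,
    motion-correct the target, retune the probe and apply ultrasound,
    until the area is ablated (or a safety step limit is reached)."""
    steps = 0
    while not area_ablated() and steps < max_steps:
        state = us_stream()                # obtain current respiratory state
        target = motion_model(state)       # motion-corrected target position
        probe.set_beam_parameters(target)  # transducer settings for target
        probe.apply_fus()                  # apply ultrasound for one step
        steps += 1
    return steps
```

The time step of the real loop is a parameter of the controller; here it is implicit in how often the loop body runs.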
UC1b: Preset targeted sonication

Further prerequisite tasks:
‐ T: Set maximal sonication duration

Tasks:
‐ T: Record regular breathing motion pattern
‐ T: Calculate sequence of target positions based on breathing motion pattern
‐ For each target position
o T: Calculate transducer settings for a given probe and target position
o Save settings
‐ T: Push session settings to ultrasound probe
The following steps will be performed by the Insightec ExAblate system, possibly by
communication with the demonstrator system.
‐ T: Start sonication,
‐ Repeat until end of sonication:
o T: Obtain current respiratory state, e.g. from US stream
o T: Compare current breathing state with planned breathing state, abort if mismatch is
too large
o T: Apply ultrasound
o T: Request thermal information
o T: Update outcome estimate
o T: Stop sonication if area is ablated
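The planning half of UC1b could be sketched as follows. Function names and the phase representation are illustrative; only the 32-target-point limit comes from the probe's session interface:

```python
def plan_session(breathing_pattern, target_for_phase, max_targets=32):
    """Convert a recorded breathing motion pattern into the preset sequence
    of target positions pushed to the ultrasound probe. A session allows at
    most 32 distinct target points."""
    targets = [target_for_phase(phase) for phase in breathing_pattern]
    if len(set(targets)) > max_targets:
        raise ValueError("a session allows at most 32 distinct target points")
    return targets

def breathing_state_ok(planned_phase, current_phase, tolerance=0.1):
    """Abort criterion during sonication: the observed breathing state must
    stay close to the planned one."""
    return abs(planned_phase - current_phase) <= tolerance
```

During the sonication, the ExAblate system would step through `targets` while the comparison task checks `breathing_state_ok` and aborts on a mismatch.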
UC2 Planning of intervention using simulation

About: Using the system, a physician is able to plan a focused ultrasound surgery. The user can
locate the target area of ablation and several probe positions. The system potentially gives aid to
develop a treatment plan, i.e. a sequence of sonications with their parameters. The system
simulates the procedure to supply feedback about the expected outcome to the physician.
Prerequisite tasks:
‐ T: Acquire planning data
‐ T: Adaption of generic motion model to patient
‐ T: Possibly record regular breathing motion pattern
‐ T: (Automatic segmentation of required structures if possible)
‐ T: Initialize system,
‐ T: Load patient data set
Tasks:
‐ T: Segment lesion
‐ T: (Manual or semi‐automatic) segmentation of required structures
‐ T: Interactively set probe positioning
‐ Potentially:
o Iteratively
T: Automatically propose treatment plan(s)
T: Manually adjust treatment plans with visual feedback of outcome estimate
o T: Selection of one plan out of some possible plans
‐ Simulate the procedure for each planned sonication
UC3 Augmentation of MR thermometry with simulations

About: Monitor the effects of focused ultrasound surgery on the body. Monitor not only the target region but also the surrounding risk structures. The information given by sparse MR thermometry measurements is incorporated into the models describing the effects on the entire area of interest.
To demonstrate the feasibility of this feature, this can also be done retrospectively using
previously acquired MR thermometry data.
Prerequisite tasks:
‐ T: Start sonication
Tasks:
‐ T: Request thermal information
‐ T: Adaption of models to patient status (e.g. adjust US absorption parameters)
‐ T: Simulate
‐ T: Merge simulation results with MR thermometry, visualization
UC4 Monitoring of intervention progress

It has to be decided in the clinical requirements process whether this use case shall be a focus of
the project. If so, detailed planning of the required tasks will be performed.
About: Compare the actual progress of the intervention to the progress expected at that time
point. The expected progress is to be calculated by simulations based on the treatment plan
developed in UC2.
In particular, progress could be taken to be the ablated area at the current time point.
Tasks:
‐ T: Visualize deviations between plan and current state
Use Case Generic Tasks

The following is a list of tasks that are used by the use cases described above.
General tasks

T: Initialize system
‐ start system
‐ connect modules for either online mode (connected to hardware) or demonstrator mode
(emulation of all external hardware/effects in software)
‐ select a patient
o create a new patient dataset
o load a patient data set
Data handling tasks

T: Record regular breathing pattern

T: Acquire planning data
‐ anatomical image data
‐ perfusion data
Image processing tasks

T: (Automatic segmentation of required structures if possible)

T: Compute parameter maps for simulations
Visualization tasks

T: Visualize deviations between plan and current state
‐ Visualize deviations from planned progress. This could be realized as a marking of under‐ and over‐ablated (the latter meaning ablated although not expected to be) regions.

T: Visualize simulation results and MR thermometry
Ultrasound probe tasks

T: Start sonication

T: End sonication

T: Push session settings to ultrasound probe
‐ a session allows setting of 32 target points and an arbitrary sequence of these points over time
Mode‐dependent tasks

T: Apply ultrasound
‐ online mode
o apply ultrasound with current settings to patient
‐ demonstrator mode
o T: simulate ultrasound field in real time
T: Request thermal information
‐ online mode
o get MR thermometry data
‐ demonstrator mode
o T: simulate thermal field in real time
o Emulate MR thermometry by giving only information on some image slices
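This mode-dependent behaviour could be sketched as follows (the callables are hypothetical stand-ins for the MR thermometry interface and the thermal-field simulation):

```python
def request_thermal_information(mode, mr_thermometry=None,
                                simulate_thermal_field=None, slices=None):
    """Sketch of 'T: Request thermal information', dispatched on the mode.
    Online mode reads MR thermometry; demonstrator mode simulates the
    thermal field and, to emulate MR thermometry, returns only some slices."""
    if mode == "online":
        return mr_thermometry()
    if mode == "demonstrator":
        field = simulate_thermal_field()  # dict: slice index -> image
        keep = slices if slices is not None else list(field)
        return {index: field[index] for index in keep}
    raise ValueError(f"unknown mode: {mode!r}")
```

Callers see the same return shape in both modes, so controllers written against this task do not need to know which mode the Demonstrator is in.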
Motion model tasks

T: Adaption of generic motion model to patient

T: Register motion model to current patient location/orientation on the table

T: Obtain current respiratory state from US stream

T: Real‐time adaption of patient‐specific motion model to patient’s respiratory status
‐ possible outputs:
o entire vector field of motion
o tumor motion
‐ done internally:
o get stream of US (tracking) data (feature positions) from PaSM
o best just to get a handle and the stream goes directly from US tracking to patient specific motion model
T: Move target area/point one time step forwards based on a motion field

T: Calculate sequence of landmark positions based on breathing motion pattern
Simulation tasks

T: Calculate transducer settings for a given probe and focus position

T: Update outcome estimate
‐ in the simplest version this means computing a thermal dose measure based on the current and historical thermometry data

T: Most accurate simulation of a treatment plan
‐ may take time

T: Simulate
‐ T: simulate ultrasound field
‐ T: simulate thermal field

T: Very fast (instantaneous) simulations
‐ Simulate with focus on time, possibly with less accuracy.

T: Adaption of models to patient status (e.g. adjust US absorption parameters)
‐ Adapt based on sparse information, e.g. only some slices of thermometry data

T: Merge simulation results with MR thermometry, visualization
‐ (If the models are fitted well to the sparse data, this step will not be needed because the simulations will be equal (in some sense) to the given sparse data.)
Decision tasks

T: Stop sonication if area is ablated

T: Compare current breathing state with planned breathing state, abort if mismatch is too large
Planning tasks

T: Automatically propose treatment plans
‐ compute a proposed intervention plan automatically
‐ results:
o probe positions/orientation
o transducer settings
o sequence of sonications
‐ [ possible automatic optimization ]
Interaction tasks

T: Load patient data set

T: Set ultrasound probe position
‐ This task is solely the setting of the position, not an estimation of whether the position allows for a successful sonication

T: Decide between multiple possible probe positions
‐ Interactively set the positions while the system gives feedback on whether this position allows for a successful sonication

T: Set target area/position for a sonication

T: Set energy level

T: Set maximal sonication duration

T: Segmentation of required structures
‐ skin line
‐ ribs
‐ bowel/lung
‐ bile ducts
‐ gallstones
‐ major blood vessels
‐ mediastinum
‐ stomach
‐ colon
‐ spine
‐ diaphragm
‐ pancreas
‐ kidney
‐ either automatically or semi‐automatically

T: Segment lesion

T: Manually adjust treatment plans with visual feedback of outcome estimate
‐ adjust positions, sequence, etc.
‐ uses T: Very fast (instantaneous) simulations

T: Selection of one plan out of some possible plans
‐ based on some criteria
‐ could use T: Most accurate simulation of a treatment plan
Sequence Diagrams

UC1a: Real‐time targeted sonication

This is sonication using a theoretical probe that can change target position in real time. The use
case can be summarized as follows:
1. Initialize system. Create objects and setup basic connections.
2. Import preoperative data.
3. Load data, connect real‐time US and adapt the MotionModel to the Patient.
4. Plan the sonication by positioning robot and define target position.
5. Continuously until sonication volume is ablated:
5.1. Update the MotionModel according to real‐time data.
5.2. Find the motion‐corrected target position.
5.3. Apply FUS to the target position.
This also includes all initialization of the Demonstrator.
The diagrams below describe the object interactions during the various steps.
Figure 6. Load data step.
The action items of Figure 6 can be described as follows:
Either preoperative data is loaded (Figure 6, step 1), or intraoperative MRI is acquired. The
resulting data is stored in the PaSM. Then the organs of interest are segmented (2)
(manually or automatically), and stored in the PaSM.
The US Imaging source is connected to the Motion Modeling component. US images are
inserted into the PaSM (Figure 6, step 3), and also sent to MotionModeling (4), where they
are made available to the underlying Module.
All data of interest is fed into the Motion Model (4), which is asked to adapt to the data (5). This creates a Patient Specific Model from generic and patient‐specific data. This model can be used to predict movement.
Figure 7. The planning step.
The action items of Figure 7 can be described as follows:
The robot is moved manually to a suitable position (1).
The user selects a target position or a target volume. This is handled by the Therapy
Planning Component, which stores its data as a Target Volume in the PaSM (2).
Figure 8. The sonication step of UC1a.
Figure 9. Check for ablation completed.
The action items of Figure 8 and Figure 9 can be described as follows:
Get a target position from the Therapy Planning Component (8:1).
Correct the position for movement (8:2), then get probe parameters for that position (8:3)
and send them to the FUS Probe (8:4), which in turn applies HIFU to the tissue.
Monitor the process by requesting thermometry data from the MRI (9:1). Visualize it (9:2),
and use Functional Modelling to predict whether the target position has been successfully
ablated. Inform the user or stop (9:3).
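The sonicate-and-monitor cycle above can be sketched as two small functions. All component classes and method names are hypothetical stand-ins for the module interfaces, not the FUSIMO API.

```python
# Hypothetical sketch of the UC1a sonication and monitoring steps.
# All names are illustrative assumptions.

def sonicate(therapy_planning, motion_model, fus_model, fus_probe):
    """One sonication cycle (Figure 8, steps 1-4)."""
    target = therapy_planning.get_target_position()            # 8:1
    corrected = motion_model.get_predicted_correction(target)  # 8:2
    params = fus_model.get_probe_parameters(corrected)         # 8:3
    fus_probe.apply_fus(params)                                # 8:4

def monitor(mri, visualization, functional_model):
    """One monitoring cycle (Figure 9, steps 1-3); True when ablated."""
    thermal = mri.read_thermal_field()                         # 9:1
    visualization.show(thermal)                                # 9:2
    return functional_model.is_ablated(thermal)                # 9:3
```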
[Figure 8 sequence diagram: Sonication Controller with Therapy Planning, Motion Modeling, FUS Modeling and FUS Probe lanes; actions: 1) get target position; 2) update Motion Model, get predicted correction for target position; 3) get FUS probe parameters for position; 4) set params, apply FUS. Figure 9 sequence diagram: Sonication Controller with MRI Imaging, Functional Modeling and Visualization lanes; actions: 1) read thermal field; 2) visualize thermal field; 3) warn/stop if ablated.]
Figure 10. Simulation of FUS and thermometry.
Figure 10 shows what happens in the hardware layer when we operate in simulation mode. In this
case, the thermal field is simulated, or retrospective thermometry data is loaded if available.
UC1b: Preset targeted sonication
This is sonication using the currently available FUS probe, which can step through 32 predefined
positions to allow following a moving target.
This use case can be summarized as follows:
1. Initialize as in UC1a steps 1‐3.
2. Get a breathing pattern for the target position. Set the resulting time‐variant target position in
the FUS probe. The probe position will move according to this pattern. (Figure 11)
3. Continuously until sonication volume is ablated (Figure 12 and Figure 9):
3.1. Update the MotionModel according to real‐time data.
3.2. Find the motion‐corrected target position. Abort if mismatch with current FUS probe
position is too large.
3.3. Apply FUS.
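Steps 3.1-3.3 above form a control loop that can be sketched as follows. The names and the mismatch threshold are illustrative assumptions, not the actual module interfaces.

```python
# Sketch of the UC1b sonication loop (step 3 of the use case).
# All names and the mismatch check are illustrative assumptions.

def uc1b_loop(motion_model, probe, target, max_mismatch, n_cycles):
    for t in range(n_cycles):
        motion_model.update()                                # 3.1 real-time update
        wanted = motion_model.corrected_position(target, t)  # 3.2 motion-corrected target
        if abs(wanted - probe.current_position(t)) > max_mismatch:
            return "aborted"                                 # mismatch too large
        probe.apply_fus()                                    # 3.3 apply FUS
    return "completed"
```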
[Figure 10 diagram: FUS Modeling receives the probe settings, 1) sets params and applies FUS via the FUS Probe, 2) requests the thermal field from MRI Imaging, and returns a thermal map with the same spatial and temporal resolution as MR thermometry.]
Figure 11. The preset positions step of UC1b.
Figure 12. The sonication step of UC1b.
UC2: Planning of intervention
Using the system, a physician is able to plan a focused ultrasound surgery.
This use case can be summarized as follows:
1. Initialize as in UC1a steps 1‐2.
2. Segment structures of interest, such as lesion and ribs.
3. Define target volume based on segmentation data, suggest several possible robot positions.
4. Iteratively until a plan is accepted by the physician (Figure 13):
4.1. Generate plan proposals.
4.2. Simulate proposals and visualize outcome
4.3. Manually adjust plan proposals
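The iterative step 4 above can be sketched as a loop over plan proposals. The callables here are hypothetical stand-ins for the Planning Controller's collaborators.

```python
# Hypothetical sketch of the iterative planning loop (step 4 above);
# generate/simulate/visualize/adjust/accepted are illustrative stand-ins.

def plan_intervention(generate, simulate, visualize, adjust, accepted, max_iter=10):
    plan = generate()                  # initial plan proposal (4.1)
    for _ in range(max_iter):
        outcome = simulate(plan)       # simulate the proposal (4.2)
        visualize(outcome)             # visualize the outcome (4.2)
        if accepted(outcome):          # physician accepts the plan
            return plan
        plan = adjust(plan)            # manual adjustment (4.3)
    return None                        # no acceptable plan found
```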
[Figure 11 sequence diagram (preset): Sonication Controller with Therapy Planning, Motion Modeling, FUS Modeling and FUS Probe lanes; actions: 1) get target position; 2) update Motion Model, get time-variant target position; 3) get FUS probe parameters for all positions; 4) preset all positions, apply FUS. Figure 12 sequence diagram: Sonication Controller with Motion Modeling and FUS Probe lanes; actions: 1) update Motion Model, get motion-corrected target position; 2) get current target position; 3) abort if the current probe position differs from the predicted position.]
Figure 13. Create a treatment plan.
Data Specification
Detailed description of the data used in the FUSIMO project. The data will be implemented within the PaSM structure.
Volunteer data ‐ Generic motion model
4D MRI
Proposed MRI protocol:
bSSFP sequence
at least 2 mm in‐plane resolution, ideally isotropic
flip angle 70°
parallel imaging factor of 2 or 3 (depending on coil setup)
acquire 2D slices sequentially if no retrospective gating can be used
acquire 3D data when using retrospective gating on the breathing signal (will be ultrasound tracking data, once available)
acquire multi‐slice 2D data along with a respiratory signal for offline retrospective reordering
[Figure 13 sequence diagram: GUI and Planning Controller with Therapy Planning (Target Volume, Entry Points / Robot Positions, Patient Data, Motion Model, Simulation Therapy Plan), FUS Modelling, Functional Modelling, Therapy Prediction & Monitoring, and Visualization lanes; actions: 1) generate plan proposals; 2) simulate therapy plan; 3) visualize plan outcome after each adjustment; 4) adjust plan parameters; 5) select plan.]
Planning scan
Individualize motion model to patient ‐ 3D or 4D MRI
Proposed MRI protocol:
bSSFP sequence
Similar to the 4D‐MRI setup (see above)
Acquire only a limited number of slices (1 or 2?)
High temporal resolution
Several “typical” breathing cycles
Can maybe be replaced completely by US‐tracking (open question)
Segment risk and problematic structures (ribs, air, …)
Segment tumor
Segment different tissues? (liver, …?)
Proposed MRI protocol:
High‐resolution body protocol
Parallel imaging factor of 2 or 3
Potentially moving‐table technology
Discuss: single‐shot TSE or HASTE or bSSFP
Acquire in breath‐hold along with motion detection to align with the breathing phase
Segment vessels
Proposed MRI protocol:
Best performance after administration of contrast agent (not wanted in this project)
Test TOF‐angiography
Motion detection
US‐tracking data, biplane 2D, 3D + time (4D)
Navigator echoes
Breathing belt/pillow
Comparison of all available methods
Cadaver data
Individual motion model: 4D MRI
Planning scan ‐ see above
Motion tracking
Interface Specification
Detailed description of the interfaces to each external software module. The goal is to agree on specifications that both sides of the interface can program to, i.e. with respect to:
<description>,
<functional interface>, and
<data formats>
Motion Model
The motion modeling software as specified in WP3, provided by ETH. Refer to the Project Description for details.
Figure 14. The architecture from the Motion Modeling point of view. The operations are gathered from the sequence diagrams chapter. These operations form the basis for the interface specification.
Interface Specification

connectToRealTimeUS(RealTimeStream)
Set the US image stream, enabling the module to adapt itself to it.

setVolume(VolumetricImage)
setSegmentation(Segmentation)
Set patient data and segmented data describing the static model of the patient. Examples are MRI data and segmentations of the liver and kidneys.

adaptToPatientSpecificData()
Use all available patient-specific data and the generic model to create a patient-specific motion model.

bool isAdaptedToPatientSpecificData()
Return true if adapted to the patient-specific data.

Prediction getPredictedPosition(Position, Time)
Given a position in the static data set, provide a motion-corrected value for a given time. Note that the definition of Position/Prediction is not restricted to a single point; it can also be a volume.

MotionField getMotionField()
This interface is needed in case we want to visualize the motion field, e.g. if the physician wants to check whether it looks correct.

void adaptionFinished()
Signal emitted when the adaption is finished, as an alternative to polling isAdaptedToPatientSpecificData(). The exact nature of this callback is implementation-dependent.
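As a sketch of how both sides could program against this interface, the operations can be written down as an abstract base class. The camelCase names follow the table; the Python rendering, signatures, and type hints are assumptions for illustration only.

```python
# One possible Python rendering of the Motion Model interface table.
# Names follow the specification; signatures are assumptions.
from abc import ABC, abstractmethod

class MotionModelInterface(ABC):
    @abstractmethod
    def connectToRealTimeUS(self, stream): ...
    @abstractmethod
    def setVolume(self, volumetric_image): ...
    @abstractmethod
    def setSegmentation(self, segmentation): ...
    @abstractmethod
    def adaptToPatientSpecificData(self): ...
    @abstractmethod
    def isAdaptedToPatientSpecificData(self) -> bool: ...
    @abstractmethod
    def getPredictedPosition(self, position, time): ...
    @abstractmethod
    def getMotionField(self): ...
```

A concrete motion-modeling module would subclass this and fill in the behaviour, while clients such as the Demonstrator Controller depend only on the abstract interface.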
The following data types are used in the communication with the MotionModeling Module:

RealTimeStream (in)
A time-varying data stream. It is possible to retrieve data and positions for a given time interval from this stream.

VolumetricImage (in)
A representation of a volumetric data set (for example MRI).

LabelImage (in)
A representation of a segmentation, with the segmentation labelled with a specific value in each voxel (for example a tumor segmented from a volume).

Position (in)
A representation of one position, several positions, or a volume of positions (region of interest) in the static volume.

Time (in)
A definition of either a point in time or a time interval.

Prediction (out)
The result of a calculation in the MotionModeling module. It represents the predicted motion of a Position (i.e. a point or a volume) over a given time interval. This can be represented by a time-varying vector field; in the simple case, a vector field at one point in time.

MotionField (out)
The motion field returned by getMotionField(), intended for visualization.
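In the simple single-point case, a Prediction could be held as a mapping from time samples to displacement vectors. This concrete layout is purely an assumption for illustration; the specification also allows volumes and time intervals.

```python
# A minimal concrete representation of a Prediction for a single point:
# one displacement vector per sampled time (illustrative assumption).

def make_prediction(times, displacements):
    """Map each time sample to its displacement vector."""
    return dict(zip(times, displacements))

def apply_prediction(static_position, prediction, time):
    """Motion-correct a static position for the given time."""
    x, y, z = static_position
    dx, dy, dz = prediction[time]
    return (x + dx, y + dy, z + dz)
```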
FUS Modeling
The FUS modeling software as specified in WP4, provided by INS. Refer to the Project Description, or to the specification document “LEAN” INTERFACE FOR US MODEL by INS, for details.
Figure 15. The architecture from the FUS Modeling point of view. The operations are gathered from the sequence diagrams chapter. These operations form the basis for the interface specification.
Interface Specification

setTherapyPlan(TherapyPlan)
Set all probe positions and target positions describing the full path of the probe.

setDigitalPatient(VolumetricImage, TimeVariation, TempMap)
Set a 3D description of the volume of interest (MRI data), the movement of this data over time, and a temperature map describing the initial temperature distribution.

TemporalTempMap getTimeDependentThermalDistributionInVolume()
Calculate the heat transfer in a volume.
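The call sequence implied by this interface can be illustrated with a toy stand-in. The real module solves a heat-transfer model; the stub below merely returns one copy of the initial temperature map per time step, to show the data flow. All behaviour beyond the method names is an assumption.

```python
# Toy stand-in for the FUS Modeling interface; the thermal solver is
# replaced by a trivial placeholder (illustrative assumption only).

class FUSModelStub:
    def setTherapyPlan(self, therapy_plan):
        # All probe and target positions describing the probe path.
        self.plan = therapy_plan

    def setDigitalPatient(self, volume, time_variation, temp_map):
        # MRI volume, its movement over time, and the initial temperatures.
        self.volume = volume
        self.time_variation = time_variation
        self.temp_map = temp_map

    def getTimeDependentThermalDistributionInVolume(self, n_steps=3):
        # Placeholder: one copy of the initial map per time step; the
        # real module would compute heat transfer in the volume.
        return [list(self.temp_map) for _ in range(n_steps)]
```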
The following data types are used in the communication with the FUSModelling Module:

TherapyPlan (in)
All probe positions and target positions describing the full path of the probe.

VolumetricImage (in)
A representation of a volumetric data set (for example MRI).

TimeVariation (in)
A motion field describing the movement over time of the volume, as computed by the Motion Model.

TempMap (in)
The thermal distribution in the volume.

TemporalTempMap (out)
The thermal distribution over time in the volume.
Functional Model (first draft sketch)
The functional modelling software as specified in WP5, provided by FME. Refer to the Project Description for details.
Figure 16. The architecture from the Functional Model point of view. The operations are gathered from the sequence diagrams chapter. These operations form the basis for the interface specification.
Interface Specification

setCurrentTemperatureMap(tempMap)
Set an image with the current temperature inside the body.

setCurrentFunctionalityMap(functionalityVolume)
Set an image with the current state of the tissue inside the body.

setPerfusionMeasurement(perfusionMeasure)
Set perfusion measurements. A perfusion model is then initialized with these measures.

PerfusionVolume getCurrentPerfusionMap()
Get a volume giving the amount of perfusion at any point inside the volume, in ml/g/s.

setLargeVesselSegmentations(Segmentation)
Set a mask giving the large vessels transporting blood/bile/other fluids. This is needed to build the supply and drainage model. Different vessel trees need to be given separately.

functionalityVolume getUpdatedFunctionalityMap()
Return a volume giving an estimate of the tissue functionality at all points, between 0 and 1, 0 meaning no functionality and 1 meaning baseline functionality.
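As a toy illustration of the functionality-map convention (values bounded to [0, 1], 0 meaning no functionality), the update rule below zeroes functionality wherever an ablation temperature threshold is exceeded. Both the threshold value and the rule are made-up assumptions, not the WP5 model.

```python
# Illustrative functionality-map update. The 57 °C threshold and the
# all-or-nothing rule are assumptions, not the actual WP5 model.

def update_functionality_map(functionality, temperature, ablation_temp=57.0):
    updated = []
    for f, t in zip(functionality, temperature):
        if t >= ablation_temp:
            f = 0.0                       # ablated: no remaining function
        updated.append(min(max(f, 0.0), 1.0))  # clamp to [0, 1]
    return updated
```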
The following data types are used in the communication with the Functional Model Module:

functionalityVolume (out)
A volume giving an estimate of the tissue functionality at all points, between 0 and 1, 0 meaning no functionality and 1 meaning baseline functionality.

TempMap (in)
Temperature image.

perfusionMeasure (in)
Could be an inflow measure at the root of a vessel tree, or a map giving rough perfusion information at each position.

PerfusionVolume (out)
A volume giving the amount of perfusion at any point inside the volume, in ml/g/s.
References
1. [AnnexI] FUSIMO Grant Agreement, Annex I – Description of Work