Multiple Coordinated Views in VMTK


The usability tests we performed with previous versions of VMTK made us understand that precise identification of a subtle epileptogenic lesion is of extreme importance for predicting the response to surgical resective treatment of a focal cortical dysplasia (FCD). Nevertheless, about one-third of patients with pathologically proven FCD have apparently normal magnetic resonance (MR) scans. It is highly desirable to improve the sensitivity of MR by aggregating information provided by other modalities and by conducting a needle-in-a-haystack search in an interactive visualization environment where neurologists can examine diverse exams simultaneously. They wish to compare multimodal neuroimages in a coordinated and exploratory way.

This prototype is a response to this data analytics requirement. We propose an environment comprising several modality tabs. Not only the views within the same tab (intra-views) but also the views across tabs (inter-views) are coordinated, as shown in the screenshots above, which present fused views of MR and positron emission tomography (PET) scans. Observe in the image on the left that the focus highlighted in the classical 2D views (coronal - bottom right, sagittal - bottom left, and axial - top right) is also indicated by a cross-hair in the 3D view (top left). The image on the right illustrates how the multiplanar reformatting tool is integrated in this new version: whenever the position or the orientation of the clipping plane changes, the clipped slice is automatically shown in the 3D view.
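
To give a concrete idea of how such intra- and inter-view coordination can be wired up with Qt's signal/slot mechanism, here is a minimal sketch; the class and member names (CursorBroker, cursorMoved, SliceView::setCursor) are hypothetical illustrations, not the prototype's actual code:

    #include <QObject>
    #include <QVector3D>

    // Hypothetical broker that fans a cursor change out to every coordinated view.
    class CursorBroker : public QObject {
        Q_OBJECT
    signals:
        // Cursor position expressed in a common (patient-oriented) reference space.
        void cursorMoved(const QVector3D &patientPos);
    public slots:
        void moveCursor(const QVector3D &patientPos) { emit cursorMoved(patientPos); }
    };

    // Each 2D slice view and the 3D view subscribe to the same signal, e.g.
    //   QObject::connect(&broker, &CursorBroker::cursorMoved,
    //                    &axialView, &SliceView::setCursor);
    // so that clicking in one view repositions the cross-hair in all the others.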

Features such as patient-oriented slice views, fused views of multiple modalities, multiplanar reformatting, and focus configuration are present in most medical applications, but in an independent rather than an integrated manner. This may hinder a thorough comparative visual analysis of multimodal exams. The key to our solution is

  1. to distinguish the reference spaces along the data processing pipeline, from the raw data (in DICOM format) to the displayed images: the patient-oriented coordinate reference (DCR), the native coordinate reference (PhCR), the normalized native coordinate reference (NPhCR), and the texture coordinate reference (TCR).
  2. to move most of the interactive visualization algorithms to the GPU to reduce CPU-to-GPU data transfer latency. The raw data are loaded into GPU memory as textures (TCR) and the different views are derived directly from them on the GPU. During user interactions, only the updated control data Ω and the viewing data MVP are resent to the GPU, as the sketch after this list illustrates.
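
The following OpenGL sketch illustrates this data flow, under the assumption that the volume is kept resident as a 3D texture and that the ray-casting shader exposes uniforms named u_MVP and u_control; these names are illustrative and not taken from the prototype's source:

    #include <GL/glew.h>   // assumes glewInit() has already been called with a current context

    // One-time upload: the raw volume stays resident in GPU memory as a 3D texture (TCR).
    GLuint uploadVolume(const unsigned short *voxels, int w, int h, int d) {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_3D, tex);
        glTexImage3D(GL_TEXTURE_3D, 0, GL_R16, w, h, d, 0,
                     GL_RED, GL_UNSIGNED_SHORT, voxels);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        return tex;
    }

    // Per-frame update during interaction: only the small control and viewing data
    // (hypothetical uniforms u_control and u_MVP) are resent to the GPU;
    // the volume texture itself is never re-transferred.
    void updateFrame(GLuint program, const float *mvp, const float *control, int nControl) {
        glUseProgram(program);
        glUniformMatrix4fv(glGetUniformLocation(program, "u_MVP"), 1, GL_FALSE, mvp);
        glUniform1fv(glGetUniformLocation(program, "u_control"), nControl, control);
        // ... issue the ray-casting draw call here ...
    }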

Videos

This video was presented at the 67th SBPC. The activities we proposed are here (in Portuguese).

Documentation

The prototype is implemented in C++ with a volume ray-casting fragment shader written in GLSL. The Qt GUI library was used to implement the interface, and the open-source Grassroots DICOM (GDCM) library to read and parse DICOM medical files. We also used the Boost C++ libraries for multi-threading and the CImg Library for image processing.
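
As an illustration of the reading step, the sketch below loads the pixel data of a single DICOM file with GDCM's gdcm::ImageReader; it is a minimal example with error handling reduced to the essentials, not an excerpt of the prototype:

    #include <gdcmImageReader.h>
    #include <vector>
    #include <iostream>

    int main(int argc, char **argv) {
        if (argc < 2) return 1;

        gdcm::ImageReader reader;
        reader.SetFileName(argv[1]);               // path to a DICOM file
        if (!reader.Read()) {
            std::cerr << "Could not parse " << argv[1] << std::endl;
            return 1;
        }

        const gdcm::Image &image = reader.GetImage();
        const unsigned int *dims = image.GetDimensions();   // columns, rows, frames
        std::vector<char> buffer(image.GetBufferLength());
        image.GetBuffer(buffer.data());            // raw pixel data, ready for upload

        std::cout << dims[0] << " x " << dims[1] << " pixels read" << std::endl;
        return 0;
    }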

The cross-platform development environment Qt Creator was used to implement our code for Linux (Ubuntu 14.04), Mac OS X (≥ 10.9), and Windows 7. The documentation of the source code was generated with Doxygen. Its complete online version is accessible from here.

Publications

Demos

This software has been presented at the following events:

Download

Before downloading one of the following files and uncompressing it, make sure your system meets the requirements to run the executable: a programmable GPU with at least 2 GB of graphics memory and support for OpenGL/GLSL ≥ 3.3.

Color palettes: palettes.zip (1.4 KB)
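
If you are unsure whether your GPU driver meets the OpenGL/GLSL requirement, you can query it at run time once a context is current; the sketch below assumes an extension loader such as GLEW is available:

    #include <GL/glew.h>
    #include <cstdio>

    // Prints the OpenGL and GLSL versions reported by the driver.
    // Must be called after an OpenGL context has been created and made current.
    void printGLVersions() {
        std::printf("OpenGL: %s\n",
                    reinterpret_cast<const char *>(glGetString(GL_VERSION)));
        std::printf("GLSL:   %s\n",
                    reinterpret_cast<const char *>(glGetString(GL_SHADING_LANGUAGE_VERSION)));
    }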

We have successfully loaded DICOM files from different scanners at our university hospital. From some of these datasets we also successfully co-registered the available pair of anatomical and functional 3D images: CEREBRIX/Neuro Crane/t1_fl2d_tra and CEREBRIX/PET PETCT_CTplusFET_LM_Brain (Adult)/PET FET Cerebral.

If you run into any problems, please let us know by sending an e-mail to ting at dca dot fee dot unicamp dot br.

License

The prototype is released under the LGPL license:

Copyright (C) 2015 Wu Shin-Ting, Wallace Souze Loos, Raphael Voltoline Ramos and José Angel Iván Rubianes Silva.

This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version.

This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.

Developer Team

Acknowledgements

This project received financial support from Fapesp, CNPq, and CAPES.