VMTK: Visual Manipulation Tool for 3D MR Neuro-Images

[Short Description] [Screenshots] [Videos] [Documentation] [Publications] [Download] [Developers] [Acknowledgements]

This project aims to implement an interactive visualization tool that allows a physician to diagnose cortical lesions and to plan their surgical removal. Cortical lesions are structural abnormalities seated in the outermost sheet of the brain, identifiable in high-quality image data by a skilled neurologist. The key feature of this tool is that it allows a neurologist to peel off the cortical area in layers parallel to the scalp in search of these lesions. This provides a more accurate view of the size and location of the lesions and improves decision making during the pre-surgical phase. The pictures illustrate a finding made with our prototype: an abnormal pattern of gyration in the right parietal lobe, shown both in 3D and in 2D.

Short Description

The workflow of our prototype comprises the following tasks: data acquisition; data visualization and exploration; 3D data erasing and cropping; and measuring the location and extent of the resection.

After medical images in DICOM format have been imported into the system, one of three interaction states may be selected: 3D erasing, curvilinear reformatting, or measuring. In addition, the neurosurgeon has manual control over the transfer function parameter values and may adjust them as needed. The inspection itself may be performed either in one of the three standard 2D views (sagittal, coronal, and axial) or in the 3D rendition. In a 2D view, the physician may evaluate a stack of images slice by slice in one of three mutually orthogonal directions, while in the 3D view the data volume may be translated, rotated, or zoomed. All four views are coordinated: when the clip or draw check box is activated in a 2D view, the corresponding cut planes are shown in the 3D view.
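
To give a concrete idea of how the four views stay coordinated, the sketch below keeps the shared interaction state in a single structure that every view consults when it redraws. It is only an illustration with hypothetical names; it is not taken from the VMTK sources.

    // Illustrative sketch of coordinated views (hypothetical names, not VMTK code):
    // the three 2D views and the 3D view share one state object, so changing a
    // slice index or toggling the clip/draw check boxes in a 2D view is reflected
    // as cut planes in the 3D rendition on the next redraw.
    struct SharedViewState {
        enum class Mode { Erasing, CurvilinearReformatting, Measuring };
        Mode mode = Mode::CurvilinearReformatting;
        int  sagittalSlice = 0, coronalSlice = 0, axialSlice = 0;  // current 2D slices
        bool clipEnabled = false;   // "clip" check box: cut the 3D volume at the slices
        bool drawEnabled = false;   // "draw" check box: draw the cut planes in 3D
    };

    // Each canvas keeps a pointer to the shared state and reads it when drawing.
    class View {
    public:
        explicit View(SharedViewState* state) : state_(state) {}
        virtual ~View() = default;
        virtual void redraw() = 0;  // 2D views draw a slice; the 3D view applies cut planes
    protected:
        SharedViewState* state_;
    };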

Some Screenshots

This image illustrates the user interface of our prototype. It consists of a top toolbar from which menus may be enabled, a canvas area, and panels. The central canvas shows the 3D rendition of the loaded 3D MRI image, while the three smaller canvases on the right side present the coronal, sagittal, and transverse planes of the brain. On the left side is the panel for setting the parameters of each operation mode. There are three operation modes: 3D erasing, curvilinear reformatting, and measuring. In this image, the current mode is curvilinear reformatting. The transfer function editor lies in the lower left corner. Note that the volume dimensions are shown at the top of each 2D view canvas.
This image shows the control panel for 3D erasing. Through this panel the user can set the scalar value to be treated as that of the scalp (the noise threshold); all values below this threshold are ignored until the scalp is reached. In this image, the chosen threshold is 827. To remove noise locally, a "brush tool" may be used. The brush size is adjustable through the slider on the bar below the top toolbar; in this picture it is set to 4 mm. Visual feedback is provided by a yellow shade at the tip of the cursor, and all visible noise in the painted region is removed. In the current version, if there is more than one noise sample along the picking ray, the user needs to pass the cursor over the region several times to remove them all (see the video).
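As a rough illustration of the behaviour just described, the following sketch marches the picking ray through the volume, clears the frontmost non-zero sample lying below the noise threshold, and stops as soon as a scalp-level value is reached, which is why several brush passes may be needed when more than one noise sample shares a ray. It is not the actual VMTK code; the names, data layout, and sampling step are assumptions.

    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Minimal sketch of one erasing pass along a picking ray (illustrative only).
    struct Volume {
        int nx, ny, nz;
        std::vector<uint16_t> voxels;                   // scalar field, x varies fastest
        uint16_t& at(int x, int y, int z) { return voxels[(z * ny + y) * nx + x]; }
    };

    struct Vec3 { float x, y, z; };

    // Clear the frontmost visible noise sample; stop once the scalp is reached.
    void eraseAlongRay(Volume& vol, Vec3 origin, Vec3 dir, uint16_t noiseThreshold)
    {
        const float step = 0.5f;                        // sampling step in voxel units
        const float maxT = std::sqrt(float(vol.nx * vol.nx + vol.ny * vol.ny + vol.nz * vol.nz));
        for (float t = 0.0f; t < maxT; t += step) {
            int x = int(origin.x + t * dir.x);
            int y = int(origin.y + t * dir.y);
            int z = int(origin.z + t * dir.z);
            if (x < 0 || y < 0 || z < 0 || x >= vol.nx || y >= vol.ny || z >= vol.nz)
                continue;                               // sample outside the volume
            uint16_t v = vol.at(x, y, z);
            if (v >= noiseThreshold)
                return;                                 // reached the scalp: stop erasing
            if (v > 0) {
                vol.at(x, y, z) = 0;                    // clear the visible noise sample
                return;                                 // only one sample per pass
            }
        }
    }
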
This image depicts the control panel for curvilinear reformatting. First, the user selects the region of interest by painting on the scalp. Second, the "Crop" button is pressed to perform the cropping at a fixed depth. Then, through the depth slider, the user may visualize the intermediate slices. In this image the cropping depth is 27 mm. The "Reset" button re-initializes the control volume and restores the original volume data (see the video).
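In spirit, the cropping behaves like the following sketch. It is only an illustration under assumed names and data; in particular, the per-voxel depth below the scalp and the painted mask are assumptions about how such a crop could be represented.

    #include <algorithm>
    #include <vector>

    // Illustrative sketch of curvilinear cropping (not the actual VMTK code).
    // Assumes a precomputed per-voxel depth below the scalp surface and a mask
    // marking the region the user painted on the scalp.
    struct CropState {
        int n;                                   // total number of voxels
        std::vector<float> depthBelowScalp;      // depth of each voxel below the scalp, in mm
        std::vector<bool>  painted;              // true where the user painted on the scalp
        std::vector<bool>  visible;              // visibility mask consumed by the renderer
    };

    // Re-evaluated whenever the depth slider moves (e.g. 27 mm in the screenshot):
    // every painted voxel shallower than the chosen depth is hidden, exposing a
    // cut surface parallel to the scalp.
    void applyCurvilinearCrop(CropState& s, float cropDepthMm)
    {
        for (int i = 0; i < s.n; ++i)
            s.visible[i] = !(s.painted[i] && s.depthBelowScalp[i] < cropDepthMm);
    }

    // "Reset" restores full visibility, i.e. the original volume data.
    void resetCrop(CropState& s)
    {
        std::fill(s.visible.begin(), s.visible.end(), true);
    }
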
In our prototype the user controls separately the transfer functions that map the image scalar values to grayscale values and to opacity. This picture shows the transfer function for the grayscale values, while the image presents the transfer function for the opacity. Both transfer functions are employed in the volume rendering. The user may design her/his own transfer function by inserting or deleting control points in the graphs. In this case, five and three control points have been inserted in the opacity and grayscale transfer functions, respectively (see the video).
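A common way to evaluate such control-point-based transfer functions is piecewise-linear interpolation; the sketch below shows that idea. It is an assumption about how the evaluation could be done, not code taken from the prototype.

    #include <cstddef>
    #include <vector>

    // Piecewise-linear evaluation of a transfer function defined by control points
    // (illustrative; separate point sets would be kept for grayscale and for opacity).
    struct ControlPoint { float scalar; float value; };  // (image intensity, mapped value)

    // Control points are assumed sorted by their scalar coordinate.
    float evaluate(const std::vector<ControlPoint>& pts, float scalar)
    {
        if (pts.empty()) return 0.0f;
        if (scalar <= pts.front().scalar) return pts.front().value;
        if (scalar >= pts.back().scalar)  return pts.back().value;
        for (std::size_t i = 1; i < pts.size(); ++i) {
            if (scalar <= pts[i].scalar) {
                float t = (scalar - pts[i - 1].scalar) / (pts[i].scalar - pts[i - 1].scalar);
                return pts[i - 1].value + t * (pts[i].value - pts[i - 1].value);
            }
        }
        return pts.back().value;
    }

In a GPU ray caster, both functions are typically sampled into 1D lookup textures that the fragment shader reads at every sample along a ray.
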
This image highlights the control panel for measuring. Three classes of elements are distinguished: points of reference (magenta), points of interest (yellow), and lines (green). Whenever a new element is defined, the user should name it in the TreeControl widget. The distance between each pair (point of reference, point of interest) and the length of each line are computed automatically. These measures can be read from the measurement table that pops up when the user clicks on the "View Measurements" button. In this picture, three points of reference and two points of interest have been defined to measure the distance of the suspicious region from the scalp and from the glabella. Moreover, two lines have been drawn on the brain to estimate its size and the perimeter of the suspicious region. Note that the measurement table contains a single entry for each line, whose value is the length of the corresponding line.
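The quantities listed in the measurement table reduce to Euclidean distances and polyline lengths, as in the following sketch (illustrative names and types; not the prototype's code):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Point3 { double x, y, z; };

    // Euclidean distance between two picked points.
    double distance(const Point3& a, const Point3& b)
    {
        return std::sqrt((a.x - b.x) * (a.x - b.x) +
                         (a.y - b.y) * (a.y - b.y) +
                         (a.z - b.z) * (a.z - b.z));
    }

    // One table row per (point of reference, point of interest) pair.
    std::vector<double> pairDistances(const std::vector<Point3>& references,
                                      const std::vector<Point3>& interests)
    {
        std::vector<double> rows;
        for (const Point3& r : references)
            for (const Point3& p : interests)
                rows.push_back(distance(r, p));
        return rows;
    }

    // A line is a polyline of picked points; its single table entry is the sum
    // of its segment lengths.
    double polylineLength(const std::vector<Point3>& line)
    {
        double length = 0.0;
        for (std::size_t i = 1; i < line.size(); ++i)
            length += distance(line[i - 1], line[i]);
        return length;
    }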

Some Videos

The Microsoft MPEG-4.4.1 codec (Windows|Linux) should be installed to see the videos.
This video shows how the erasing is performed.
AVI (14.8 MB)
This video shows how the curvilinear cropping is performed.
AVI (12.1 MB)
This video shows how the measuring is performed.
AVI (11.9 MB)
This video shows how the transfer function editor works.
AVI (10.5 MB)
This video shows how the system stores and restores session data in a VMTK file.
AVI (9.4 MB)
This video shows how the system works with translucent volume data (lower opacity).
AVI (12.5 MB)

Documentation

The prototype is implemented in C++ with a volume ray-casting fragment shader written in GLSL. The wxWidgets GUI library was used to implement the interface, and the open-source Grassroots DICOM (GDCM) library to read and parse DICOM medical files. We also used wxFormBuilder to design our GUI for wxWidgets. The documentation of the source code of this tool was generated with doxygen. Its complete online version is accessible from here.
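
For readers unfamiliar with volume ray casting, the following CPU-side sketch mirrors what a ray-casting fragment shader does per pixel: samples classified by the transfer functions are composited front to back along each viewing ray. This is a generic illustration of the technique in C++, not the GLSL shader shipped with VMTK.

    #include <vector>

    // Generic front-to-back compositing along one viewing ray, the core of a
    // volume ray caster. In the prototype this work is done per pixel inside a
    // GLSL fragment shader; the C++ here only illustrates the technique.
    struct RGBA { float r, g, b, a; };

    // `samples` are the ray samples already classified by the grayscale and
    // opacity transfer functions, ordered front to back.
    RGBA compositeFrontToBack(const std::vector<RGBA>& samples)
    {
        RGBA out{0.0f, 0.0f, 0.0f, 0.0f};
        for (const RGBA& s : samples) {
            float w = (1.0f - out.a) * s.a;   // remaining transparency times sample opacity
            out.r += w * s.r;
            out.g += w * s.g;
            out.b += w * s.b;
            out.a += w;
            if (out.a > 0.99f)                // early ray termination
                break;
        }
        return out;
    }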

Publications

Download

To run the executable, make sure your system meets the requirements (a programmable GPU with at least 1 GB of graphics memory and support for OpenGL/GLSL above 3.0), then download one of the following files and uncompress it into a directory:

Data sets: patient1.zip, patient2.zip, patient3.zip (Philips-Achieva 3T), patient_elscint.zip (GE-Elscint)

If you have any suggestions, please let us know by sending an e-mail to ting at dca dot fee dot unicamp dot br.

Developers

Acknowledgements

Without the valuable feedback from our colleagues at the Hospital de Clínicas of the University of Campinas, this prototype would not have been accomplished. In particular, we would like to mention Dr. Clarissa L. Yasuda, Dr. Andrei Joaquim, Dr. Enrico Ghizoni, and Prof. Dr. Fernando Cendes. The researcher Ting also acknowledges Prof. Dr. Thomas Ertl (University of Stuttgart), Prof. Dr. James C. Gee (University of Pennsylvania), and Prof. Dr. Carlos A. L. D'Ancona (University of Campinas) for calling her attention to the thought-provoking interaction problems with volumetric data. This project received financial support from Fapesp and CAPES.