PROJECT :           3D MEDICAL DATA VISUALIZATION TOOLKIT


       I. OVERVIEW

The 3D Medical Data Visualization Toolkit offers numerous visualization methods within a comprehensive visualization system, providing a flexible tool for surgeons and health professionals as well as for visualization developers and researchers.

Our system makes it possible to visualize 3D medical images from various imaging modalities, such as Computed Tomography, Magnetic Resonance Imaging, and others.

Based on direct volume rendering with textures and on ray casting, it offers high image resolution and provides interactive navigation within the volume through the movement of the coronal, sagittal, and axial planes, with direct applications to medical diagnosis.

This project was performed as part of the requirements of the course 3D Visualization – IA369, lectured by Wu Shin Ting at the School of Electrical and Computer Engineering, State University of Campinas.

 

      II. DIRECT VOLUME RENDERING

In a nutshell, Direct Volume Rendering, hereafter DVR, maps the 3D scalar field to physical quantities (color and opacity) that describe light interactions at the respective points in 3D space. Visualizations can be created without building intermediate geometric structures, such as the polygons comprising an isosurface, but simply by a “direct” mapping from volume data points to composited image elements.

 

            Formally, the input of DVR can be described as a 3D scalar field from which information is visually extracted:

                                    f : ℝ³ → ℝ

            i.e., a function from 3D space to a single scalar value.

 

          1.1. SCALAR DATA VOLUME.

            A scalar data volume (3D data set) can come from a variety of application areas. Currently, the images acquired by medical diagnostic processes are the main source of 3D data. This medical imaging is performed by CT (Computed Tomography), MRI (Magnetic Resonance Imaging), Nuclear Medicine, and Ultrasound.

            All these medical imaging modalities have in common that a discretized volume data set is reconstructed from the detected feedback (mostly radiation).

            In addition to these sources, simulations also generate data for volume rendering, e.g., computational fluid dynamics simulations and simulations of fire and explosions for special effects. Voxelization of geometric models also generates such data sets.

            The next figure shows an example of a volume data set:

 

Figure 1. (Left) Volume data set given a discrete uniform grid. (Right) Final volume rendition with its Transfer Function

 

In a regular volumetric grid, each volume element is called a voxel (Volume Element); it represents a single value that is obtained by sampling the immediate area surrounding it.
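
When the volume is resampled at an arbitrary position between grid points, the value is commonly obtained by trilinear interpolation of the eight surrounding voxels. A minimal NumPy sketch (the array layout and the boundary clamping are our assumptions, not the project's actual code):

```python
import numpy as np

def sample_volume(volume, x, y, z):
    """Trilinearly interpolate a scalar value at a continuous
    position (x, y, z) inside a regular voxel grid."""
    # Integer corner of the cell containing the sample point.
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    # Opposite corner, clamped to the grid bounds.
    x1 = min(x0 + 1, volume.shape[0] - 1)
    y1 = min(y0 + 1, volume.shape[1] - 1)
    z1 = min(z0 + 1, volume.shape[2] - 1)
    # Fractional offsets within the cell.
    fx, fy, fz = x - x0, y - y0, z - z0
    # Interpolate along x, then y, then z.
    c00 = volume[x0, y0, z0] * (1 - fx) + volume[x1, y0, z0] * fx
    c10 = volume[x0, y1, z0] * (1 - fx) + volume[x1, y1, z0] * fx
    c01 = volume[x0, y0, z1] * (1 - fx) + volume[x1, y0, z1] * fx
    c11 = volume[x0, y1, z1] * (1 - fx) + volume[x1, y1, z1] * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz
```

On the GPU, this interpolation is what the texture hardware performs automatically when a 3D texture is sampled with linear filtering.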

 

           1.2. COMPOSITION.

All DVR algorithms perform the same core composition scheme: either front-to-back composition or back-to-front composition. Basically, the difference is where the ray traversal originates.
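
The two schemes can be sketched for a single ray as follows; a minimal Python illustration, assuming premultiplied (associated) colors, in which both traversal orders yield the same composited result:

```python
def back_to_front(samples):
    """Back-to-front composition: start at the far end of the ray and
    blend each nearer sample over the accumulated color.
    `samples` is a front-to-back list of (color, alpha) pairs,
    with colors premultiplied by alpha."""
    color = 0.0
    for c, a in reversed(samples):
        color = c + (1.0 - a) * color
    return color

def front_to_back(samples):
    """Front-to-back composition: start at the camera and accumulate
    color and opacity along the ray."""
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * c
        alpha += (1.0 - alpha) * a
        if alpha >= 0.99:   # early ray termination is possible here
            break
    return color

# Three hypothetical samples along one ray, ordered front to back.
samples = [(0.2, 0.2), (0.3, 0.5), (0.4, 0.8)]
```

Front-to-back traversal additionally keeps a running opacity, which is what makes early ray termination possible.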

 

1.3. ALGORITHMS FOR DIRECT VOLUME RENDERING.

Volume rendering techniques can be classified as either object-order or image-order methods. Object-order methods scan the 3D volume in its object space and then project it onto the image plane. The other strategy uses the 2D image space (image plane) as the starting point for volume traversal.

In this project, we implemented the Texture Slicing and Ray Casting algorithms, as object-order and image-order techniques, respectively.

 

1.3.1.  Texture Slicing:

Until a few years ago, texture slicing was the dominant method for GPU-based (Graphics Processing Unit) volume rendering. In this algorithm, 2D slices located in 3D object space are used to sample the volume. The slices are projected onto the image plane and combined according to the composition scheme. Slices are ordered either in a front-to-back or back-to-front fashion, and the composition equations have to be chosen accordingly.

 

Figure 2. Object Order Approach. (Left) Sampling the data set. (Right) Textured slices obtained.

 

Figure 3. Object Order Approach. (Left) Proxy geometry – polygonal slices. (Right) Final volume rendering.

 

The main advantage of this method is that it is directly supported by graphics hardware, because it only needs texture support and blending (the composition scheme). One drawback is the restriction to uniform grids.

 

Texture slicing is built upon three basic components: proxy geometry (the slice polygons), texture sampling of the volume, and blending (the composition scheme).

 

 1.3.2. Ray Casting:

Ray casting is the most popular image-order method for volume rendering. In this algorithm, we composite along rays that are traversed from the camera. For each pixel in the image, a single ray is cast into the volume, and the volume data is resampled at discrete positions along the ray. The natural traversal order is front-to-back because the rays start at the camera, although the traversal order can also be reversed.

 

 

                            Figure 4. Image Order Approach: Ray Casting.

           

In the final rendering, only 0.2% to 4% of all fragments are visible, so ray casting is an appropriate approach to address this issue. Because the rays are handled independently from each other, ray casting allows several optimizations, such as early ray termination and empty-space skipping.

 

Moreover, ray casting is well suited to both uniform grids and tetrahedral grids.

 

Pseudocode of ray casting:

 

Determine volume entry position

Compute ray direction

While (ray position inside volume)

    Access data value at current position

    Apply transfer function and composite color and opacity

    Advance position along ray

End While
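
The pseudocode above can be sketched on the CPU in Python (an illustration only, not the GPU shader; the nearest-neighbor sampling and the `inside` test are simplifying assumptions):

```python
import numpy as np

def inside(volume, pos):
    """Check whether the rounded sample position lies in the volume."""
    return all(0 <= p <= s - 1 for p, s in zip(np.round(pos), volume.shape))

def cast_ray(volume, transfer_function, entry, direction, step=0.5):
    """Composite one ray through `volume` front-to-back.
    `entry` is the volume entry position, `direction` a unit vector,
    `transfer_function` maps a scalar value to ((R, G, B), alpha)."""
    color = np.zeros(3)
    alpha = 0.0
    pos = np.asarray(entry, dtype=float)
    d = np.asarray(direction, dtype=float)
    while inside(volume, pos) and alpha < 0.99:  # early ray termination
        # Access the data value at the current position.
        value = volume[tuple(np.round(pos).astype(int))]
        # Apply the transfer function.
        c, a = transfer_function(value)
        # Front-to-back composition of color and opacity.
        color += (1.0 - alpha) * a * np.asarray(c, dtype=float)
        alpha += (1.0 - alpha) * a
        # Advance the position along the ray.
        pos += step * d
    return color, alpha
```

In the single-pass GPU version, this loop runs in the fragment shader, one ray per pixel.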

 

           

            1.4. TRANSFER FUNCTION 

The main objective of volume rendering is to extract information from the scalar values in the 3D grid, identifying features of interest. In DVR, the central ingredient is the assignment of optical properties (color and opacity) to the values comprising the volume data set. This is the role of the transfer function.

 

As simple and direct as that mapping is, it is also extremely flexible, because of the immense variety of possible transfer functions. However, that flexibility becomes a weakness, because of the difficulty of appropriately setting the most important parameter for producing a meaningful and intelligible volume rendering.

 

In the simplest type of transfer function, the domain is the scalar data value and the range is color and opacity; for example, a 1D transfer function maps one RGBA value to every isovalue in [0, 255] of the scalar data set.

 

Transfer functions can also be generalized by increasing the dimension of the function’s domain. These can be termed multidimensional transfer functions. In scalar volume datasets, a useful second dimension is that of gradient magnitude.

 

Formally, a 1D transfer function can be described as a mapping

                                    T : ℝ → [0, 1]⁴,   s ↦ (R, G, B, A)

from a scalar value to a color and an opacity.

The next figure shows a simple 1D transfer function based on linear ramps between user-specified control points.

 

Figure 5. 1D Transfer Function.

From: http://graphicsrunner.blogspot.com/2009/01/volume-rendering-102-transfer-functions.html
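
A 1D transfer function of this kind (linear ramps between control points) can be sketched as a 256-entry RGBA lookup table; the control points below are hypothetical:

```python
import numpy as np

def make_transfer_function(control_points):
    """Build a 1D RGBA lookup table (256 entries) by linear
    interpolation between user-specified control points.
    `control_points` maps an isovalue in [0, 255] to an RGBA tuple."""
    isovalues = sorted(control_points)
    table = np.zeros((256, 4))
    for channel in range(4):
        values = [control_points[v][channel] for v in isovalues]
        # Linear ramps between consecutive control points.
        table[:, channel] = np.interp(np.arange(256), isovalues, values)
    return table

# Hypothetical mapping: low values transparent, mid values reddish and
# translucent, high values white and opaque.
tf = make_transfer_function({
    0:   (0.0, 0.0, 0.0, 0.0),
    100: (0.8, 0.2, 0.2, 0.3),
    255: (1.0, 1.0, 1.0, 1.0),
})
```

On the GPU, such a table is typically uploaded as a small 1D texture and sampled in the shader during composition.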

 

The process of finding an appropriate transfer function is often referred to as classification. The design of a transfer function is in general a manual, tedious, and time-consuming procedure that requires detailed knowledge of the structures represented by the data set.

 

III. IMPLEMENTATION DETAILS

Our 3D Medical Data Visualization Tool is based on the two algorithms most widely used in direct volume rendering. We implemented GPU-based 3D Texture Slicing and Single-Pass Ray Casting.

                    We recommend updating graphics drivers and extension libraries to their latest versions.

 

3.1. 3D Texture Slicing Approach.

3D textures have several advantages over 2D texture-based volume rendering, which is constrained by a fixed number of slices and their static alignment within the object's coordinate system.

Despite the use of 3D textures, we still need to decompose the volume object into planar polygons: 3D textures do not represent volumetric rendering primitives.

3D textures allow the slice polygons to be positioned arbitrarily in 3D space according to the viewing direction: these are the viewport-aligned slices displayed in the next figure.

 

    Figure 6. Viewport-aligned slices. Decomposition of the volume object into viewport-aligned polygon slices.

 

3.1.1. Geometry Set-Up

The calculation of intersections between the bounding box and the stack of viewport-aligned slices is computationally more involved. Moreover, these polygonal slices must be recomputed whenever the viewing direction changes.

One method for computing the plane-box intersection can be formulated as a sequence of three steps:

  1. Compute the intersection points between the slicing plane and the straight lines that represent the edges of the bounding box.

  2. Eliminate duplicate and invalid intersection points.

  3. Sort the remaining intersection points to form a closed polygon.

We perform the plane-box intersection calculation on the CPU, and the resulting polygon slices are uploaded to the GPU.
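
The three steps can be sketched in Python as follows (a CPU-side illustration; the tolerance value and the angular sort around the centroid are implementation choices, not necessarily the project's actual code):

```python
import numpy as np
from itertools import product

def plane_box_intersection(n, d, box_min, box_max):
    """Intersect the plane {x : n·x = d} with an axis-aligned box.
    Returns the intersection polygon (3 to 6 vertices, sorted into a
    closed polygon), or an empty list if the plane misses the box."""
    n = np.asarray(n, dtype=float)
    corners = np.array(list(product(*zip(box_min, box_max))), dtype=float)
    # Edges of the box: corner pairs differing in exactly one coordinate.
    edges = [(i, j) for i in range(8) for j in range(i + 1, 8)
             if np.sum(corners[i] != corners[j]) == 1]
    # Step 1: intersect the plane with every edge.
    points = []
    for i, j in edges:
        vi, vj = corners[i], corners[j]
        denom = n @ (vj - vi)
        if abs(denom) < 1e-12:      # edge parallel/coplanar: ignore
            continue
        lam = (d - n @ vi) / denom
        if 0.0 <= lam <= 1.0:       # valid intersection on the edge
            points.append(vi + lam * (vj - vi))
    # Step 2: eliminate duplicates (corner hits appear on several edges).
    unique = []
    for p in points:
        if not any(np.allclose(p, q) for q in unique):
            unique.append(p)
    if len(unique) < 3:
        return []
    # Step 3: sort the points by angle around their centroid, measured
    # in the slicing plane, to form a closed polygon.
    center = np.mean(unique, axis=0)
    u = unique[0] - center
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    v /= np.linalg.norm(v)
    unique.sort(key=lambda p: np.arctan2((p - center) @ v, (p - center) @ u))
    return unique
```

The sorted vertex list can then be rendered directly as a triangle fan for each slice.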

 

3.1.2. Compute the Plane-Box-Intersection

Computing the intersection between a cube and an arbitrarily oriented plane is a complicated task. To complicate matters further, the intersection points result in a polygon with three to six vertices; these cases are illustrated in the next figure.

 

    Figure 7. Polygons resulting from plane-box intersections. The polygon has between three and six vertices.

To facilitate the calculations, we use a plane in Hessian normal form:

    Np · x = d

where Np denotes the normal vector of the plane and d is the distance to the origin.

 

The problem is then reduced to computing the intersection of the plane with an edge (the ray formed by two vertices Vi and Vj of the volume bounding box).

 

    Figure 8. Ray-Plane Intersection.

 

To resolve this case, we apply the ray-plane intersection algorithm to all edges of the bounding box, based on the equation of the line:

    X = Vi + λ (Vj − Vi)

with 0 ≤ λ ≤ 1. Solved for λ:

    λ = (d − Np · Vi) / (Np · (Vj − Vi))

The denominator becomes zero only if the edge is parallel or coplanar to the plane; in this case, we ignore the intersection. We have a valid intersection only if 0 ≤ λ ≤ 1.

A detailed description of the ray-plane intersection algorithm can be found in the book 3D Computer Graphics: A Mathematical Introduction with OpenGL by Samuel R. Buss.
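
The edge-plane test described above can be written compactly; a Python sketch of the formula, with the coplanar and out-of-range cases handled as described (the epsilon tolerance is our choice):

```python
import numpy as np

def edge_plane_intersection(vi, vj, n, d, eps=1e-12):
    """Intersect the edge from Vi to Vj with the plane n·x = d.
    Returns the intersection point, or None for a coplanar/parallel
    edge or an intersection outside the segment (lambda not in [0, 1])."""
    vi, vj, n = (np.asarray(a, dtype=float) for a in (vi, vj, n))
    denom = n @ (vj - vi)
    if abs(denom) < eps:            # edge parallel or coplanar to the plane
        return None
    lam = (d - n @ vi) / denom      # solve n·(Vi + lam (Vj - Vi)) = d
    if not 0.0 <= lam <= 1.0:       # intersection lies outside the edge
        return None
    return vi + lam * (vj - vi)
```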

 

    3.1.3. Implementation details

 

    We tried two alternatives: first, manipulating the OpenGL texture matrix; second, applying transformations to move the camera around the object.