Registration and Fusion with Mutual Information for
Information Preserved Multimodal Visualization
Profa. Wu, Shin-Ting
DCA - FEEC - UNICAMP
This work concerns Valente's Master's thesis in Electrical Engineering.
It is part of the Manipulation Toolkit (MTK) project.
During diagnosis or surgical planning, visualizing different imaging modalities in a combined form can aid assessment. To accomplish this, the images must first be spatially aligned (registered) and their intensities then combined (fused), preferably in a way that keeps each modality identifiable.
We propose using information-theoretic techniques from the literature for both steps, so that registration and fusion can be integrated into a common framework.
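As a rough illustration of the registration step, the sketch below estimates the mutual information between two images from their joint intensity histogram and uses it as a similarity measure in a brute-force 1-D translation search. This is a minimal didactic example, not the project's implementation: the function names, bin count, and search range are illustrative choices.

```python
import numpy as np


def mutual_information(a, b, bins=32):
    """Mutual information (in nats) between two equally sized images,
    estimated from their joint intensity histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()          # joint probability p(x, y)
    px = pxy.sum(axis=1)             # marginal p(x)
    py = pxy.sum(axis=0)             # marginal p(y)
    nz = pxy > 0                     # avoid log(0) on empty histogram cells
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))


def best_shift(fixed, moving, max_shift=5):
    """Toy rigid registration: find the horizontal integer shift of `moving`
    that maximizes mutual information with `fixed` (exhaustive search)."""
    shifts = range(-max_shift, max_shift + 1)
    return max(shifts,
               key=lambda s: mutual_information(fixed, np.roll(moving, s, axis=1)))
```

When the images are perfectly aligned, the joint histogram is maximally concentrated and the mutual information reaches its peak, which is why the exhaustive search above recovers the misalignment; real registration methods optimize the same criterion over continuous rigid or deformable transforms instead of integer shifts.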
Contents
- Coming soon.