
Multimodality Quantitative Image Analysis

Author:  Baowei Fei, PhD

At QBIL, we are interested in synthesizing the information obtained from multiple imaging modalities and sources in order to study disease mechanisms and/or to aid in making clinical decisions. Our research goals are to

  1. provide efficient methods and procedures for mapping the properties of tissue in space and time,
  2. integrate multiple information streams acquired from different imaging technologies into a single coherent picture, and
  3. validate and interpret in vivo imaging data for biologic, physiologic, and pathologic interpretation.

The research will combine multimodality imaging and multidimensional data to exploit our current knowledge of the genetic and molecular bases of various diseases and therefore to have substantial positive implications for disease prevention, detection, diagnosis, and therapy.

Image Registration: In medical applications, multiple images are acquired from the same subject at different times or from different subjects. The critical step in utilizing these images is to align them so that the combined information can be visualized. Image registration is the process of resolving the mapping between images so that the features or structures in one image correspond to those in the other. The transformation between two image scenes can be either rigid or non-rigid. A rigid-body transformation has six degrees of freedom in three dimensions, i.e., three translations and three rotations. For most body organs, the motion is non-rigid and requires more degrees of freedom to describe accurately. Therefore, deformable registration becomes necessary for many imaging applications.
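The six-degree-of-freedom rigid-body case can be sketched as a 4x4 homogeneous transform. This is a minimal NumPy illustration, not the lab's implementation; the parameter names and the ZYX rotation order are assumptions for the example:

```python
import numpy as np

def rigid_transform(rx, ry, rz, tx, ty, tz):
    """Build a 4x4 homogeneous rigid-body transform from the six
    degrees of freedom: rotations (radians) about x, y, z and
    translations along x, y, z."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx          # combined rotation
    T[:3, 3] = [tx, ty, tz]           # translation
    return T

# A pure translation maps the origin to (tx, ty, tz).
p = rigid_transform(0, 0, 0, 5.0, -2.0, 1.0) @ np.array([0, 0, 0, 1.0])
```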

One example is the registration of pre- and post-contrast-enhanced breast MRI images. Deformable registration is required for this application because soft tissue, such as breast tissue, always undergoes non-rigid motion between images. Similar applications arise throughout medical imaging diagnosis, where images from different modalities, or from the same modality at different times, are acquired and require deformable registration because of non-rigid tissue deformation between acquisitions. Examples include heart imaging and chest PET-CT imaging, where non-rigid motion can be a major source of tissue motion. In image-guided radiation therapy, a non-rigid change in an organ's shape or position is unavoidable because of the type of treatment or patient respiration. Deformable image registration is critical for delivering an appropriate radiation dose while avoiding damage to adjacent healthy tissue. An example is image-guided radiation therapy of prostate tumors or tumors in other organs.

Thin-plate spline (TPS) based deformable image registration: We created and evaluated an almost fully automated, 3D non-rigid registration algorithm using mutual information and a thin-plate spline (TPS) transformation for MR images of the prostate and pelvis.

In the first step, an automatic rigid-body registration with special features was used to capture the global transformation. In the second step, local feature points were registered. An operator entered only five feature points (FPs) located at the prostate center, in the left and right hip joints, and in the left and right distal femurs. The program automatically determined and optimized other FPs on the external pelvic skin surface and along the femurs. More than 600 control points were used to establish a TPS transformation for deformation of the pelvic region and the prostate. Ten volume pairs were acquired from three volunteers in the diagnostic (supine) and treatment positions (supine with legs raised).
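The core of the second step, deforming the image according to matched control points, can be sketched with SciPy's thin-plate-spline radial basis interpolation. This is only an illustration of the TPS warp under assumed, randomly generated control points, not the published algorithm:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical control points: locations in the moving image (src) and
# their matched locations in the reference image (dst), standing in for
# the feature points on the skin surface, femurs, and prostate center.
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(30, 3))
dst = src + rng.normal(0, 2, size=src.shape)   # small deformation

# Fit one TPS interpolant for the 3D displacement field; the
# 'thin_plate_spline' kernel gives the classic r^2*log(r) radial basis.
tps = RBFInterpolator(src, dst - src, kernel='thin_plate_spline')

# Warp an arbitrary set of voxel coordinates.
pts = rng.uniform(0, 100, size=(5, 3))
warped = pts + tps(pts)
```

With zero smoothing (the default), the fitted transform passes exactly through the control points, which is the defining property of TPS warping.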

Various visualization techniques showed that warping rectified the significant pelvic misalignment left by the rigid-body method. Gray-value measures of registration quality, including mutual information, correlation coefficient, and intensity difference, all improved with warping. The distance between prostate 3D centroids was 0.7 ± 0.2 mm following warping compared to 4.9 ± 3.4 mm with rigid-body registration. The semiautomatic non-rigid registration works better than rigid-body registration when the patient position changes significantly between acquisitions; it could be a useful tool for many applications in prostate diagnosis and therapy.

B-spline based deformable image registration: We implemented mutual information B-spline deformable registration algorithms. Mutual information does not assume a linear intensity relationship between images and has been used for registration of images of either the same modality or different modalities. A motion constraint is optimized to achieve a smooth deformation rather than an unrealistic result. A gradient-based minimization method finds the B-spline control coefficients for the optimal transformation. A multiresolution strategy registers the images from downsampled low-resolution versions up to the original high resolution, and the number of control points increases hierarchically along the multiresolution framework. The deformation computed at low resolution serves as the initial transformation for the optimization at high resolution.
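The mutual information similarity measure used in these algorithms can be computed from a joint intensity histogram. A minimal sketch, with bin count and image sizes chosen arbitrarily for the example:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two intensity images.
    No linear intensity relationship is assumed, which is why MI suits
    both mono- and multi-modality registration."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint probability
    px = pxy.sum(axis=1, keepdims=True)        # marginal of a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of b
    nz = pxy > 0                               # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

img = np.random.default_rng(1).random((64, 64))
# An image shares maximal information with itself...
self_mi = mutual_information(img, img)
# ...and very little with unrelated noise.
noise_mi = mutual_information(img, np.random.default_rng(2).random((64, 64)))
```

In a registration loop, the optimizer adjusts the transform parameters to maximize this quantity.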

Automatic multimodality image registration software program: The 3D registration program can align image volumes from CT, PET, MRI, and/or other imaging modalities. It has the following features:

  1. Resizes the floating volume according to the reference volume's size and resolution, so that the two volumes have the same resolution in every direction;
  2. Performs automatic registration based on mutual information;
  3. Supports manual registration using 3D translation and rotation;
  4. Includes two deformable registration approaches;
  5. Displays the volumes in all directions as well as the fusion results; and
  6. Displays a location line in 3D space so that each point can easily be located. The user interface is friendly and direct.
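The resampling in the first feature can be sketched with `scipy.ndimage.zoom`. The spacing values and shapes below are made-up examples, and the crop/pad handling is a simplification of what a full implementation would need:

```python
import numpy as np
from scipy.ndimage import zoom

def resize_to_reference(floating, float_spacing, ref_shape, ref_spacing):
    """Resample a floating volume so its voxel grid matches the
    reference volume's shape and resolution."""
    # Zoom factor per axis = floating spacing / reference spacing.
    factors = [fs / rs for fs, rs in zip(float_spacing, ref_spacing)]
    resized = zoom(floating, factors, order=1)   # trilinear interpolation
    # Crop or zero-pad to the exact reference shape.
    out = np.zeros(ref_shape, dtype=resized.dtype)
    slices = tuple(slice(0, min(a, b)) for a, b in zip(ref_shape, resized.shape))
    out[slices] = resized[slices]
    return out

vol = np.ones((50, 50, 20))   # e.g. 1 x 1 x 2 mm voxels
out = resize_to_reference(vol, (1, 1, 2), (100, 100, 40), (0.5, 0.5, 1.0))
```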

Slice to volume image registration: Slice-to-volume registration aligns a two-dimensional image slice with a three-dimensional image volume. In this study, we registered live-time interventional magnetic resonance imaging (iMRI) slices with a previously obtained, high-resolution MRI volume, which in turn can be registered with a variety of functional images, e.g. PET and SPECT, for tumor targeting. We created and evaluated a slice-to-volume registration algorithm with special features for its potential use in iMRI-guided, radiofrequency (RF) thermal ablation. The algorithm features included a multi-resolution approach, two similarity measures, and automatic restarting to avoid local minima. Imaging experiments were performed on volunteers using a conventional diagnostic MR scanner and an interventional MRI system under realistic conditions. Both high-resolution MR volumes and actual iMRI image slices were acquired from the same volunteers. Actual and simulated iMRI images were used to test the dependence of slice-to-volume registration on image noise, coil inhomogeneity, and RF needle artifacts. To quantitatively assess registration, we calculated the mean voxel displacement over a volume of interest between slice-to-volume registration and volume-to-volume registration, which was previously shown to be quite accurate. More than 800 registration experiments were performed. For transverse image slices covering the prostate, the slice-to-volume registration algorithm was 100% successful with an error of < 2 mm, and the average error was only 0.4 ± 0.2 mm. Visualizations such as combined sector display and contour overlay showed excellent registration of the prostate and other organs throughout the pelvis. Error was greater when an image slice was obtained at other orientations and positions, mostly because of inconsistent image content such as that caused by variable rectal and bladder filling.
These preliminary experiments indicate that MR slice-to-volume registration is sufficiently accurate to be able to aid image-guided therapy.
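The mean-voxel-displacement accuracy measure described above compares where two transforms send the same points. A minimal sketch, with made-up transforms and a random volume of interest:

```python
import numpy as np

def mean_voxel_displacement(T1, T2, voi_coords):
    """Mean displacement (in mm, if coordinates are in mm) between two
    4x4 homogeneous transforms, evaluated over the points of a volume
    of interest."""
    homog = np.hstack([voi_coords, np.ones((len(voi_coords), 1))])
    p1 = (homog @ T1.T)[:, :3]
    p2 = (homog @ T2.T)[:, :3]
    return float(np.linalg.norm(p1 - p2, axis=1).mean())

# Two transforms differing only by a 0.4 mm shift along x give a mean
# displacement of 0.4 mm over any volume of interest.
A = np.eye(4)
B = np.eye(4); B[0, 3] = 0.4
pts = np.random.default_rng(3).uniform(0, 100, (1000, 3))
d = mean_voxel_displacement(A, B, pts)
```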

Automatic 2D to 3D image registration software program: This project implements a three-dimensional (3D) to two-dimensional (2D) registration for computed tomography (CT) and dual-energy digital radiography (DR) for the detection of coronary artery calcification. In order to use CT as the “gold standard” for evaluating the ability of DR images to detect and localize calcium, we developed an automatic intensity-based 3D-to-2D registration method for 3D CT volumes and 2D DR images. To generate digitally reconstructed radiographs (DRRs) from the CT volumes, we developed three projection methods: Gaussian-weighted projection, threshold-based projection, and average-based projection. Normalized cross correlation (NCC) and normalized mutual information (NMI) are used as the similarity measures. The software program has the following capabilities:

  1. Simulate DR images from the reference CT volume at any angle and generate the projection image using Gaussian-weighted projection, threshold-based projection, and average-based projection methods; and
  2. Perform registration between the original DR images and the DRR image reconstructed from the CT volume.
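Two of the projection methods and the NCC similarity measure can be sketched in a few lines. These simplified versions project along a single axis rather than at arbitrary angles, and the threshold value is an arbitrary example:

```python
import numpy as np

def drr_average(ct, axis=0):
    """DRR by average-intensity projection along one axis."""
    return ct.mean(axis=axis)

def drr_threshold(ct, thresh, axis=0):
    """Threshold-based projection: average only voxels above a
    threshold (e.g. to emphasize calcified, high-attenuation tissue)."""
    mask = ct > thresh
    num = (ct * mask).sum(axis=axis)
    cnt = mask.sum(axis=axis)
    return np.where(cnt > 0, num / np.maximum(cnt, 1), 0.0)

def ncc(a, b):
    """Normalized cross correlation between two 2D images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

ct = np.random.default_rng(4).random((32, 64, 64))
dr = drr_average(ct)
score = ncc(dr, dr)   # identical images correlate perfectly
```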

Image classification:  Image classification is a process that partitions an image into a set of distinct classes with uniform or homogeneous attributes such as texture or intensity. Classification can be challenging because images are affected by multiple factors such as noise, poor contrast, intensity inhomogeneity, and partial volume effects.

Classification methods can be categorized as supervised or unsupervised. Supervised classification methods depend on examples of the information classes in the images and derive a prior model or parameters from training sets. Unsupervised methods examine the images without specific training data and group the pixels according to classification criteria derived from the pixel values themselves.
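A minimal example of the unsupervised case is k-means clustering on pixel intensities. This is a generic textbook sketch, not a method from this lab, and the quantile-based initialization is an arbitrary choice for the example:

```python
import numpy as np

def kmeans_classify(image, k=3, iters=20):
    """Unsupervised classification: k-means on pixel intensities,
    grouping pixels into k homogeneous classes with no training data."""
    x = image.ravel().astype(float)
    # Initialize centers at evenly spaced intensity quantiles.
    centers = np.quantile(x, np.linspace(0, 1, k))
    for _ in range(iters):
        # Assign each pixel to its nearest center, then re-estimate.
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels.reshape(image.shape), np.sort(centers)

# Three well-separated intensity populations are recovered cleanly.
img = np.concatenate([np.full(100, 0.0), np.full(100, 0.5), np.full(100, 1.0)])
labels, centers = kmeans_classify(img, k=3)
```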

Image Segmentation:  The improved minimal path segmentation method is an automated, model-based segmentation method for medical images. The method’s energy function combines distance and gradient information to guide the marching curve and thus find the best path. Dynamic programming automatically optimizes and updates the end points during the search for curves. A deformable 3D model was generated as prior knowledge for selecting the initial end points and for evaluating the best path. The software program has the following features:

  1. define an ROI and then search for the contours of the object within the ROI;
  2. automatically segment multiple slices;
  3. calculate quantitative values such as the mean intensity, standard deviation, pixel number, area of each slice, and whole volume;
  4. incorporate a prior model as an initial contour into the segmentation; and
  5. generate a prior model from samples.
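The dynamic-programming idea behind minimal path search can be sketched on a 2D cost image: accumulate the cheapest cost to reach each pixel row by row, then backtrack. This is a generic seam-style illustration of the optimization, not the lab's energy function:

```python
import numpy as np

def minimal_path(cost):
    """Dynamic-programming minimal path through a 2D cost image,
    moving one row down per step to the same or an adjacent column."""
    h, w = cost.shape
    acc = cost.astype(float).copy()
    for i in range(1, h):
        left = np.r_[np.inf, acc[i - 1, :-1]]    # cost from upper-left
        right = np.r_[acc[i - 1, 1:], np.inf]    # cost from upper-right
        acc[i] += np.minimum(acc[i - 1], np.minimum(left, right))
    # Backtrack from the cheapest end point in the last row.
    path = [int(np.argmin(acc[-1]))]
    for i in range(h - 2, -1, -1):
        j = path[-1]
        cand = [c for c in (j - 1, j, j + 1) if 0 <= c < w]
        path.append(min(cand, key=lambda c: acc[i, c]))
    return path[::-1]   # column index chosen in each row

# A zero-cost column in an otherwise costly image attracts the path.
img = np.ones((5, 5)); img[:, 2] = 0.0
p = minimal_path(img)
```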
Quantitative Image Analysis Software Package

Attenuation Correction: Multimodality imaging, such as PET/MRI and PET/CT, is an emerging research field. In PET imaging, attenuation correction (AC) accounts for the radiation-attenuation properties of tissue. It has been established that PET images reconstructed without attenuation correction can contain severe artifacts. Attenuation correction is mandatory in order to obtain PET images that are sufficiently accurate for quantification. Usually, in stand-alone PET scanners, the attenuation correction is obtained from a transmission scan using one or several moving sources. In combined PET/CT, the attenuation correction is derived from the CT information. Unfortunately, CT does not provide the excellent soft-tissue contrast that MRI does, it adds a significant radiation dose, and it does not allow truly simultaneous imaging. In the case of a PET/MRI scanner, there is insufficient space for a rotating source and the attenuation map must be determined differently. Ideally, one would like to obtain the attenuation map directly from the anatomic MR images.
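The role of the attenuation map can be illustrated with the standard correction factor for one line of response (LOR): ACF = exp(∫ μ dl), approximated by a sum over the voxels the line crosses. A minimal sketch with a made-up straight-line LOR and a water-like attenuation coefficient:

```python
import numpy as np

def attenuation_correction_factor(mu_map, path_indices, step_mm):
    """Attenuation correction factor for one LOR: exp of the line
    integral of the linear attenuation coefficient mu (in 1/mm),
    approximated by summing the voxels along the line."""
    mus = np.array([mu_map[idx] for idx in path_indices])
    return float(np.exp(np.sum(mus) * step_mm))

# 100 mm of water-like tissue (mu ~ 0.0096/mm at 511 keV) attenuates
# coincidence counts by a factor of about exp(0.96) ~ 2.6, which the
# reconstruction must multiply back in.
mu = np.full((100, 1, 1), 0.0096)
lor = [(i, 0, 0) for i in range(100)]
acf = attenuation_correction_factor(mu, lor, step_mm=1.0)
```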

In our group, we developed an MRI-based attenuation correction method for PET imaging. As shown in the following figure, the graphical user interface (GUI) exposes all adjustable parameters of the algorithm and can show the processing result at every step, allowing users to easily test and use the algorithm. The program can read and display MRI images, PET images, and sinograms. It can display segmented and classified MRI images, registered and fused PET/MRI images, and attenuation maps. It includes filtered back-projection (FBP) and ordered-subset expectation maximization (OSEM) methods for reconstruction. This software program can be directly used in many combined PET/MRI applications.

Project Team

Baowei Fei, PhD, EngD