Linear 3-D Registration Tools

These registration programs estimate transformations that map points from reference image coordinates to floating (target) image coordinates. These transformations can then be applied, by the programs mmtrans and warpimF, to provide the reverse mapping that brings floating image values into the reference coordinate system.
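In this pull-style resampling, each reference voxel is mapped through the estimated transform into floating coordinates and a value is read back from there. A minimal Python/NumPy sketch of the idea (a hypothetical helper, not the actual implementation; nearest-neighbour lookup is used for brevity):

```python
import numpy as np

def pull_resample(float_img, ref_shape, ref_to_float, pad_val=0.0):
    """Resample float_img onto the reference grid.

    ref_to_float: 4x4 affine mapping reference voxel coordinates to
    floating voxel coordinates (the direction the registration
    estimates). Nearest-neighbour lookup keeps the sketch short.
    """
    out = np.full(ref_shape, pad_val, dtype=float_img.dtype)
    for idx in np.ndindex(*ref_shape):
        p = ref_to_float @ np.array([*idx, 1.0])   # homogeneous coords
        f = np.round(p[:3]).astype(int)            # nearest floating voxel
        if all(0 <= f[d] < float_img.shape[d] for d in range(3)):
            out[idx] = float_img[tuple(f)]
    return out
```

Reference voxels that map outside the floating image simply keep the pad value.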

Multi-Modality Volume REGistration: mmvreg

Affine multi-modality registration tool. Calculates a rigid, rigid + scaling, or rigid + scaling + skew transformation between two image volumes.

Example command line options:

mmvreg  refImage.gipl floatImage.gipl -dofout ref2float.dof -nosave

Here each of the .gipl files can also be an Analyze or NIfTI format image, stored in either 16 or 32 bit integer or 32 bit floating point data format.

The "-dofout ref2float.dof" saves the affine transformation parameters (default 6) in a dof text file. The "-nosave" option prevents application of the transformation to the image files (so just the parameters are saved).


Alternatively you can use

mmvreg  refImage.gipl floatImage.gipl float2refOut.gipl -dofout ref2float.dof

which transforms the floating image into reference coordinates (using, by default, linear interpolation) and saves it in the file "float2refOut.gipl".


Other mmvreg options include:


-refsamp <num>

-transamp <num>


where the numbers specify the finest cubic voxel dimensions to use when estimating the transformation.
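For illustration, the working grid implied by such a cubic voxel size can be computed from the image's physical extent. A small sketch (hypothetical helper name; assumes NumPy):

```python
import numpy as np

def cubic_grid_shape(shape, voxel_size, target):
    """Grid dimensions after resampling to cubic voxels of edge
    length `target` (same units as voxel_size). This mirrors what
    a sampling-size option implies for the working image size; it
    is a sketch, not the tool's internal computation."""
    extent = np.array(shape) * np.array(voxel_size)  # physical extent per axis
    return tuple(int(np.ceil(e / target)) for e in extent)
```

A coarser (larger) sampling value gives a smaller working grid and hence a faster, but less precise, transformation estimate.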




estimate a nine parameter (rigid + X,Y,Z scaling) transformation between images.


-dofin filename.dof


Specify a starting transformation from which to initiate the registration (useful, for example, if you know one image is transaxial while the other is coronal).
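A transaxial-to-coronal starting guess is essentially a 90 degree rotation about one axis. A sketch of such a rotation as a 4x4 homogeneous matrix (illustrative only; the .dof text format itself is not reproduced here):

```python
import numpy as np

def rotation_x(deg):
    """4x4 homogeneous rotation about the x-axis. A 90 degree
    rotation of this kind roughly reorients between transaxial and
    coronal slice stacks (an illustration, not the .dof format)."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[1, 0,  0, 0],
                     [0, c, -s, 0],
                     [0, s,  c, 0],
                     [0, 0,  0, 1.0]])
```

Starting from such a guess puts the optimiser close enough to the true alignment that it does not have to discover a large rotation on its own.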


Note: to use the Midas project transformation format use:

-midasTPin filename.txt


-midasTPout filename.txt

in place of -dofin and -dofout respectively.

Padding Values: Ignoring voxel regions in the images.

These can be specified for the reference and transform (floating) image using:

-padvr <number>

-padvt <number>


By default mmvreg attempts to estimate padding regions and values from the raw image data. These options allow manual specification of the padding values (values indicating voxels to be ignored in the registration process). It is important not to choose a padding value that also occurs in the real image data, otherwise those voxels will be wrongly ignored.


It is not advised to have significant numbers of padding value voxels in the transform/floating image!

Using padding and masking significantly reduces the capture range of the registration algorithm since it reduces the effective field of view of the image.
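Conceptually, padded voxels are simply dropped from the similarity computation. A sketch using a plain sum-of-squared-differences cost (a stand-in for illustration; mmvreg's actual similarity measure may differ):

```python
import numpy as np

def ssd_ignoring_pad(ref, flt, pad_ref=None, pad_flt=None):
    """Sum of squared differences over voxels that are valid in both
    images. Voxels equal to a pad value contribute nothing, which is
    why pad values must not coincide with real image intensities."""
    valid = np.ones(ref.shape, dtype=bool)
    if pad_ref is not None:
        valid &= ref != pad_ref
    if pad_flt is not None:
        valid &= flt != pad_flt
    return float(np.sum((ref[valid] - flt[valid]) ** 2))
```

Excluding voxels shrinks the region driving the cost, which is the mechanism behind the reduced capture range noted above.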


An additional mask can be applied to the Reference image using the option:

-RefMask maskImageName.gipl


This will be applied to the reference image: anything outside the mask (voxels where the mask image value is less than or equal to zero) will be excluded from the registration.

No Padding:

If the images are synthetic (have uniform regions of intensity) or you know there are no pad values in the image, you can enforce no padding in either image by using:



To avoid estimating padding regions in the transform/floating image use:


Image Scales for Registration:

Specified by:

-l <number>


This sets the starting level to use when doing the registration. By default mmvreg creates an image 'pyramid', with level 0 having the smallest voxels. It starts at the top of the pyramid (largest voxels) and re-estimates the image transformation at each level to refine the mapping. This option allows the starting level to be forced lower or higher (for example, "-l 0" can be used to carry out only small refinements of the alignment by ignoring the coarse images in the pyramid). Normally the highest level chosen is limited by the number of large voxels in the image at the coarser resolutions.
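The pyramid idea can be sketched as repeated 2x2x2 block averaging, with level 0 the original image (a simplified scheme; mmvreg's exact subsampling may differ):

```python
import numpy as np

def pyramid(img, levels):
    """Build a coarse-to-fine image pyramid: level 0 is the original
    image (smallest voxels); each higher level halves each grid axis
    by 2x2x2 block averaging. Registration would start at the last
    (coarsest) level and refine down towards level 0."""
    levs = [img.astype(float)]
    for _ in range(1, levels):
        p = levs[-1]
        sx, sy, sz = (s // 2 * 2 for s in p.shape)  # trim odd edges
        v = p[:sx, :sy, :sz].reshape(sx // 2, 2, sy // 2, 2, sz // 2, 2)
        levs.append(v.mean(axis=(1, 3, 5)))
    return levs
```

Coarse levels smooth away fine detail, so the optimiser first locks on to gross alignment before the detail at finer levels is brought in.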

Multi-Modal volume TRANSformation: mmtrans

Tool to apply affine transformations to medical image volumes (3D and 4D) using user selectable interpolation. Can be supplied with multiple affine transformations which are concatenated together.

Basic command line example:

mmtrans refImage.gipl floatImage.gipl float2refOut.gipl ref2float.dof

This transforms "floatImage.gipl" to "refImage.gipl" coordinates and sampling resolution, using the default linear interpolation and the transformation parameters stored in ref2float.dof.

Warning: this resampling does not correct for undersampling when the refImage voxel size is much larger than the floatImage spatial resolution (which may introduce aliasing effects), but it is a reasonable option for most data where the resolutions are not very different.

Other options include:

-InterpType 0|1|2|3


0 = nearest neighbour interpolation

1 = linear interpolation

2 = cubic interpolation

3 = cubic B-Spline Interpolation
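The difference between the first two schemes can be shown in one dimension: nearest neighbour snaps to the closest sample, while linear interpolation blends the two neighbours (a 1-D sketch only; mmtrans interpolates in 3D):

```python
import numpy as np

def interp_nearest(vals, x):
    """InterpType 0 in 1-D: value of the closest sample."""
    return vals[int(round(x))]

def interp_linear(vals, x):
    """InterpType 1 in 1-D: weighted average of the two neighbours."""
    i = int(np.floor(x))
    t = x - i
    return (1 - t) * vals[i] + t * vals[i + 1]
```

Nearest neighbour preserves the original intensity values exactly (useful for label images), while linear and the cubic schemes produce smoother results at the cost of introducing new intensity values.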



Invert the linear transformation before applying it.


-PadVal <number>

Where <number> specifies the value to store in voxels for which there is no corresponding voxel in the floating image (i.e. the points where the reference image falls outside the floating image).

-dofin2 dofilename.dof


Specify an additional linear transformation which is composed with the first transformation to create a mapping from the reference image to the floating image.
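Composing two affine transformations amounts to a matrix product of their 4x4 homogeneous matrices, and inverting the linear transformation is the matrix inverse. A sketch with two hypothetical translation stages (assumes NumPy; not mmtrans's internal code):

```python
import numpy as np

# Two hypothetical stages: ref -> intermediate, then intermediate -> float.
A = np.eye(4); A[:3, 3] = [5.0, 0.0, 0.0]   # translate +5 in x
B = np.eye(4); B[:3, 3] = [0.0, 2.0, 0.0]   # translate +2 in y

# Composition of the two transformations: apply A first, then B.
ref_to_float = B @ A

# Inversion gives the reverse mapping, float -> ref.
float_to_ref = np.linalg.inv(ref_to_float)
```

Note that matrix order matters: B @ A applies A first, so supplying the transformations in the wrong order composes a different mapping.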