Introduction to PET Physics

6. Corrections for quantitative PET in 2D and 3D mode

6.1 Introduction
6.2 Attenuation correction
6.3 Correction for random coincidences
6.4 Scatter correction
6.5 Detector normalisation
6.6 Dead-time correction

6.1 Introduction

PET offers the possibility of quantitative measurements of tracer concentration in vivo. However, there are several issues which must be addressed in order to realise this potential. These issues are discussed in this section, and some of the complicating factors associated with operation in 3D mode are introduced.
 




6.2 Attenuation correction
 

In 2D PET, attenuation correction factors are usually measured by illuminating the FOV with circular or rotating rod sources while the subject is in the field of view. Sources containing quite large amounts of activity can be used to speed up the process, and scatter can be minimised by a technique called "rod windowing", whereby only LORs passing through the rod source are used for the transmission measurement (Thompson et al 1986). In certain cases it may be possible to dispense with the measurement by using a calculated attenuation correction (Siegel and Dahlbom 1992), or to improve it by reconstructing the attenuation data and segmenting the image into regions with similar linear attenuation coefficients. The segmented image may then be reprojected to obtain the attenuation correction factors for each LOR (e.g. Xu et al 1996).
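
As a concrete illustration, the following minimal Python sketch shows both the measured approach (attenuation correction factors as the ratio of blank-scan to transmission-scan counts) and the segmentation step. The function names, thresholds and attenuation coefficients are illustrative assumptions rather than values taken from the cited implementations.

    import numpy as np

    def attenuation_correction_factors(blank, transmission, floor=1e-6):
        """ACF for each LOR: ratio of unattenuated (blank-scan) counts to
        attenuated (transmission-scan) counts. Both sinograms must have
        the same shape and be acquired with the same rod sources."""
        return blank / np.maximum(transmission, floor)

    def segment_mu_map(mu_image, tissue_mu=0.096, bone_mu=0.17, air_thresh=0.02):
        """Crude segmentation of a reconstructed attenuation image (linear
        attenuation coefficients in cm^-1 at 511 keV) into air, soft tissue
        and bone, to suppress noise before reprojection. The thresholds and
        coefficients here are illustrative only."""
        seg = np.zeros_like(mu_image)
        seg[mu_image >= air_thresh] = tissue_mu
        seg[mu_image > (tissue_mu + bone_mu) / 2.0] = bone_mu
        return seg

Reprojecting the segmented image with a forward-projection routine appropriate to the scanner geometry then yields low-noise attenuation correction factors for each LOR.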

With the septa retracted, the problem becomes more complex. If highly active transmission sources are used without septa, the detectors near the source will experience unacceptably large dead-time. The amount of scatter also rises significantly, affecting the quantitative accuracy of the attenuation factors. For cameras with septa, the septa may be extended into the FOV before the attenuation measurements are made; for septa-less cameras this is not an option. Work is still proceeding on the issue of attenuation measurement in 3D mode. Promising avenues include "singles attenuation correction", in which a collimated source of photons with energy similar to that of the annihilation photons is used in a manner analogous to the acquisition of an X-ray CT scan (Karp et al 1995). Segmentation may be used to reduce the errors due to scatter and to the variation of linear attenuation with photon energy.
 




6.3 Correction for random coincidences

To obtain quantitative data in PET it is necessary to estimate and subtract the random coincidences from the measured data in each LOR to yield the sum of the true and scattered coincidences. As shown in section 2.5, the rate of random coincidences on a particular LOR is given by

        $R_{ij} = 2\tau\, r_i r_j$                 (14)

where $R_{ij}$ is the random coincidence rate on the LOR defined by channels i and j, $r_i$ and $r_j$ are the singles rates on channels i and j, and $\tau$ is the coincidence resolving time. Therefore if $r_i$ and $r_j$ can be measured and $\tau$ is known, $R_{ij}$ can be calculated for each line of response (e.g. Cooke et al 1984). This method has the advantage that in a given acquisition the singles rates are generally much higher than the coincidence rates, so the statistical quality of the estimate of $R_{ij}$ tends to be good.

A more commonly implemented method for estimating the randoms rate on a particular LOR is the delayed coincidence channel method. Here timing signals from one detector are delayed by a time significantly greater than the coincidence resolving time of the circuitry. There will therefore be no true coincidences in the delayed coincidence channel (although it is possible for an event from one true coincidence to be split from its partner and paired with an event from another), and the number of coincidences found is a good estimate of the number of random coincidences in the prompt signal. The estimate from the delayed channel may be subtracted from the prompt signal on-line, or stored as a separate sinogram for later processing. The advantage of this method is that the delayed channel has dead-time properties identical to those of the prompt channel. The disadvantage is that the statistical quality of the randoms estimate is poorer, since $R_{ij}$ is a much smaller quantity than $r_i$ or $r_j$. A method for improving the noise characteristics of random coincidence estimates obtained in this way was described by Casey and Hoffman (1986), and characterised for 3D PET by Badawi et al (1999b).
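
A minimal Python sketch of the two estimation routes is shown below. The function names and array layout are assumptions made for the example; in particular, the outer-product step only illustrates the idea behind the Casey and Hoffman variance reduction, whose published form operates on fans of LORs within the sinogram geometry.

    import numpy as np

    def randoms_from_singles(r_i, r_j, tau):
        """Singles-based estimate (equation 14): R_ij = 2*tau*r_i*r_j, with
        tau the coincidence resolving time in seconds and r_i, r_j the
        singles rates (counts/s) on the two channels defining the LOR."""
        return 2.0 * tau * r_i * r_j

    # e.g. a 12 ns resolving time and 20 kcps singles on each channel
    # gives 9.6 random coincidences per second on that LOR
    print(randoms_from_singles(20e3, 20e3, 12e-9))

    def variance_reduced_randoms(delayed):
        """Noise reduction for a delayed-channel estimate, exploiting the
        factorisability of equation 14: the noisy (n_det x n_det) matrix of
        delayed counts is replaced by the outer product of per-detector fan
        sums, scaled so that the total number of counts is preserved."""
        fan = delayed.sum(axis=1)     # delayed counts seen by each detector
        return np.outer(fan, fan) / delayed.sum()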
 




6.4 Scatter correction

As stated in section 3.3, the sensitivity to scattered coincidences is greater in 3D mode than in 2D mode. In 2D mode, many workers ignore scatter altogether. However, in 3D mode the amount of scatter in the signal can become extremely large (Cherry et al 1991, Badawi et al 1996), and accurate scatter correction methods are required. Many schemes have been proposed for scatter correction in 3D mode. These include convolution-subtraction techniques (e.g. Bailey and Meikle 1994, Bentourkia et al 1995), Monte-Carlo modelling techniques (e.g. Levin et al 1995), direct measurement techniques (Cherry et al 1993) and multiple energy window methods (e.g. Shao et al 1994, Grootoonk et al 1996). The methods in widest use to date are the "Gaussian fit" technique (e.g. Stearns 1995, Cherry and Huang 1995) and model-based scatter correction algorithms (Ollinger 1996, Watson et al 1996).

The Gaussian fit method consists of fitting a Gaussian profile to the scatter tails found at the edge of each projection. This works well in brain scanning, where the activity and the scattering medium are fairly uniformly distributed and concentrated in the centre of the field of view, resulting in a simple, slowly varying scatter distribution. It fails in the body, where the scatter tails available for fitting are much shorter (because the body occupies a large portion of the field of view) and the scatter distribution contains more structure.
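
A minimal sketch of the tail-fitting step for a single one-dimensional projection is given below; the object mask, starting values and use of scipy are assumptions made for the example rather than a description of any published implementation.

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, amp, mu, sigma):
        return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    def fit_scatter_profile(projection, object_mask):
        """Fit a Gaussian to the bins *outside* the object (the scatter
        tails) and return the fitted scatter estimate across the whole
        projection. `object_mask` is True where the object lies, as found
        for example by thresholding the attenuation data."""
        x = np.arange(projection.size)
        tails = ~object_mask
        p0 = [projection[tails].max(), projection.size / 2.0, projection.size / 4.0]
        popt, _ = curve_fit(gaussian, x[tails], projection[tails], p0=p0)
        return gaussian(x, *popt)

The fitted profile is then subtracted from the measured projection. The method stands or falls on how much genuine tail is available to fit, which is exactly why it degrades in the body.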

The model-based scatter correction algorithms use the attenuation map obtained from a transmission scan, together with the emission data and a model of the scanner geometry and detector systems, to calculate the fraction of scattered photons falling on each detector using the Klein-Nishina formula, which gives the differential scattering cross-section $d\sigma/d\Omega$ as a function of scattering angle $\theta$:

        $\dfrac{d\sigma}{d\Omega} = \dfrac{Z r_0^2}{2}\, P^2 \left( P + \dfrac{1}{P} - \sin^2\theta \right), \qquad P = \dfrac{1}{1 + (E/m_0 c^2)(1 - \cos\theta)}$                 (15)

where $E$ is the energy of the incident photon, $m_0 c^2$ is the rest energy of the electron, $r_0$ is the classical electron radius and $Z$ is the atomic number of the scattering atom. Since the original emission data contain scatter, the correction must be applied iteratively. In the case where all the activity is contained within the field of view, these methods are highly accurate. However, where there is activity outside the field of view they start to fail. In whole-body scanning it may sometimes be possible to obtain emission and attenuation data for most of the regions contributing to scatter, thus improving the accuracy of the scatter estimate, but in general this is not a practical option.
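
Equation (15) is straightforward to transcribe into code. The sketch below is illustrative; it returns the per-atom cross-section in m²/sr (per-electron when Z = 1).

    import numpy as np

    R0 = 2.8179403262e-15    # classical electron radius (m)
    MEC2 = 511.0             # electron rest energy m0*c^2 (keV)

    def klein_nishina(E_keV, theta, Z=1):
        """Differential Compton scattering cross-section d(sigma)/d(Omega)
        (m^2/sr) at incident photon energy E_keV and scattering angle theta
        in radians, following equation (15)."""
        P = 1.0 / (1.0 + (E_keV / MEC2) * (1.0 - np.cos(theta)))  # E'/E
        return (Z * R0**2 / 2.0) * P**2 * (P + 1.0 / P - np.sin(theta)**2)

    # at 511 keV, forward scatter dominates: compare 10 and 90 degrees
    print(klein_nishina(511.0, np.radians(10)), klein_nishina(511.0, np.radians(90)))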

Because of the limitations described, scatter correction in 3D PET remains an area of active research.




6.5 Detector normalisation

Fourier-based reconstruction techniques assume that all LORs have the same sensitivity. Unfortunately, this is not the case for experimentally acquired data. For example, the sensitivity of a particular LOR is strongly affected by the angles it makes with the detector faces at each end, so the sensitivity of the LOR relative to the mean depends both on the geometry of the camera and on the position of the LOR. Apart from such geometric effects, the block detectors themselves vary in efficiency, since the PMT gains are not all exactly the same (and may change with time) and the scintillation crystals are not all identical. The process of correcting for these effects is referred to as normalisation, and the individual correction factors for each LOR are referred to as normalisation coefficients (NCs).

The most straightforward way of obtaining a full set of NCs is to perform a scan in which every possible LOR is illuminated by the same coincidence source. The NCs are then proportional to the inverse of the counts recorded in each LOR. This approach, known as direct normalisation, unfortunately has a number of disadvantages. Scattered coincidences require a different normalisation to trues (Ollinger 1995), and direct normalisation does not yield these different factors. In 3D mode, small amounts of activity must be used to reduce dead-time effects (Liow and Strother 1995), which means that the time required to obtain sufficient counts for reasonable statistical accuracy in each LOR is quite large (tens of hours). Since NCs can change with time and should be measured as part of routine quality control, this poses a significant practical problem.
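
For illustration, direct normalisation itself is only a few lines; the zero-count handling and the convention that the mean NC is unity are choices made for this sketch.

    import numpy as np

    def direct_normalisation(uniform_scan):
        """NCs from a scan in which every LOR sees the same coincidence
        source: proportional to the inverse of the recorded counts, scaled
        so that the mean NC over valid LORs is 1. LORs with no counts are
        flagged with NC = 0 (dead or unsampled)."""
        counts = uniform_scan.astype(float)
        nc = np.zeros_like(counts)
        valid = counts > 0
        nc[valid] = 1.0 / counts[valid]
        nc[valid] *= valid.sum() / nc[valid].sum()
        return nc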

Both these problems can be overcome in conventional PET cameras by using a component-based variance reduction method (e.g. Hoffman et al 1989). NCs are modelled as the product of intrinsic crystal efficiencies and a small number of geometric factors that account, for example, for the variation in crystal efficiency with photon incidence angle. The NCs are not all independent, as any given crystal efficiency is a factor in many NCs, and, if the geometric factors are accurately known, the number of unknowns is reduced from the number of LORs to the number of crystals. There is a trade-off between systematic errors and statistical accuracy that depends on the complexity of the model; however, for an ECAT 951R operating in 3D mode there are about 1.25 million LORs and just 8192 detectors, so the potential for variance reduction is very large (Badawi et al 1998).
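
The sketch below illustrates the component-based idea in its simplest form: relative crystal efficiencies are estimated by iteratively matching the measured counts in each crystal's fan of LORs, on the assumption that the geometric factors have already been divided out. The iteration scheme and array layout are assumptions made for the example; published component-based methods differ in detail.

    import numpy as np

    def fan_sum_efficiencies(lor_counts, n_iter=5):
        """Estimate relative crystal efficiencies eps_i from a uniform scan,
        assuming counts_ij ~ eps_i * eps_j for every measured crystal pair.
        `lor_counts` is a symmetric (n_crystals x n_crystals) array that is
        zero for pairs never in coincidence."""
        in_coinc = lor_counts > 0
        fan_meas = lor_counts.sum(axis=1).astype(float)  # counts in each fan
        eps = np.ones(lor_counts.shape[0])
        for _ in range(n_iter):
            # fan sums predicted by the current efficiency estimates
            fan_model = (np.outer(eps, eps) * in_coinc).sum(axis=1)
            ratio = np.divide(fan_meas, fan_model,
                              out=np.ones_like(fan_meas), where=fan_model > 0)
            eps *= ratio
            eps /= eps.mean()    # fix the arbitrary overall scale
        return eps

The full NC for a given LOR would then be the product of the two crystal efficiencies and the appropriate geometric factors.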

While component-based normalisation is a promising technique, it remains a developing field of study in 3D PET, and several authors have reported the presence of residual artefacts in images reconstructed from normalised acquisitions of uniform cylindrical phantoms (e.g. Bailey et al 1996, Oakes et al 1998, Badawi and Marsden 1999a).
 




6.6 Dead-time correction

In both 2D and 3D mode, there will be losses due to detector and system dead-time. To obtain quantitative results, acquired data should be corrected for these losses. This is usually done by modelling the dead-time losses as a combination of paralysable and non-paralysable components, and obtaining parameters for the model from experiments involving repeated measurements of a decaying source (e.g. Casey et al 1995).
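
One way such a model might be implemented is sketched below: a paralysable stage in series with a non-paralysable one, with the two dead-time parameters fitted to count-rates measured as a source decays. The series arrangement, starting values and bounds are assumptions made for the example.

    import numpy as np
    from scipy.optimize import curve_fit

    def observed_rate(true_rate, tau_p, tau_np):
        """Count-rate seen by a system with a paralysable component
        (r*exp(-r*tau_p)) followed by a non-paralysable component
        (r/(1 + r*tau_np)); rates in counts/s, dead-times in seconds."""
        r = true_rate * np.exp(-true_rate * tau_p)
        return r / (1.0 + r * tau_np)

    def fit_deadtime(true_rates, measured_rates):
        """Fit (tau_p, tau_np) to decaying-source data; the true rates can
        be inferred from the known half-life and a low-rate reference."""
        popt, _ = curve_fit(observed_rate, true_rates, measured_rates,
                            p0=[1e-6, 1e-6], bounds=(0.0, 1e-3))
        return popt

The dead-time correction factor at a given operating point is then simply the ratio of the true to the observed rate.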

As discussed in section 5.6, a feature of block-detector systems is event mis-positioning at high count-rates due to pulse pile-up. In 2D mode, mis-positioning due to pulse pile-up has been shown to be unimportant except at very high activity concentrations (Germano and Hoffman 1990). In 3D mode, pile-up can lead to high-frequency image artefacts and quantitative error if normalisation measurements are carried out at count-rates significantly different from those of the emission measurements. A first-order correction scheme for this effect has been described by Badawi and Marsden (1999c).


Last revised by: Ramsey Badawi
Revision date: 12 Jan 1999