Division of Nuclear Medicine
SimSET Bug Report:
Coherent Scatter Software Bugs



Last revised by: Steven Vannoy

Revision date: 13 Jan 1999

[Bug Description] [Test report for fix]

Bug Description

When testing a module we are adding to SimSET, we found several bugs in the photon tracking software of the 2.5 beta version you downloaded from our server. These bugs were introduced when coherent scatter was added to the software; they do not affect earlier versions of the software.

The bug that will have the greatest effect on the output data is active whenever the user does not turn coherent scatter on ("model_coherent_scatter = false" in the parameter file). Our intent, when not modeling coherent scatter, is to not model any interaction when coherent scatter would otherwise occur. Hence, the photon is passed on without scattering.

This is different from the pre-coherent scatter SimSET software, which modeled all scatter as Compton scatter. We introduced an error when making this change. When not modeling coherent scatter, the attenuation coefficient should be reduced by a factor representing the attenuation due to coherent scatter. Instead the attenuation coefficient is reduced by a factor that depends on the difference between the probability of an interaction being scatter and the probability of a scatter being Compton. The effect this has depends on the energy of the photon and the material the interaction occurs in. Thus, the exact effect on a given simulation is difficult to predict. Generally, the effect will be small in biologic tissues at energies above 250 keV, as there is only a small difference between the probability of an interaction being scatter and the probability of a scatter being Compton. The effect will be large in biologic tissue at energies below 150 keV and in detector and collimator materials at all energies.
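
In terms of the quantities tabulated by SimSET (the probability of an interaction being a scatter and the probability of a scatter being Compton), the intended behavior is an effective attenuation coefficient with the coherent contribution removed; this is the form used in the predicted results of the tests below. A minimal sketch (Python, illustrative only; the names are descriptive placeholders, not SimSET identifiers):

# Intended effective attenuation coefficient when coherent scatter is not modeled.
# prob_scatter = P(interaction is a scatter), compton_to_scatter = P(scatter is Compton).
def effective_mu_no_coherent(mu_total, prob_scatter, compton_to_scatter):
    # P(absorption) + P(Compton scatter) = 1 - P(coherent scatter), so this
    # removes the coherent contribution from the total attenuation coefficient.
    return mu_total * ((1.0 - prob_scatter) + prob_scatter * compton_to_scatter)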

A second bug with a potentially large effect occurs only when users choose to force non-absorption ("forced_non_absorption = true" in the parameter file) in the object and model coherent scatter ("model_coherent_scatter = true" in the parameter file). In this case coherent scatters are not modeled. This bug will have a large effect on simulations where a lot of coherent scatter is expected (i.e. at low energies and in detector and collimator materials).

Two other related bugs partially reduced the effect of the first bug described above. (1) In planar detectors we did not update the position of photons correctly when coherent scatter was being skipped. (2) When tracking through the object (tomograph field-of-view), we attempted an extra forced detection when coherent scatter was being skipped. These bugs are only active when the user does not use coherent scatter ("model_coherent_scatter = false" in the parameter file).

Finally, there was a bug in a statistic reported in the on-screen output. The statistic "Total photons passing through crystal without interacting" included coherent scattered photons in addition to photons that had not interacted.

We apologize for any inconvenience these bugs cause you. Unfortunately we introduced the two major ones when modifying the software to facilitate testing of the coherent scatter distribution. Tests that would have uncovered these bugs were performed prior to these modifications and we neglected to redo them after the changes were made.

 



 

Tests for "SubObjGetProbComptScatter2" Bug Fix

General Problem:

A series of changes introduced with the implementation of coherent scatter modeling created a bug that affected the way the scatter data tables were initialized, depending on whether or not coherent scatter was being modeled. This error produced false scatter events when coherent scatter was not being modeled.

Problem Details:

When not modeling coherent scatter, the attenuation coefficient should be reduced by a factor representing the attenuation due to coherent scatter. Instead the attenuation coefficient was being reduced by a factor that depends on the difference between the probability of an interaction being scatter and the probability of a scatter being Compton. The effect this has depends on the energy of the photon and the material the interaction occurs in. Thus, the exact effect on a given simulation is difficult to predict. Generally, the effect will be small in biologic tissues at energies above 250 keV, as there is only a small difference between the probability of an interaction being scatter and the probability of a scatter being Compton. The effect will be large in biologic tissue at energies below 150 keV and in detector and collimator materials at all energies.

A second bug with a potentially large effect occurred only when users chose to force non-absorption ("forced_non_absorption = true" in the parameter file) in the object and to model coherent scatter ("model_coherent_scatter = true" in the parameter file). In this case coherent scatters are not modeled. This bug will have a large effect on simulations where a lot of coherent scatter is expected (i.e., at low energies and in detector and collimator materials).

Problem Resolution:

With coherent scatter modeling turned off, the fourth column in the attenuation table (the probability of a scatter being a Compton scatter) was being set equal to the third column (the probability of an interaction being a scatter). The solution is to set the probability of a scatter being Compton equal to one if coherent scatter is not being modeled.
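
Schematically, the fix amounts to the following (a sketch only; the table layout and field names are hypothetical, not the SimSET source):

# Hypothetical sketch of the table initialization fix described above.
# Each row holds: energy, attenuation coefficient, P(interaction is a scatter),
# P(scatter is Compton).
def init_compton_to_scatter_column(rows, model_coherent_scatter):
    for row in rows:
        if not model_coherent_scatter:
            # Buggy version set this column equal to the scatter-probability column.
            # With coherent scatter off, every modeled scatter is a Compton scatter.
            row["compton_to_scatter_prob"] = 1.0
    return rows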

Validation:

This document defines tests for each of the affected modules and presents the results of our simulations.


Test 1: Test the scatter modeling in the object (EmisList and PhoTrk modules).

Method: Verify that the expected number of photons pass through the object without interacting.

Part A

Simulate:
10,000,000 decays
No Importance Sampling
511 keV Photons, minimum = 511
Single photon
Coherent scatter off
Fixed Direction = true

Object:
Point source at (0,0,0) in a 4cm diameter cylinder of NaI
z-axis = 1cm

No detector
No collimator.

Binning:
1 bin, bin all photons which haven't scattered (or been absorbed, but they don't reach the binning module anyway).

Predicted Results

O = I(exp(-2mu*((1 - probScatter) + (probScatter * comptonToScatterProb))))
O = I(exp(-2*2.64424*((1 - 0.20421) + (0.20421*0.62292))))
O = I(exp(-2*(2.64424*0.922996)))
O = I * 0.007588

Where I is the number of photons started and O is the number of photons binned. This can also be done using the weights, which you might want to save for parts B and C.
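
The prediction can be checked with a few lines (an illustrative sketch, not part of SimSET; the constants are the values quoted above, and the 2 cm path is the distance from the point source to the edge of the cylinder). The same arithmetic, with the constants quoted there, gives the predictions for Tests 2 and 3.

import math

mu = 2.64424                  # total attenuation coefficient used above (1/cm)
prob_scatter = 0.20421        # probability an interaction is a scatter
compton_to_scatter = 0.62292  # probability a scatter is Compton
path = 2.0                    # cm from the point source to the edge of the 4 cm diameter cylinder

coherent_off = math.exp(-path * mu * ((1 - prob_scatter) + prob_scatter * compton_to_scatter))
coherent_on = math.exp(-path * mu)
print(coherent_off, coherent_on)   # ~0.00759 (Part A) and ~0.00505 (Part C)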

Actual Results:

O = 10000000*.0076 = 76000
Simulated (S) = 75807
S/O = 0.997

(S-O)/(sqrt(O)) = -0.7


Part B: Same as Part A, except with Forced Non-Absorption turned on.

Predicted Results
Same as Part A, except that the comparison must be done using the weights for O and I. As the current code does not report the total weight started, compare the total binned weight with the total binned weight in Part A.

Actual Results:
Binned weight in A = 2.692665e+05
Binned weight in B = 2.692899e+05
A/B =.999
Binned weight squared in A = 9.564345e+05
Binned weight squared in B = 9.565178e+04
(A-B)/(sqrt(wtsqA+wtsqB)) = -0.0228
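
For reference, the ratio and the difference statistic above follow directly from the binned totals (an illustrative restatement of the arithmetic, using the values as reported):

import math

weight_a, weight_b = 2.692665e+05, 2.692899e+05
wtsq_a, wtsq_b = 9.564345e+05, 9.565178e+04
print(weight_a / weight_b)                                  # A/B
print((weight_a - weight_b) / math.sqrt(wtsq_a + wtsq_b))   # (A-B)/sqrt(wtsqA+wtsqB), ~ -0.023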

 

Part C: Same as Part A, except turn on coherent scatter.

Predicted Results
O = I(exp(-2mu))
O = I(exp(-2*2.64424))
O = I * 0.005049

Actual Results:
O = 10000000*.005049 = 50490
Simulated (S) = 50173
S/O = .994
(S-O)/(sqrt(O)) = -1.41


Test 2: Test the scatter modeling in the planar detector module "DetPlanar".

Method: Verify that the expected number of photons pass through the detector crystal without interacting.

Part A: Simulate without any importance sampling or coherent scatter.

Simulate:
1,000,000 decays
No Importance Sampling
511 keV Photons, minimum = 511
Single photon
Coherent scatter off
Fixed Direction = true

Object:
Point source at (0,0,0) in air
z-axis = 1cm
No collimator.

Detector Parameters:
Planar
Forced Interaction = false
inner radius = 25 cm
transaxial length = 10 cm
axial length = 1 cm
number views = 1 at 0 degrees
num layers = 1
layer is active = true
layer material = NaI (9)
layer depth = 2 cm

Binning:
1 bin, bin all photons which haven't scattered (or been absorbed, but they don't reach the binning module anyway). Or, if you have it in DetPlanar, you can use the number of photons that pass through without interacting.

Predicted Results

O = I(exp(-2mu*((1 - probScatter) + (probScatter * comptonToScatterProb))))
O = I(exp(-2*0.34107*((1 - 0.80093) + (0.80093*0.93434))))
O = I(exp(-2*(0.34107*0.94741)))
O = I * 0.52400
Where I is the number of photons started and O is the number of photons binned. This can also be done using the weights.

Actual Results:

O = 999784 *.524 = 523887
Simulated (S) = 523710
S/O = .999
(S-O)/(sqrt(O)) = -0.244


Part B: Same as Part A, except turn on coherent scatter.

Predicted Results
O = I(exp(-2mu))
O = I(exp(-2*0.34107))
O = I * 0.50553

Actual Results:
O = 999784 *.50553 = 505422
Simulated (S) = 505129
S/O = .999
(S-O)/(sqrt(O)) = -0.41


Test 3: Test the scatter modeling in the PET collimator module. Scatter is not modeled in the SPECT collimator module.

Part A

Simulate:
1,000,000 decays
No Importance Sampling
511 keV Photons, minimum = 511
PET
Coherent scatter off
Fixed Direction = true

Object:
Point source at (0,0,0) in air
z-axis = 1cm

Collimator Parameters
Monte Carlo PET
1 layer, 1 segment
parallel
inner radius = 25 cm
outer radius = 27 cm
material = NaI
min z = 0.5 cm
max z = 0.5 cm

No detector

Binning:
1 bin, bin all coincidences which haven't scattered (or been absorbed, but they don't reach the binning module anyway).

Predicted Results
Note that the following result includes a square, unlike the single photon result.
O = I(exp(-2mu*((1 - probScatter) + (probScatter * comptonToScatterProb))))^2
O = I(exp(-2*0.34107*((1 - 0.80093) + (0.80093*0.93434))))^2
O = I(exp(-2*(0.34107*0.94741)))^2
O = I * 0.52400^2
Where I is the number of decays started and O is the number of coincidences binned. This can also be done using the weights.
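
Because a coincidence requires both annihilation photons to cross the 2 cm of collimator (outer radius minus inner radius) without interacting, the single-photon transmission is squared. A quick numeric check (illustrative only, using the constants quoted above):

import math

mu = 0.34107                  # attenuation coefficient used above (1/cm)
prob_scatter = 0.80093        # probability an interaction is a scatter
compton_to_scatter = 0.93434  # probability a scatter is Compton
path = 2.0                    # cm: 27 cm outer radius - 25 cm inner radius

single = math.exp(-path * mu * ((1 - prob_scatter) + prob_scatter * compton_to_scatter))
print(single, single ** 2)    # ~0.524 per photon, ~0.275 per coincidence (Part A)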

Actual Results:
O = 999557 * 0.52400^2 = 274454
Simulated (S) = 274527
(S-O)/(sqrt(O)) = 0.14


Part B: Same as Part A, except turn on coherent scatter.

Predicted Results
O = I(exp(-2mu))^2
O = I(exp(-2*0.34107))^2
O = I * 0.50553^2

Actual Results:
O = 999568 * 0.50553^2 = 255450
Simulated (S) = 255446
(S-O)/(sqrt(O)) = -0.008


Test 4: Verify the changes have not created bugs in the Stratification and Forced Detection routines.

Method: Compare results from simulations with and without importance sampling.

Part A: Non-importance sampling run.

Simulate:
10,000,000 decays
No Forced Detection or Stratification
Forced NonAbsorption = True
140 keV Photons, minimum = 100
Single photon
Coherent scatter off
Turn off Fixed Direction!

Object:
Point source at (0.5,0.5,0) in a 2 cm diameter cylinder of NaI
(You can do this by making the object 2x2 pixels, maxX = maxY = 1, minX=minY=-1, and then put point source activity in just one voxel.)
z-axis = 1cm

No detector
No collimator.

Target cylinder:
1 cm radius, 1 cm z-axis

Binning:
10 d bins, from -1 to 1.
30 a bins.
Bin all photons with energy >= 100 keV.
Bin counts, weight, and weight squared.

Results

Compare weights with those in part B.


Part B: Importance Sampling run.

Simulate

Same as Part A, except with Stratification (do a preliminary run to get an input productivity file) and Forced Detection.

Results

Comparison with Part A using a t-test.

Number of bins checked = 300
Number of valid bins in set 'A' = 44
Number of valid bins in set 'B' = 44
Number of valid pairs found = 44
Number of pairs with one valid and one too small = 0
Number of pairs with both > 0 but < min = 0
Number of pairs with both zero = 256

Num pairs with valid weights = 44
Minimum T-Test value = -1.900
Maximum T-Test value = 1.749

0.0 <= |T-Test| <= 1.0 = 31 => %70.45
0.0 <= |T-Test| <= 2.0 = 44 => %100.00
0.0 <= |T-Test| <= 3.0 = 44 => %100.00
0.0 <= |T-Test| <= 1.90 = 44, => %100.00

NOTE: This test was run with all combinations of the importance sampling options (stratification, forced detection, forced non-absorption), and all variations pass the t-test criteria.
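
The comparison utility's exact formula is not spelled out here, but a per-bin statistic of the same general form as the weight comparison in Test 1, Part B can be sketched as follows (illustrative only; bins with too little weight are skipped, as in the summary above):

import math

def compare_weighted_bins(weights_a, wtsq_a, weights_b, wtsq_b, min_weight=1e-30):
    # Per-bin comparison of two weighted histograms: difference of the binned
    # weights over the square root of the summed squared weights, with
    # under-populated bins skipped.
    tvals = []
    for wa, sa, wb, sb in zip(weights_a, wtsq_a, weights_b, wtsq_b):
        if wa > min_weight and wb > min_weight:
            tvals.append((wa - wb) / math.sqrt(sa + sb))
    for cutoff in (1.0, 2.0, 3.0):
        n = sum(1 for t in tvals if abs(t) <= cutoff)
        print("|t| <= %.1f: %d of %d bins" % (cutoff, n, len(tvals)))
    return tvals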

 


Test 5: Test the distribution of Compton and coherent scattered photons in the PHG.

Method: This is a modified version of Test 2 from the original coherent scatter testing document. It uses a very small, extremely thin object to minimize problems with attenuation of photons after they scatter. A new material is created to minimize the attenuation of Compton scattered photons.

The predicted results are calculated as if there were no attenuation of scattered photons. However, coherent scattered photons will undergo some attenuation. Thus, if we run too many photons we will see a statistically significant difference between predicted and simulated results. I have tried to design this test so that the systematic discretization errors will be small compared to the stochastic errors. However, if the results are visually very close but look a little too far off numerically, the tests may be rerun with the object dimensions reduced to 1/10 of those given here and the number of events simulated increased by a factor of 10. If the original error was stochastic, these changes should result in a closer fit.

Simulate:
500,000,000 decays
No Forced Detection or Stratification
Forced NonAbsorption = False
140 keV Photons, minimum = 100
Single photon
Coherent scatter on

Debug switch:
Turn fixed direction on (-d 1).

Object:
Point source at (-0.001,0,0) in a 0.003 cm diameter cylinder of halfnhalf (a not-yet-discovered material described below). (This can be realized with a 3x3 object, xmin=ymin=-0.0015, xmax=ymax=0.0015, with activity in the pixel containing the desired point source.)
z-axis = 0.00000001cm
You might want to create a special activity translation table for this simulation with values 10,000 times higher so that the numbers don't end up being too tiny. (Don't modify the regular table; that caused some problems last time!)

Target cylinder

10 cm radius, 0.02 cm z-axis

New attenuation material:

Make a special attenuation table that includes halfnhalf, which has the following entries:
5 0.000001 0.000001 0.5
...
139 0.000001 0.000001 0.5
140 1000.0 0.999999 0.5
141 0.000001 0.000001 0.5
....
Notice only 140 keV has significant attenuation.
halfnhalf should be given the same angular distribution table (halfnhalf.ad) as lead.
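
If it is more convenient to generate the halfnhalf table than to type it, something like the following will do (a sketch only). The column interpretation (energy in keV, attenuation coefficient, probability an interaction is a scatter, probability a scatter is Compton) follows the attenuation table description in the Problem Resolution section; the energy range below is a placeholder for whatever range your other tables cover.

# Sketch: emit the halfnhalf attenuation-table rows described above.
# Columns assumed: energy (keV), attenuation coefficient, P(scatter), P(Compton | scatter).
for energy in range(5, 1001):                       # placeholder energy range (keV)
    if energy == 140:
        print("%d 1000.0 0.999999 0.5" % energy)    # only 140 keV attenuates significantly
    else:
        print("%d 0.000001 0.000001 0.5" % energy)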

No collimator
No detector

Binning:
Single-scatter photons only (1 scatter bin).
2 energy bins, 100-139.999 and 139.999 to 179.998 (Compton scatter will appear in the lower bin, coherent scatter in the upper)
1 d bin, from -0.1 to 0.1.
360 a bins.
1 z bin from -0.01 to 0.01.
Bin all photons with energy >= 100 keV.
Bin counts, weight, and weight squared.

Results

The coherent data is used to derive an estimate of the distribution of scatter angles. This is then compared to the distribution from the table SimSET used to choose the coherent scatter angles.

The two are in good agreement (see Figure 2).

[Figure 2: simulated coherent scatter angular distribution compared with the distribution from the SimSET table]


The Compton data is used to derive an estimate of the relative probability of scattering into an angle. The Klein-Nishina formula (C.D. Zerby, "Response of gamma-ray scintillation counters", in Methods of Computational Physics, vol. 1, Academic Press, NY, pp. 110; this is the reference we used to implement the Klein-Nishina formula) is used to derive a similar estimate. These two estimates are compared in Figure 3. They are in good agreement.
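
For anyone repeating the comparison, the Klein-Nishina estimate can be computed directly (an illustrative sketch, not the spreadsheet used here). How the cross section is converted into a per-angular-bin probability depends on the binning geometry, for example whether the thin z bin restricts the comparison to in-plane scatters.

import math

def klein_nishina(theta_rad, energy_kev=140.0):
    # Relative Klein-Nishina differential cross section, d(sigma)/d(Omega), for
    # Compton scattering through polar angle theta; overall constants dropped.
    k = energy_kev / 511.0                                  # photon energy in electron rest-mass units
    ratio = 1.0 / (1.0 + k * (1.0 - math.cos(theta_rad)))   # scattered / incident photon energy
    return ratio ** 2 * (ratio + 1.0 / ratio - math.sin(theta_rad) ** 2)

print(klein_nishina(math.radians(90.0)))                    # relative value at a 90 degree scatter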

 

This spreadsheet compares the results of tests 5a, 5b, and 5c. Tests 5b and 5c duplicate the test 5a simulation except that coherent scatter is not simulated; only Compton scatter is modeled. Test 5b uses forced detection, test 5c does not. The Compton scatter distributions from the three tests ought to be the same within the statistical uncertainty of the data.

The columns of data to the right culminate in t-tests showing that the three tests give statistically similar distributions. Figure 4, below, shows the similarity of the three distributions.

[Figure 4: Compton scatter angular distributions from tests 5a, 5b, and 5c]
