Answer to Question #11163 Submitted to "Ask the Experts"
Category: Instrumentation and Measurements — Instrument Calibration (IC)
The following question was answered by an expert in the appropriate field:
For a thermoluminescence (TL) phosphor, how do I determine minimum detectable dose (MDD)? A sensitive TL phosphor and a not-so-sensitive TL phosphor should have different MDDs. The literature says that the MDD should be determined from three standard deviations of readings obtained from zero-dose annealed phosphors. In this way, there will be no difference in the factor used to convert readings from sensitive and not-so-sensitive TL phosphors.
If you are starting with a batch of thermoluminescent dosimeters (TLDs) of the same phosphor, it is true that they may vary significantly in sensitivity. It is fairly common practice to sort the dosimeters in order to select those that have acceptably similar sensitivities. Depending on your specific requirements, you can establish whatever criteria you wish. For example, you may choose to select dosimeters that exhibit a relative standard deviation of no more than 5% of the mean. This is based on irradiating a fairly large group of dosimeters all to the same dose and then regrouping them according to sensitivity and evaluating the standard deviations associated with the respective groups. To improve the accuracy of interpreted doses, it is also rather common to determine a calibration factor for each dosimeter (e.g., microsieverts per nanocoulomb) in order to obtain the most accurate dose interpretation when a given dosimeter is used.
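To make the sorting and per-dosimeter calibration concrete, here is a minimal Python sketch; the reader outputs, the delivered dose, and the simplified within-5%-of-the-mean acceptance test are hypothetical illustrations, not values from actual practice:

    # Sketch: sorting TLDs by sensitivity and assigning individual
    # calibration factors. All numbers are hypothetical.
    delivered_dose_uSv = 1000.0  # dose (microsieverts) given to every dosimeter

    # Hypothetical reader outputs (nanocoulombs) for a batch irradiated
    # to the same dose; real batches are usually much larger.
    readings_nC = [98.2, 101.5, 99.7, 104.1, 96.8, 100.3, 130.5, 72.4]

    mean = sum(readings_nC) / len(readings_nC)
    variance = sum((r - mean) ** 2 for r in readings_nC) / (len(readings_nC) - 1)
    rel_std = (variance ** 0.5) / mean
    print(f"Relative standard deviation of batch: {rel_std:.1%}")

    # Keep dosimeters whose response lies within 5% of the batch mean,
    # a simplified stand-in for the regrouping described above.
    accepted = [r for r in readings_nC if abs(r - mean) / mean <= 0.05]

    # Individual calibration factor (microsieverts per nanocoulomb)
    # for each accepted dosimeter.
    cal_factors = [delivered_dose_uSv / r for r in accepted]
    print("Calibration factors (uSv/nC):", [f"{c:.2f}" for c in cal_factors])

With these made-up readings, the two outlying dosimeters (130.5 nC and 72.4 nC) are rejected and each accepted dosimeter receives its own factor near 10 uSv/nC.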
Once you have selected the dosimeters, the usual procedure is to put them through the preirradiation anneal procedure that brings the dosimeters back to a typical zero-dose "background" level. Reading a group (typically at least 10) of these background dosimeters will yield a mean and standard deviation. The value of three standard deviations is commonly used to define the MDD, after converting from machine output units to dose through the appropriate conversion factor (using the individual dosimeter calibration factors if you decide to go that route).
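In Python, the three-standard-deviation definition might be computed as follows; the background readings and the conversion factor are made-up numbers chosen only to show the arithmetic:

    import statistics

    # Hypothetical zero-dose ("background") readings in nanocoulombs
    # from ten annealed, unirradiated dosimeters.
    background_nC = [0.52, 0.48, 0.55, 0.50, 0.47, 0.53, 0.49, 0.51, 0.46, 0.54]

    mean_bkg = statistics.mean(background_nC)
    sd_bkg = statistics.stdev(background_nC)  # sample standard deviation

    # Hypothetical batch conversion factor; individual dosimeter
    # calibration factors could be substituted here, as noted above.
    cal_factor_uSv_per_nC = 10.0

    # MDD defined as three standard deviations of the background,
    # converted from reader output to dose.
    mdd_uSv = 3.0 * sd_bkg * cal_factor_uSv_per_nC
    print(f"Background: {mean_bkg:.3f} +/- {sd_bkg:.3f} nC")
    print(f"MDD (3 sigma): {mdd_uSv:.2f} uSv")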
Some users prefer to specify the MDD based on a particular level of uncertainty at the 95% confidence level. For example, you may specify that you require that the MDD exhibit no more than 15% random uncertainty at the 95% confidence level (i.e., the two-sigma value divided by the MDD is no larger than 0.15). In such cases the MDD may be considerably higher than that determined on the basis of three standard deviations.
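This uncertainty-based specification reduces to a simple formula: if the two-sigma background value divided by the MDD must not exceed 0.15, then MDD = 2 sigma / 0.15, or roughly 13.3 standard deviations. A short sketch, reusing the hypothetical numbers from the previous example:

    import statistics

    background_nC = [0.52, 0.48, 0.55, 0.50, 0.47, 0.53, 0.49, 0.51, 0.46, 0.54]
    sd_bkg = statistics.stdev(background_nC)
    cal_factor_uSv_per_nC = 10.0  # hypothetical, as before

    max_rel_uncertainty = 0.15  # requirement: 2*sigma / MDD <= 0.15

    # Solving 2*sigma / MDD = 0.15 for MDD gives MDD = 2*sigma / 0.15,
    # about 13.3 sigma -- well above the 3-sigma definition.
    mdd_uSv = (2.0 * sd_bkg / max_rel_uncertainty) * cal_factor_uSv_per_nC
    print(f"MDD at 15% relative uncertainty: {mdd_uSv:.2f} uSv")

Under this criterion the MDD is about 4.4 times (13.3 sigma versus 3 sigma) the three-standard-deviation value, consistent with the statement above.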
If you do not go through the process of preselecting the dosimeters to reduce the variability, and you do not use individual dosimeter calibration factors, there would be, as you imply, a single, possibly quite uncertain, conversion factor to get from machine output to dose that applies to every dosimeter. Depending on how much variability in response existed among the dosimeters, this could lead to appreciable uncertainty in the interpreted doses.
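One way to see the effect is to apply a single batch-average conversion factor to the unsorted hypothetical readings from the first sketch; the dose error for any one dosimeter then scales with its deviation from the mean sensitivity:

    # Sketch of the dose error from using one batch conversion factor
    # for dosimeters of differing sensitivity. Numbers are hypothetical.
    delivered_dose_uSv = 1000.0
    readings_nC = [98.2, 101.5, 99.7, 104.1, 96.8, 100.3, 130.5, 72.4]

    batch_factor = delivered_dose_uSv / (sum(readings_nC) / len(readings_nC))

    for r in readings_nC:
        interpreted = batch_factor * r
        error_pct = 100.0 * (interpreted - delivered_dose_uSv) / delivered_dose_uSv
        print(f"reading {r:6.1f} nC -> {interpreted:7.1f} uSv ({error_pct:+.1f}%)")

With these numbers the two outlying dosimeters would have their doses misinterpreted by roughly +30% and -28%, which is the kind of appreciable uncertainty described above.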
George Chabot, PhD, CHP