3

My understanding of the limitation of radiometric dating is that background radiation swamps the radiation from C14 once the remaining atoms get few enough in number. Accelerator mass spectrometry seems to actually count every atom in the sample, meaning background radiation doesn't matter. Yet the advantage of AMS dating stated here is "can use smaller sample size", not "can give dates much farther into the past using the same sample size". Are the machines too expensive to build one that can test larger samples?

Noumenon
  • 133

2 Answers

4

In the end, the ability to accurately count small amounts of sample only gets you logarithmic gains in how far back you can date materials, due to the exponential nature of radioactive decay.

Suppose you have a material with mass $m$, a mass fraction $\chi_\mathrm{C/tot}$ of which is carbon. Then a fraction $\chi_\mathrm{C-14/C} = 10^{-12}$ of its carbon will initially be C-14. We can convert masses to atom counts with the factor $N_A/M$, where $N_A = 6\times10^{23}\ \mathrm{atoms/mol}$ is Avogadro's number and $M = 12\ \mathrm{g/mol}$ is the molar mass of carbon. Suppose also that your detection method is only reliable with a minimum of $N_\mathrm{min}$ C-14 atoms. Then the maximum age you can date is $$ T_\mathrm{max} = \tau \log_2\left(\frac{N_A\chi_\mathrm{C/tot}\chi_\mathrm{C-14/C}m}{MN_\mathrm{min}}\right), $$ where $\tau = 5700\ \mathrm{years}$ is the half-life of C-14.

If you decrease $N_\mathrm{min}$ by a factor of $500$ (which seems to be the claim in the link), you increase $T_\mathrm{max}$ by $\log_2 500 \approx 9$ half-lives, or about $50{,}000\ \mathrm{years}$. This isn't too impressive, and the returns are only diminishing. Not only that, but there is a hard limit on how small $N_\mathrm{min}$ can be (you can't detect fractions of an atom!). Certainly by the time $N_\mathrm{min}$ is $10$, the results are statistically unreliable even if your counting is perfect. For $N_\mathrm{min} = 1$, the results are garbage.
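To make the logarithmic scaling concrete, here is a small sketch of the formula above. The specific numbers (a 1 mg pure-carbon sample, detection floors of $500{,}000$ vs. $1{,}000$ atoms) are my own illustrative assumptions, not values from the linked claim:

```python
import math

N_A = 6e23        # Avogadro's number, atoms/mol
M = 12.0          # molar mass of carbon, g/mol
CHI_C14 = 1e-12   # initial C-14/C atom fraction
TAU = 5700.0      # C-14 half-life, years

def t_max(m_grams, n_min, chi_carbon=1.0):
    """Maximum datable age for a sample of mass m_grams whose carbon
    mass fraction is chi_carbon, given a detection floor of n_min atoms."""
    n_c14 = N_A * chi_carbon * CHI_C14 * m_grams / M
    return TAU * math.log2(n_c14 / n_min)

# Hypothetical numbers: a 1 mg pure-carbon sample, with the detection
# floor improved by a factor of 500.
coarse = t_max(1e-3, 500_000)
fine = t_max(1e-3, 1_000)
print(f"gain: {fine - coarse:.0f} years")  # gain = tau * log2(500), about 51,000 years
```

Note that $m$ and $N_\mathrm{min}$ enter only through the ratio $m/N_\mathrm{min}$, which is exactly why a $500\times$ better counter can instead be spent on a $500\times$ smaller sample at the same $T_\mathrm{max}$.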

On the other hand, decreasing $N_\mathrm{min}$ by $500$ allows you to get the same $T_\mathrm{max}$ with $1/500$ the sample size $m$. This allows for far more things to be tested.

If you want to date something older, you should probably be looking for a different isotope, not a more accurate counting device.

0

After accepting an answer, I found a more experimentally focused treatment at talkorigins.org.

Why would the instrument reading be noisy if it counts every atom?

The second contribution, laboratory contamination, is largely due to sample chemistry (pretreatment, hydrolysis or combustion to CO2, and reduction to graphite), which generally introduces a small amount of modern carbon, typically at least 1 microgram [8, 12, 13, 14]. Thus a 1 mg sample of infinitely old carbon would measure at least 0.1 pMC (percent modern carbon) before background subtraction...

The third contribution, instrument background, has a number of sources. The main sources are generally the following:

  1. ion source “memory” of previous samples, due to radiocarbon sticking to the walls of the ion source, thermally desorbing, and then sticking to another sample
  2. mass spectrometer background, non-radiocarbon ions that are misidentified as radiocarbon, sometimes through unexpected mechanisms [16]
  3. detector background, including cosmic rays and electronics noise
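The quoted contamination floor can be turned into an apparent-age limit with the same half-life relation used in the accepted answer. This is a rough sketch under my own assumptions (a $5700$-year half-life and the quoted $0.1$ pMC floor for a 1 mg sample):

```python
import math

TAU = 5700.0  # C-14 half-life in years (approximate)

def apparent_age(pmc):
    """Radiocarbon age implied by a percent-modern-carbon (pMC) reading."""
    return TAU * math.log2(100.0 / pmc)

# 1 microgram of modern carbon contaminating 1 mg of "infinitely old"
# carbon reads as 0.1 pMC, which caps the measurable age at roughly
# tau * log2(1000), i.e. somewhere near 57,000 years.
print(round(apparent_age(0.1)))
```

In other words, laboratory contamination alone imposes a ceiling of the same order as the detection-floor argument above, independent of how well the instrument counts atoms.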

Why can't the sample size be bigger?

The maximum allowed sample size is typically about 10 mg of carbon. Larger samples produce excessive CO2 pressure in the sealed tubes used in the process, causing tubes to explode and samples to be lost. Thus, even if larger samples like RATE’s “on the order of 100 mg” [6] are submitted to an AMS laboratory, only about 1 mg of carbon will actually undergo analysis. Though Baumgardner calls a 1 mg sample “tiny” [6], it is generally considered “large” by AMS laboratories [e.g., 5, 7, 8], with enough carbon to provide ion source current for about a day.

Noumenon
  • 133