
The recent fusion experiment at the National Ignition Facility delivered "2.05 megajoules (MJ) of energy to the target, resulting in 3.15 MJ of fusion energy output." But where exactly is the input energy measured?

My crude understanding is that the input energy starts out as electric charge in a bank of capacitors, is converted into incoherent xenon flashlamp photons, becomes coherent infrared laser photons, is upconverted to coherent ultraviolet photons, and is finally converted to (incoherent?) X-ray photons.

Where exactly is the 2.05 MJ measured? They are obviously not talking about the first two stages, but is it IR, UV, or even X-rays? And where in the system is it measured? There are obviously a lot of losses associated with beamforming, switching, etc.

There has been some discussion about how the neutron flux is measured and the output energy is estimated, but I've not seen anything on how the input energy is estimated.

Roger Wood

3 Answers


Where exactly is the 2.05 MJ measured?

It's the amount of UV light entering the target chamber. The reason for this particular measure is primarily historical.

Note that LLNL has, on occasion, redefined the location of the measurement to make results look better. For instance, in 2013 they claimed scientific breakeven by redefining the input energy as "the energy deposited in the fuel", which removed several additional very significant losses along the line so that it was something like 10 kJ of input compared to the ~2 MJ of UV. So your question does need to be asked, every time.

So, why this measure and not some other, like the 4 MJ of actual IR the laser produces? Well, this goes way back to when the laser approach (ICF) was introduced. At that time, the vast majority of fusion devices used magnets to hold the plasma (MCF) and, in many of those machines, to provide the heating as well. For instance, in the early ZETA machine, the fuel was heated by passing a current through it, which compressed and heated it directly, a very efficient process. In such cases, the difference between "all the energy in" and "the energy into the plasma" is not that large.

Well, it turns out those approaches didn't work. You simply can't compress a plasma the needed amount; it goes wonky on you, every time. If we can't compress it for heating, then we need some form of external heating. In modern machines, that's almost always some combination of radio-frequency heaters (basically fancy microwave-oven parts) and direct injection of hot fuel atoms using particle accelerators. Neither of those is super-efficient (the heaters are somewhere around 30 to 50% efficient), so suddenly the energy going into the plasma is no longer very close to the total energy going into the machine. And as all of this continued, we realized we were going to need superconducting magnets too, which changes things again: the magnets themselves no longer use much energy, but the cryogenic plant that keeps them cold certainly does!
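To put a rough number on that (the 40% figure here is just a representative value inside the 30-to-50% range above, not a measured one): delivering 10 MJ of heating to the plasma through a 40%-efficient heater costs

$$E_{\text{wall}} = \frac{E_{\text{plasma}}}{\eta_{\text{heater}}} = \frac{10\ \text{MJ}}{0.4} = 25\ \text{MJ},$$

so well over half of what the heaters draw never reaches the plasma at all.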

So now consider two reactors that both draw the same total energy from the grid. One uses efficient heaters and superconducting magnets and makes 10 MJ of fusion output. The other uses old heaters and copper magnets and makes 20 MJ of fusion. Which is "better"? Well, for most researchers, better means better plasma performance, so we don't want to have to consider all those other losses; what we want to know is "for a given amount of input energy, what amount of fusion do we get out?" And for that comparison, we ignore all the inefficiencies upstream, the magnets, the heaters, etc., and just measure the total amount of heat into and out of the plasma.

So we had this industry-wide definition of Q that was "energy into the plasma vs. energy out of the plasma".
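In symbols (the subscripts are illustrative labels, not official notation):

$$Q_{\text{plasma}} = \frac{E_{\text{fusion, out of the plasma}}}{E_{\text{heating, into the plasma}}},$$

so scientific breakeven is $Q_{\text{plasma}} = 1$, no matter how inefficiently the heating energy was generated upstream.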

When ICF came along it underwent a similar set of changes. Originally the calculations suggested that a 10 kJ laser would do the trick, and we would get 10 kJ or so back out. Yes, that ignores all the energy wasted in the lasers, but it sort of gets you the same thing as in the MCF case, the energy into the fuel vs the fusion out.

But when they actually tried that, it didn't work. One of the big reasons is that the IR preferentially accelerated electrons, which killed the implosion, so that's when they started adding the upconversion. Now you might say we should draw the line on the other side of the converter, that the 4 MJ is the "real" amount of laser, but consider what would happen if someone suddenly invented a direct-to-UV 2 MJ laser. It would provide exactly the same amount of input to the fuel and (we assume) the same output from the fuel, which is the real thing people are trying to measure. Sure, it might use half as much electricity, but that's not telling us anything about the physics we're actually interested in; lasers are so passé, after all.
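For scale, the conversion step alone is roughly a factor-of-two loss; taking the round numbers quoted above (this answer's figures, not an official efficiency spec),

$$\eta_{\text{IR}\to\text{UV}} \approx \frac{2.05\ \text{MJ of UV}}{4\ \text{MJ of IR}} \approx 0.5.$$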

So we are left with a measure that really only tells us something very specific, of interest only to certain people. And that is why terms like "engineering Q" and similar have come along.


Nature writes: "The facility used its set of 192 lasers to deliver 2.05 megajoules of energy onto a pea-sized gold cylinder containing a frozen pellet of the hydrogen isotopes deuterium and tritium. [...] The laboratory’s analysis suggests that the reaction released some 3.15 MJ of energy ... However, although the fusion reactions produced more than 3 MJ of energy — more than was delivered to the target — NIF’s lasers consumed 322 MJ of energy in the process."

So 2.05 megajoules is the estimated energy DELIVERED by the lasers into the gold cylinder.
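Putting Nature's three figures together gives the two very different "gains" under discussion (simple arithmetic on the quoted numbers):

$$Q_{\text{target}} = \frac{3.15\ \text{MJ}}{2.05\ \text{MJ}} \approx 1.5, \qquad Q_{\text{wall-plug}} = \frac{3.15\ \text{MJ}}{322\ \text{MJ}} \approx 0.01.$$

So the shot gained energy at the target, while the facility as a whole consumed roughly a hundred times more electricity than the fusion released.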


The lasers that excite the sample are calibrated and tested in advance; for many lasers it's possible to use a power meter, which is a thermal device. So the measurement does not occur during the experiment itself: they just know that each laser, when fired, delivers some known number of watts or joules.

So number of lasers x joules per laser = total energy in.
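As a sanity check on that arithmetic (the per-beam energy below is simply inferred by dividing the quoted totals, not a published per-beam measurement):

$$192\ \text{beams} \times \sim\!10.7\ \text{kJ/beam} \approx 2.05\ \text{MJ}.$$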

PhysicsDave