Where exactly is the 2.05 MJ measured?
It is the energy of the ultraviolet (UV) laser light entering the target chamber. The reason for this particular measure is primarily historical.
Note that LLNL has, on occasion, redefined the location of the measurement to make results look better. For instance, in 2013 they claimed scientific breakeven by redefining the input energy as "the energy deposited in the fuel", which removes several very significant losses further along the chain: the input becomes something like 10 kJ instead of the ~2 MJ of UV. So your question does need to be asked, every time.
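To make that concrete, here is a minimal back-of-the-envelope sketch in Python. The ~10 kJ and ~2 MJ boundaries are the rough figures quoted above; the fusion yield is a made-up illustrative number, not real shot data.

```python
# Illustrative only: how the same shot gives wildly different "gain" values
# depending on where the input energy is measured.

def gain(fusion_out_mj, input_mj):
    """Generic gain: fusion energy out divided by whichever input boundary you chose."""
    return fusion_out_mj / input_mj

fusion_yield_mj = 0.014          # hypothetical yield, purely for illustration
e_into_fuel_mj  = 0.010          # "energy deposited in the fuel" boundary (~10 kJ)
e_uv_chamber_mj = 2.0            # UV light entering the target chamber (~2 MJ)

print(gain(fusion_yield_mj, e_into_fuel_mj))   # ~1.4  -> "breakeven!"
print(gain(fusion_yield_mj, e_uv_chamber_mj))  # ~0.007 -> nowhere close
```

Same shot, same physics; only the accounting boundary moved.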
So, why this measure and not some other, like the 4 MJ of actual IR the laser produces? Well, this goes back to when the laser approach, inertial confinement fusion (ICF), was introduced. At that time, the vast majority of fusion devices used magnets to hold the plasma, magnetic confinement fusion (MCF), and in many of those machines to provide the heating as well. For instance, in the early ZETA machine, the heating of the fuel was accomplished by passing a current through the fuel, which compressed it and heated it directly, a very efficient process. In such cases, the difference between "all the energy in" and "the energy into the plasma" is small.
Well, it turns out those approaches didn't work. You simply can't compress a plasma the needed amount; it goes wonky on you, every time. If we can't compress it for heating, then we need some form of external heating. In modern machines, that's almost always some combination of radio-frequency heaters (basically fancy microwave-oven parts) and direct injection of hot fuel atoms using particle accelerators. Neither of those is super-efficient, somewhere around 30 to 50%, so suddenly the energy reaching the plasma is no longer very close to the total energy going into the machine. And as all of this continued, we realized that we were going to need superconducting magnets too, which changes things again: the magnets no longer use much energy, but the cryogenic plant that keeps them cold sure does!
So now consider two reactors that both draw the same total energy from the grid. One uses efficient heaters and superconducting magnets and makes 10 MJ of fusion output. The other uses old, lossy heaters and copper magnets and makes 20 MJ of fusion. Which is "better"? Well, for most researchers "better" means better plasma performance, so we don't want to have to consider all those other losses; what we want to know is "for a given amount of energy into the plasma, how much fusion do we get out?" And so for that comparison we ignore all the inefficiencies upstream, the magnets, the heaters, etc., and just measure the energy into and out of the plasma.
So we had this industry-wide definition of Q: the fusion energy coming out of the plasma divided by the heating energy going into the plasma.
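As a rough sketch of that convention, here is the two-reactor comparison in Python. All the numbers are invented for illustration; the point is only that wall-plug losses deliberately do not enter the calculation.

```python
# "Physics Q" convention: only the heating energy that actually reaches the
# plasma counts as input; losses in heaters, magnets, cryogenics are ignored.

def q_plasma(fusion_out_mj, heating_into_plasma_mj):
    return fusion_out_mj / heating_into_plasma_mj

# Two invented reactors, both drawing the same energy from the grid:
#   A: efficient heaters + superconducting magnets -> more energy reaches the plasma
#   B: lossy heaters + copper magnets              -> less energy reaches the plasma
heating_a_mj, fusion_a_mj = 4.0, 10.0
heating_b_mj, fusion_b_mj = 2.0, 20.0

print(q_plasma(fusion_a_mj, heating_a_mj))  # 2.5
print(q_plasma(fusion_b_mj, heating_b_mj))  # 10.0 -> better *plasma* performance
```

By this yardstick the "worse-engineered" machine is the more interesting one, which is exactly why the definition stops at the plasma boundary.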
When ICF came along, it underwent a similar set of changes. Originally the calculations suggested that a 10 kJ laser would do the trick, and that we would get 10 kJ or so back out. Yes, that ignores all the energy wasted in the lasers, but it gets you roughly the same thing as in the MCF case: the energy into the fuel versus the fusion energy out.
But when they actually tried that, it didn't work. One of the big reasons is that the IR light preferentially heated electrons, which preheated the fuel and killed the implosion, so that's when they started adding the frequency upconversion to UV. Now you might say we should draw the line on the other side of the converter, that the 4 MJ of IR is the "real" amount of laser energy. But consider what would happen if someone suddenly invented a direct-to-UV 2 MJ laser. It would provide exactly the same amount of input to the fuel, and (we assume) the same output from the fuel, which is the real thing people are trying to measure. Sure, it might use half as much electricity, but that tells us nothing about the physics we're actually interested in; lasers are so passé, after all.
So we are left with a measure that really only tells us something very specific, of interest mainly to plasma physicists. And that is why things like "Q engineering", which counts all the electricity the facility draws from the wall, and similar terms have come along.
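As a closing illustration, here are the different boundaries side by side in a short Python sketch. Only the 4 MJ of IR and ~2.05 MJ of UV come from the discussion above; the wall-plug energy and the fusion yield are placeholders, not measured values.

```python
# Where you can draw the "input" line for an ICF shot, and the Q you get
# for each choice. IR and UV figures are the ones quoted above; the
# wall-plug energy and fusion yield are placeholders for illustration.

wall_plug_mj = 300.0   # placeholder: electricity drawn to fire the laser
ir_out_mj    = 4.0     # IR light the laser actually produces
uv_out_mj    = 2.05    # UV light entering the target chamber (the official boundary)
yield_mj     = 3.0     # placeholder fusion yield

print("Q_target (vs UV):        ", yield_mj / uv_out_mj)    # the headline number
print("Q (vs IR, pre-conversion):", yield_mj / ir_out_mj)
print("Q_engineering (wall plug):", yield_mj / wall_plug_mj) # what a power plant cares about
```

Same shot again, three very different-sounding results, which is why it always matters to ask where the 2.05 MJ is measured.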