A typical voltmeter contains an internal Ohmic resistor with a known and very high resistance $R$ (called the "input resistance" or "input impedance"), and an extremely sensitive ammeter that measures the current through that resistor. When the voltmeter is connected in parallel across some circuit element, the internal resistance is ideally so much higher than that of the element being measured that the voltmeter acts as an almost perfect insulator, and the vast majority of the current (if there is any) flows through the circuit element rather than through the meter. The voltmeter measures the tiny current $I$ that does pass through its internal resistor and displays the voltage $V = IR$. Aside from calibration error in $R$ and direct measurement error in the current, the voltmeter's displayed value becomes incorrect when the voltage to be measured is so high that the internal resistor's response ceases to be Ohmic.
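As a rough sketch of that readout logic (all numbers below are illustrative, not specs of any real instrument), here is how a calibration error in $R$ propagates directly into the displayed value:

```python
# Sketch: an idealized voltmeter measures the current I through its internal
# resistor and displays V = I * R_nominal.  If the resistor's true value has
# drifted from its calibrated value, the displayed voltage inherits the same
# fractional error.  All numbers are illustrative, not real instrument specs.

R_NOMINAL = 10e6    # ohms, the calibrated value the meter multiplies by
R_ACTUAL = 10.2e6   # ohms, hypothetical true value after drift

true_voltage = 5.0                     # volts actually across the meter's terminals
current = true_voltage / R_ACTUAL      # current the internal ammeter really sees
displayed = current * R_NOMINAL        # what the meter shows

print(displayed)                                   # ~4.90 V
print((displayed - true_voltage) / true_voltage)   # ~ -2%, same as the error in R
```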
There's a nontrivial trade-off in how high to make the voltmeter's input resistance: the higher it is, the less current passes through the voltmeter, and a smaller current is harder to measure precisely. On the other hand, you usually don't want an appreciable fraction of the total current to pass through the voltmeter, or else merely connecting the voltmeter to the circuit will change the circuit's behavior (it "loads" the circuit), which is usually undesirable.
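To make the trade-off concrete, here is a rough numerical sketch with a made-up voltage divider (the resistor and input-resistance values are purely illustrative): the larger the input resistance, the closer the reading is to the undisturbed voltage, but the smaller the current left for the internal ammeter to resolve.

```python
# Rough sketch of the trade-off: measure the voltage across R2 in a simple
# two-resistor divider.  A voltmeter with input resistance r_in sits in
# parallel with R2, so both the voltage it sees and the current it gets to
# measure depend on r_in.  All values are illustrative.

V_SOURCE = 10.0     # volts
R1, R2 = 1e3, 1e3   # ohms; the unloaded voltage across R2 is 5 V

def loaded_reading(r_in):
    r_parallel = R2 * r_in / (R2 + r_in)             # R2 in parallel with the meter
    v_measured = V_SOURCE * r_parallel / (R1 + r_parallel)
    i_meter = v_measured / r_in                       # current through the meter
    return v_measured, i_meter

for r_in in (1e4, 1e6, 1e8):
    v, i = loaded_reading(r_in)
    print(f"r_in = {r_in:.0e} ohm: reads {v:.5f} V, meter current {i:.2e} A")
# Higher r_in -> reading closer to the unloaded 5 V, but a smaller current
# for the internal ammeter to measure precisely.
```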
As you suggested in your question, it is impossible to measure $\Phi$ in general, even in principle, because it is a gauge-dependent quantity. A real voltmeter typically consists of two wires (good conductors) that connect to the circuit element, with the actual measuring apparatus in between them. For simplicity, let's idealize it to a single wire with uniform and known resistivity $\rho$, which can measure the current density $\vec{J}$ passing through itself at every point. The local form of Ohm's law then gives the electric field inside the wire as $\vec{E} = \rho \vec{J}$, and so the displayed "voltage" physically corresponds to $\int_a^b \vec{E} \cdot d\vec{l}$ along the wire, as you suggested. By Faraday's law, any time-dependent magnetic field induces an electric field with nonzero curl, which makes this line integral depend on the path taken between the endpoints, so the reading the voltmeter displays will change as the wires connecting it to the circuit are moved through the changing magnetic field. So if there are magnetic fields nearby that are changing appreciably over time, then the readout on the voltmeter will depend on the exact position of the wires connecting it to the circuit, and in general this quantity is not very useful.
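As a numerical illustration of that path dependence (an idealized setup of my own choosing, with made-up numbers): inside a long solenoid whose uniform field $B(t)$ grows at a constant rate, the induced electric field is azimuthal, and the line integral of $\vec{E}$ between the same two endpoints differs depending on whether the "wire" follows the upper or the lower semicircle; the difference is exactly $-d\Phi_B/dt$ through the enclosed disk.

```python
# Numerical check that the line integral of E between two fixed points is
# path-dependent when B changes in time.  Idealized setup: inside a long
# solenoid with uniform B(t) along z and dB/dt = K, the induced field is
# azimuthal, E_phi = -(r/2) * K.  Integrate E . dl from (r0, 0) to (-r0, 0)
# along the upper and the lower semicircle of radius r0.
import numpy as np

K = 2.0     # dB/dt in tesla per second (illustrative)
r0 = 0.1    # metres, radius of the semicircular "wire" paths

def line_integral(phi_start, phi_end, n=10_000):
    """Integrate E . dl along the arc r = r0 from phi_start to phi_end."""
    phi = np.linspace(phi_start, phi_end, n)
    E_phi = -0.5 * r0 * K                  # induced azimuthal field at r = r0
    Ex = -E_phi * np.sin(phi)              # phi_hat = (-sin phi, cos phi)
    Ey = E_phi * np.cos(phi)
    dx_dphi = -r0 * np.sin(phi)            # tangent vector dr/dphi
    dy_dphi = r0 * np.cos(phi)
    integrand = Ex * dx_dphi + Ey * dy_dphi
    # trapezoid rule by hand
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(phi))

upper = line_integral(0.0, np.pi)           # upper semicircle, (r0,0) -> (-r0,0)
lower = line_integral(0.0, -np.pi)          # lower semicircle, same endpoints
print(upper, lower)                          # equal magnitude, opposite sign
print(upper - lower, -K * np.pi * r0**2)     # difference = -dPhi/dt through the disk
```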
(In a more realistic model of a voltmeter, where we take into account that the internal resistor's resistance is much, much higher than that of the external wires, we find that $\vec{E}$ is typically negligible inside the wires and roughly constant across the internal resistor, so the voltmeter's reading basically just equals $\vec{E} \cdot \vec{l}$, where $\vec{l}$ is the oriented length of the internal resistor and $\vec{E}$ is the electric field at its location. In fact, most local electric field sensors have the same basic design as voltmeters: an extremely sensitive ammeter measuring the current through an Ohmic resistor of known length and resistance. But in this case you want the input resistance to be really high (typically about $10^4$ times that of a voltmeter) in order to prevent any current from flowing through it, as the resulting charge buildup would partially cancel the external field.)
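Here is a small sketch of why the reading collapses to $\vec{E}\cdot\vec{l}$ across the internal resistor (the lengths and resistances are made up): the same current flows through the leads and the resistor in series, so $\vec{E} = \rho\vec{J}$ is appreciable only where the resistance per unit length is large, and the leads contribute almost nothing to $\int \vec{E}\cdot d\vec{l}$.

```python
# Sketch: the same current I flows through the leads and the internal resistor
# in series, so the field magnitude in each segment is I * (resistance / length),
# and essentially all of the integral of E . dl comes from the resistor.
# All numbers are illustrative.

I = 1e-7   # amps through the meter

segments = [
    # (label, length in metres, total resistance of the segment in ohms)
    ("lead 1", 0.5, 0.01),
    ("internal resistor", 0.01, 10e6),
    ("lead 2", 0.5, 0.01),
]

total = 0.0
for label, length, resistance in segments:
    E_field = I * resistance / length     # uniform E inside the segment
    contribution = E_field * length       # this segment's share of int E . dl
    total += contribution
    print(f"{label:18s} E = {E_field:9.3e} V/m, contributes {contribution:.3e} V")

print(f"total reading: {total:.6f} V")    # ~1 V, essentially all from the resistor
```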