The $\gtrsim$ symbol is there because of pop-science simplification.
The precise statement of the principle is
$$\sigma_x \sigma_p \geq \frac{\hbar}{2}$$
where $\sigma_x$ is the standard deviation of the results we would get by a position measurement, and $\sigma_p$ is the standard deviation of the results we would get by a momentum measurement.
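As a quick sanity check of the precise statement, here is a sketch (not from the original answer) that computes $\sigma_x$ and $\sigma_p$ numerically for a Gaussian wavepacket, which is the minimum-uncertainty state, so the product should come out at exactly $\hbar/2$. Units with $\hbar = 1$ and the width parameter `a` are assumptions for illustration.

```python
import numpy as np

hbar = 1.0  # natural units (assumption for illustration)
a = 1.0     # Gaussian width parameter, so sigma_x = a

# Position grid and a normalized Gaussian wavefunction psi(x) ~ exp(-x^2 / 4a^2)
x = np.linspace(-20, 20, 4096)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * a**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Standard deviation of position, computed from the probability density |psi|^2
prob_x = np.abs(psi)**2
mean_x = np.sum(x * prob_x) * dx
sigma_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x) * dx)

# Momentum-space wavefunction via FFT; |phi|^2 is unaffected by the
# linear phase from the grid offset, so the Riemann sums below are valid
p = 2 * np.pi * hbar * np.fft.fftfreq(len(x), d=dx)
dp = 2 * np.pi * hbar / (len(x) * dx)
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi * hbar)
prob_p = np.abs(phi)**2

# Standard deviation of momentum from |phi|^2
mean_p = np.sum(p * prob_p) * dp
sigma_p = np.sqrt(np.sum((p - mean_p)**2 * prob_p) * dp)

print(sigma_x * sigma_p)  # should be very close to hbar/2 = 0.5
```

For a Gaussian this saturates the bound; any other wavefunction on the same grid would give a strictly larger product.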
However, in a popular-level presentation, the standard deviation would be too technical to introduce, and the technicalities wouldn't be relevant to the conceptual content anyway. So the standard deviations are replaced by "uncertainties" or "spreads" $\Delta x$ and $\Delta p$. The term "uncertainty" in these discussions is not a technical one, and it doesn't have a strict definition. Some authors take it to mean the range of the distribution, some take it to be half the range, and so on.
Since $\Delta x$ isn't usually well-defined in the first place, the best you can say is a statement like
$$\Delta x \Delta p \gtrsim \hbar$$
where $\gtrsim$ vaguely means "probably greater than, though possibly smaller by up to an order of magnitude or so, but no more". That's good enough because people rarely use the uncertainty principle quantitatively; even practicing physicists mostly invoke it for intuition and don't care about the constants.
Penrose's book straddles the line between pop science and a textbook, which is probably why he included the factor of $1/2$, even though "$\gtrsim$" inherently means you're not working to that accuracy in the first place.