This is exactly why we do statistical hypothesis testing. In essence, the confidence interval we get from such a test is a quantitative measure of "how far we've converged". For example, consider an experiment to test whether a coin is biased. Our null hypothesis is that it is not; in symbols, ${\rm Pr}(H) = \frac{1}{2}$: the probability of a head is one half.
Now we work out, from the binomial distribution, the limits on the number of heads you will see in an experiment with $N$ tosses given the null hypothesis, and check whether the observed number falls within them. The interval wherein the observed number of heads falls with a probability of, say, 0.999 is then calculated: for small $N$, you compute this by brute force from the binomial distribution. As $N$ gets bigger, Stirling's approximation to the factorial shows that the binomial distribution approaches a normal distribution (the de Moivre–Laplace theorem) whose standard deviation under the null hypothesis is $\frac{1}{2}\sqrt{N}$. Your 0.999 confidence interval therefore has a width proportional to $\sqrt{N}$, so, as a proportion of $N$, it shrinks like $1/\sqrt{N}$ as $N\to\infty$, and these calculations are exactly what you use to see how fast it does so.
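As a concrete sketch of this calculation (Python, standard library only; the helper names and the $3.2905$ normal quantile constant are my own illustrative choices, not anything from the argument above):

```python
import math

def binom_pmf(k, n, p=0.5):
    """Binomial pmf, computed in log space so large N doesn't overflow."""
    if k < 0 or k > n:
        return 0.0
    log_pmf = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
               + k * math.log(p) + (n - k) * math.log1p(-p))
    return math.exp(log_pmf)

def acceptance_interval(n, coverage=0.999, p=0.5):
    """Brute-force central interval: grow outward from the most likely head
    count, always adding the more probable neighbour, until the accumulated
    binomial probability reaches the target coverage."""
    lo = hi = round(n * p)
    total = binom_pmf(lo, n, p)
    while total < coverage:
        left, right = binom_pmf(lo - 1, n, p), binom_pmf(hi + 1, n, p)
        if left == 0.0 and right == 0.0:
            break  # whole support consumed (cannot happen for coverage < 1)
        if left >= right:
            lo, total = lo - 1, total + left
        else:
            hi, total = hi + 1, total + right
    return lo, hi

Z = 3.2905  # two-sided 99.9% quantile of the standard normal

for n in (100, 1_000, 10_000, 100_000):
    lo, hi = acceptance_interval(n)
    approx_width = 2 * Z * math.sqrt(n * 0.25)  # normal-approximation width
    print(f"N={n:>6}: accept {lo}..{hi} heads; "
          f"width/N = {(hi - lo) / n:.4f} (normal approx {approx_width / n:.4f})")
```

Running this shows the interval, as a proportion of $N$, falling from roughly 0.33 at $N=100$ to roughly 0.01 at $N=100{,}000$, in close agreement with the normal approximation and the $1/\sqrt{N}$ scaling.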
I like to call the law of large numbers the "law of pointier and pointier distributions", because this aspect of the convergence shows us why a weak form of the second law of thermodynamics is true, as I discuss in the linked answer: the law of large numbers says that, in the large-number limit, almost all samples look almost exactly like the maximum-likelihood sample, and almost nothing else occurs. In different words: almost all microstates look almost exactly like the maximum-entropy macrostate. Therefore a system will almost certainly be found near its maximum-entropy macrostate, and, if by chance it is found in one of the rare, significantly-lower-entropy states, it will almost certainly progress towards the maximum-entropy macrostate, just from a "random walk" through its microstates.
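To make that last point concrete, here is a minimal toy simulation, assuming the Ehrenfest urn model as a stand-in "system" (the choice of model and all names in the code are my own illustration, not something from the argument above): $N$ balls sit in two urns, and at each step a uniformly chosen ball hops to the other urn. Starting from the low-entropy state with every ball on the left, the occupancy random-walks toward the 50/50 maximum-entropy macrostate and then stays near it.

```python
import random

def ehrenfest(n_balls=1000, n_steps=5000, seed=0):
    """Ehrenfest urn: each step, a uniformly chosen ball hops to the other
    urn.  Returns the trace of the left-urn occupancy over time."""
    rng = random.Random(seed)
    left = n_balls          # deliberately low-entropy initial condition
    trace = [left]
    for _ in range(n_steps):
        # the chosen ball is in the left urn with probability left/n_balls,
        # so the walk is biased back toward the most numerous macrostate
        if rng.random() < left / n_balls:
            left -= 1
        else:
            left += 1
        trace.append(left)
    return trace

trace = ehrenfest()
for t in (0, 500, 1000, 2000, 5000):
    print(f"step {t:>4}: {trace[t]} of 1000 balls in the left urn")
```

The drift toward half-and-half is not imposed anywhere in the dynamics; it emerges purely because there are vastly more microstates near the 50/50 macrostate, which is exactly the "pointier and pointier distributions" picture.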