Epistemic uncertainty is uncertainty that arises from a lack of knowledge; in machine learning, for instance, it can be caused by a lack of training data. Estimating epistemic uncertainty is important for building useful AI systems, since it allows a model to "know that it doesn't know" and thereby avoid hallucinations.
While estimating epistemic uncertainty has a clear interpretation for machine learning classifiers, for generative models tasked with text generation it is less clear how to evaluate uncertainty, since many different completions can be considered satisfactory. Still, it seems clear that a good epistemic uncertainty estimator should return a high value when a modestly capable model is asked, for example, to "Solve the Riemann hypothesis" (a famous unsolved problem in mathematics).
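To make the question concrete, here is a minimal sketch of the kind of sampling-based proxy I have in mind: sample several answers and measure how much they disagree. The answer strings and the exact-match normalization below are purely illustrative; I understand that real methods (e.g., semantic-entropy-style approaches) compare answers by meaning rather than by string equality.

```python
import math
from collections import Counter

def disagreement_entropy(sampled_answers: list[str]) -> float:
    """Crude epistemic-uncertainty proxy: entropy (in bits) of the
    empirical distribution over distinct sampled answers.
    High entropy = the samples disagree = the model likely doesn't know."""
    counts = Counter(a.strip().lower() for a in sampled_answers)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical samples for an easy factual question vs. an unsolved problem.
easy = ["Paris", "Paris", "paris", "Paris", "Paris"]
hard = ["Here is a proof...", "It follows from...", "Assume zeta(s)=0...",
        "The hypothesis holds because...", "Consider the contour..."]

print(disagreement_entropy(easy))  # ~0.0  -> low uncertainty
print(disagreement_entropy(hard))  # ~2.32 -> high uncertainty
```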
What are the leading methods for estimating epistemic uncertainty in large language models (LLMs)?