I'm working on a machine question-answering project using the SQuAD dataset. I've implemented a neural-network model that extracts answers from a provided context paragraph, but the system (unsurprisingly) struggles when a question is unanswerable from the context: it usually produces answers that are nonsensical or of the wrong entity type.
Is there any existing research on determining whether a question is answerable from the information in a context paragraph, or on validating whether a generated answer is correct? I considered textual entailment, but it doesn't seem to be exactly what I'm looking for (though maybe I'm wrong about that?).