In a recent survey (https://www.newscientist.com/article/2410839-theres-a-5-chance-of-ai-causing-humans-to-go-extinct-say-scientists/), a majority of AI researchers assigned a non-negligible probability to human extinction caused by the development of superhuman AI.
Can this probability be computed from a model, even a crude one?
Risk models are used elsewhere: from estimating the long-term return of an investment portfolio, given a distribution of yearly returns and an independence assumption, to studying the extinction of species in natural environments.
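As a minimal sketch of what such models look like, here is a Monte Carlo simulation of both examples: compounding i.i.d. yearly returns for a portfolio, and a constant independent per-year hazard for extinction. All parameter values (mean and volatility of log-returns, horizon, hazard rate) are illustrative assumptions, not estimates from the survey or any dataset.

```python
import math
import random

# All parameter values below are illustrative assumptions, not estimates.
MEAN_LOG_RETURN = 0.05   # assumed mean of yearly log-returns
STD_LOG_RETURN = 0.15    # assumed std. dev. of yearly log-returns
YEARS = 30               # horizon in years
TRIALS = 100_000         # Monte Carlo sample size

def final_wealth_multiple(years: int) -> float:
    """Compound one i.i.d. sequence of Gaussian yearly log-returns."""
    return math.exp(sum(random.gauss(MEAN_LOG_RETURN, STD_LOG_RETURN)
                        for _ in range(years)))

samples = sorted(final_wealth_multiple(YEARS) for _ in range(TRIALS))
print(f"median wealth multiple after {YEARS} years: {samples[TRIALS // 2]:.2f}")
print(f"P(loss after {YEARS} years): {sum(s < 1.0 for s in samples) / TRIALS:.3f}")

# Species-extinction analogy: with a constant, independent per-year
# extinction hazard p, survival over N years is (1 - p) ** N.
p_yearly = 0.001  # assumed per-year hazard (illustrative)
print(f"P(survive {YEARS} years at hazard {p_yearly}): "
      f"{(1 - p_yearly) ** YEARS:.4f}")
```

Both models share the same structure: a per-period probability distribution plus an independence assumption, compounded over a horizon. The question is whether anything analogous can be written down for AI risk.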