Cook's Theorem* says that any problem in NP can be reduced, in polynomial time, to SAT (Boolean satisfiability).
*Cook, Stephen A. “The Complexity of Theorem-Proving Procedures.” In Proceedings of the Third Annual ACM Symposium on Theory of Computing, 151–58. STOC ’71. New York, NY, USA: Association for Computing Machinery, 1971.
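Concretely, the statement I have in mind is (in my own rough notation): for every language $A \in \mathsf{NP}$ there is a polynomial-time computable map $f$ such that

$$x \in A \iff f(x) \in \mathsf{SAT}.$$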
Is there an analogous theorem for machine learning (ML), stating that all problems in a particular class (e.g., NP) can be solved by ML / neural networks?
In physics, I'm reminded of how the equations of mechanics can be reformulated as a variational problem, i.e., an optimization problem over the action (the time integral of the Lagrangian), which plays the role of a sort of "loss function". Analogously, perhaps algorithms can be reformulated as machine learning problems (which are really just optimization problems, too); see the sketch below.
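To spell out the analogy I have in mind (this is just my own loose sketch, not a precise correspondence): in mechanics one extremizes the action

$$S[q] = \int_{t_0}^{t_1} L\big(q(t), \dot q(t), t\big)\, dt, \qquad \delta S = 0,$$

while in ML one minimizes an empirical loss over parameters,

$$\theta^{*} = \arg\min_{\theta} \frac{1}{N}\sum_{i=1}^{N} \ell\big(f_\theta(x_i), y_i\big),$$

so in both cases "solving the problem" amounts to finding a stationary or minimal point of some functional.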