No.
The no free lunch (NFL) theorem is widely misunderstood and over-cited. It almost never bears on real-world problems, because its central assumption, that learning problems are sampled uniformly at random from the space of all possibilities, is precisely what real-world problems violate: the real world is not pure chaos, and real data is full of structure. To quote [1]:
> Such a world would be hostile to inductive reasoning. The assumption that labelings are drawn uniformly ensures that training data is uninformative about unseen samples.
>
> In contrast to this dismal outlook on machine learning, naturally occurring data involve structure that could be shared even across seemingly disparate problems. If we can design learning algorithms with inductive biases that are aligned with this structure, then we may hope to perform inference on a wide range of problems.
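For reference, a standard form of the theorem (Wolpert's supervised-learning version, in notation not taken from [1]) makes the uniformity assumption explicit: if the target labeling f is drawn uniformly over all functions on a finite input space, then the expected off-training-set (OTS) error given the training data d is the same for every learner.

```latex
% Wolpert-style NFL statement for supervised learning: summed (i.e. averaged)
% uniformly over all labelings f, the probability of OTS error given the
% training data d is identical for any two learning algorithms A_1 and A_2.
\[
  \sum_{f} \Pr\!\left(\text{OTS error} \mid f, d, A_1\right)
  \;=\;
  \sum_{f} \Pr\!\left(\text{OTS error} \mid f, d, A_2\right).
\]
```

The uniform sum over f is the "pure chaos" assumption above; once it is dropped, the theorem constrains nothing.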
That structure can be formalized with ideas from algorithmic information theory such as Kolmogorov complexity and algorithmic probability. Quoting [1] again:
> While virtually all uniformly sampled datasets have high complexity, real-world problems disproportionately generate low-complexity data, and we argue that neural network models share this same preference, formalized using Kolmogorov complexity.
>
> Notably, we show that architectures designed for a particular domain, such as computer vision, can compress datasets on a variety of seemingly unrelated domains.
>
> Our experiments show that pre-trained and even randomly initialized language models prefer to generate low-complexity sequences.
>
> Whereas no free lunch theorems seemingly indicate that individual problems require specialized learners, we explain how tasks that often require human intervention such as picking an appropriately sized model when labeled data is scarce or plentiful can be automated into a single learning algorithm.
>
> These observations justify the trend in deep learning of unifying seemingly disparate problems with an increasingly small set of machine learning models.
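Kolmogorov complexity itself is uncomputable, but compressed length gives a computable upper bound, and compression is the lens the quoted claims rely on. Here is a minimal, self-contained sketch of that idea (plain Python with zlib as the compressor; an illustration of the concept, not the paper's actual methodology): a structured labeling compresses to far fewer bits than one drawn uniformly at random.

```python
import random
import zlib

def complexity_upper_bound_bits(labels: list[int]) -> int:
    """Compressed size in bits: a crude, computable upper bound on the
    Kolmogorov complexity of a 0/1 label sequence."""
    return 8 * len(zlib.compress(bytes(labels), level=9))

n = 10_000
structured = [i % 2 for i in range(n)]               # highly regular, "real-world-like" labeling
random.seed(0)
uniform = [random.getrandbits(1) for _ in range(n)]  # labeling sampled uniformly at random

print(complexity_upper_bound_bits(structured))  # tiny: the alternating pattern compresses to a few hundred bits
print(complexity_upper_bound_bits(uniform))     # large: close to n bits, about one bit per random label
```

In the NFL setting almost every dataset looks like the second case; the argument in [1] is that real datasets, and the functions neural networks prefer, look like the first.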
References:
- [1] M. Goldblum, M. Finzi, K. Rowan, and A. G. Wilson. The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning. ICML 2024 (spotlight).