Article

Learnability can be undecidable

Journal

Nature Machine Intelligence
Volume 1, Issue 1, Pages 44-48

Publisher

Springer Nature
DOI: 10.1038/s42256-018-0002-3

Funding

  1. Simons Institute for the Theory of Computing
  2. Israel Science Foundation (ISF grant) [552/16, 1162/15]
  3. Len Blavatnik and the Blavatnik Family foundation
  4. NSF [CCF-1412958]

Abstract

The mathematical foundations of machine learning play a key role in the development of the field. They improve our understanding and provide tools for designing new learning paradigms. The advantages of mathematics, however, sometimes come with a cost. Gödel and Cohen showed, in a nutshell, that not everything is provable. Here we show that machine learning shares this fate. We describe simple scenarios where learnability can be neither proved nor refuted using the standard axioms of mathematics. Our proof is based on the fact that the continuum hypothesis can be neither proved nor refuted. We show that, in some cases, a solution to the 'estimating the maximum' problem is equivalent to the continuum hypothesis. The main idea is to prove an equivalence between learnability and compression.

Not all mathematical questions can be resolved, according to Gödel's famous incompleteness theorems. It turns out that machine learning can be vulnerable to undecidability too, as illustrated here with an example problem where learnability can be neither proved nor refuted.
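As a reading aid, here is a minimal sketch, in LaTeX, of the 'estimating the maximum' (EMX) requirement the abstract refers to; the domain $X$, set family $\mathcal{F}$, sample size $m$, accuracy $\epsilon$, confidence $\delta$ and learner $G$ are notation introduced here for illustration and are not spelled out on this page. Given an unknown distribution $P$ over $X$, a map $G : X^m \to \mathcal{F}$ is an $(\epsilon,\delta)$-EMX learner for $\mathcal{F}$ if, for every such $P$,

\[
  \Pr_{S \sim P^m}\!\left[ P\bigl(G(S)\bigr) \;\ge\; \sup_{F \in \mathcal{F}} P(F) - \epsilon \right] \;\ge\; 1 - \delta .
\]

In words, from a finite sample the learner must output a set in the family whose probability mass comes within $\epsilon$ of the best achievable within the family. The abstract's claim is that, for some simple choices of $\mathcal{F}$, the existence of such a learner is equivalent to the continuum hypothesis (via an equivalence between learnability and compression) and can therefore be neither proved nor refuted from the standard axioms.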

