
Human-level performance via ML was *not* proven impossible with complexity theory [D]

Our take

In 2024, Van Rooij, Guest, Adolfi, Kolokolova, and Rich claimed to demonstrate that achieving human-level performance through machine learning is impossible, presenting the result as the "Ingenia Theorem" in *Computational Brain & Behavior*. A recent rebuttal, however, argues that the proof is fatally flawed, primarily because the authors never give a clear mathematical definition of "human-level classifier." That ambiguity undermines their argument and leads to misleading conclusions, in the tradition of earlier impossibility claims by thinkers like Penrose and Chomsky.

The claim by Van Rooij, Guest, Adolfi, Kolokolova, and Rich that Artificial General Intelligence (AGI) cannot be achieved through machine learning (ML) has sparked considerable discussion in the AI community. Their result, presented as the "Ingenia Theorem," purports to show that learning a human-level classifier from data is fundamentally unattainable. This is a bold claim, especially given rapid advances in AI that continually revise our sense of what is possible. The proof has since been challenged by another scholar, who argues that it is fundamentally flawed because it lacks a clear mathematical definition of a "human-level classifier." This raises important questions about the rigor of theoretical claims in AI and their implications for the field as a whole.

The critique of the Ingenia Theorem highlights a crucial requirement of AI research: clear definitions and robust methodology. The critic points out that the authors never define "human-level classifier" in a mathematically rigorous way, which undermines the validity of their conclusions. Instead of working with a well-defined construct, the proof shifts to a generalized statement quantified over "polytime-sampleable distributions," which leads to misleading implications: the same argument would cast doubt on learning problems that machine learning handles routinely. This shift obscures the original intent of the theorem and echoes earlier impossibility arguments from theorists such as Penrose and Chomsky. The stakes are significant, as clarity in foundational concepts is vital for the continued evolution of AI.
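In symbols, the critic's point can be sketched roughly as follows. The notation here is purely illustrative and is not the paper's own formalism:

```latex
% Illustrative notation only -- not the paper's formalism.
% The informal setup concerns one specific distribution,
\[
  \mathrm{Hard}\bigl(D_{\text{human}}\bigr),
\]
% but the formal proof establishes a statement quantified over
% every polytime-sampleable distribution,
\[
  \forall D \in \mathcal{D}_{\text{poly}} :\ \mathrm{Hard}(D),
\]
% which applies just as well to distributions, such as image--label
% pairs, that are learned successfully in practice.
```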


The debate surrounding the Ingenia Theorem serves as a reminder that while theoretical research is essential, it must be grounded in practical applicability and clear definitions. As the field of AI continues to advance, it is imperative that researchers and practitioners maintain a focus on clarity and rigor to ensure that innovations are built on solid foundations. This is particularly important as we navigate the complexities of human-level intelligence and its implications for society.

Looking ahead, this discourse prompts us to consider: How can we ensure that theoretical advancements in AI remain relevant and applicable in practical contexts? As the boundaries of what is possible continue to expand, it will be crucial for the community to engage critically with theoretical claims and to prioritize definitions that facilitate understanding and innovation. The evolution of AI will depend not just on technological advancements but also on our ability to articulate and comprehend the foundational principles that guide our exploration of intelligence.

Van Rooij, Guest, Adolfi, Kolokolova, and Rich claimed to have proven that AGI via ML is impossible, in *Computational Brain & Behavior* in 2024. The basic idea was to reduce a known NP-hard problem to the problem of learning a human-level classifier from data. The purported result, called the "Ingenia Theorem" by the authors, made some noise on the internet, including here.
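For readers unfamiliar with the proof strategy, this is the standard NP-hardness-by-reduction template (the general schema, not the authors' specific construction):

```latex
% General NP-hardness-by-reduction template (not the authors' exact construction).
% If a known NP-hard problem P reduces to Q in polynomial time,
% then Q is at least as hard as P:
\[
  P \le_{p} Q \ \wedge\ P \text{ is NP-hard} \ \Longrightarrow\ Q \text{ is NP-hard}.
\]
% The Ingenia Theorem takes Q to be "learning a human-level classifier
% from data"; the rebuttal disputes how that Q is formalized.
```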

My paper showing that the proof is irreparably broken is now also out in CBB (ungated preprint here).

The basic issue is that "human-level classifier" is not mathematically defined, which the authors solve by ... never defining it. They have a construct that corresponds to "distribution of human situation-behaviour tuples" when they introduce the problem, but the construct then gets swapped out for "for all polytime-sampleable distributions" when it comes time to do the formal proof. This means that the paper, if you find-and-replace "human situation-behaviour tuples" with "ImageNet inputs/labels", also proves that learning to classify ImageNet is intractable.
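The find-and-replace point can be made concrete with a toy sketch (hypothetical code illustrating the logical structure of the objection, not anything from either paper): because the formal statement only uses polytime-sampleability, its conclusion is insensitive to which distribution you plug in.

```python
# Hypothetical sketch of the "proves too much" objection.
# The formal claim never inspects what the distribution describes;
# it only uses the fact that the distribution is polytime-sampleable,
# so any such distribution yields the same verdict.

def ingenia_style_claim(distribution_name: str) -> str:
    # Stand-in for the theorem's conclusion, parameterized by the
    # distribution it is instantiated with.
    return (f"Learning a classifier for '{distribution_name}' "
            f"is intractable (NP-hard in the worst case).")

# The intended instantiation:
human = ingenia_style_claim("human situation-behaviour tuples")

# The find-and-replace instantiation from the rebuttal:
imagenet = ingenia_style_claim("ImageNet inputs/labels")

# Both claims have identical logical form, yet ImageNet classification
# is learned routinely in practice -- so the worst-case statement cannot
# support the headline conclusion about human-level performance.
print(human)
print(imagenet)
```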

Blogpost discussing similar attempts, from Penrose to Chomsky, here.

submitted by /u/mike_uoftdcs


Tagged with

#human-level classifier#AGI#machine learning