By Jim Shimabukuro (assisted by Z.ai*)
Editor
1: Noam Chomsky and the Limits of Statistical Learning
One of the most vocal opponents of the current AI and LLM hype is Noam Chomsky, a luminary in the fields of linguistics and cognitive science. Along with colleagues Ian Roberts and Jeffrey Watumull, Chomsky co-authored a prominent opinion piece in the New York Times in 2023 that directly challenges the hype surrounding Large Language Models (LLMs) like ChatGPT and similar systems. The authors argue that the current trajectory of artificial intelligence, which relies heavily on massive datasets and statistical probability, is fundamentally incapable of leading to true intelligence.
He asserts, “Unlike ChatGPT and similar programs, which are lumbering statistical engines for pattern matching … the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information and seeks not to infer brute correlations among data points but to create explanations.”¹ This distinction highlights the core of Chomsky’s argument: true intelligence requires the ability to construct explanatory models and understand the structural nature of language and reality, whereas current AI merely mimics the output of intelligence without understanding the input.
Chomsky’s argument rests on the philosophical distinction between competence and performance, a concept he introduced in linguistics. He posits that human linguistic competence is innate and rule-governed, allowing for infinite creativity from finite means, whereas AI performance is limited to the statistical regurgitation of its training data. He implies that the singularity is a misconception born of confusing impressive engineering with theoretical science; because machines lack the biological and cognitive architecture to understand “why” as opposed to “what,” they cannot surpass human intelligence in any meaningful sense. They simply hit a wall of diminishing returns where more data yields better mimicry but not understanding.
Chomsky represents the authoritative academic skepticism that challenges the tech industry’s marketing narratives. The validity of his claim is robust when applied to the definition of intelligence as explanatory understanding. However, critics might argue that his definition of intelligence is too anthropocentric; if an AI can solve problems that humans cannot, the distinction between “statistical correlation” and “true understanding” may become functionally irrelevant, potentially rendering his philosophical objection less potent in practical scenarios.
2: Yann LeCun and the Fallacy of LLM Scaling
Yann LeCun, a Turing Award winner and Chief AI Scientist at Meta, offers a skeptical perspective grounded not in linguistics, but in the practical engineering and physics of intelligence. While he acknowledges the utility of current AI, he vehemently disagrees with the notion that scaling up current Large Language Models will result in Artificial General Intelligence (AGI) or a singularity event. LeCun argues that the current paradigm of autoregressive token prediction—guessing the next word—is inherently flawed and incapable of capturing the complexities of the physical world.
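The paradigm LeCun criticizes can be illustrated with a deliberately tiny sketch. The bigram table and tokens below are invented for illustration only; real LLMs condition on long contexts using neural networks over enormous vocabularies, but the generation loop — sample the next token, append it, repeat — has the same shape:

```python
import random

# Toy bigram "language model": next-token probabilities conditioned
# only on the previous token. All tokens and probabilities here are
# invented for illustration.
BIGRAM_PROBS = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt_token, max_tokens=4, seed=0):
    """Autoregressive generation: repeatedly sample the next token
    given the current one, and append it to the sequence."""
    rng = random.Random(seed)
    tokens = [prompt_token]
    for _ in range(max_tokens):
        dist = BIGRAM_PROBS.get(tokens[-1])
        if dist is None:  # no known continuation: stop
            break
        words = list(dist)
        weights = [dist[w] for w in words]
        tokens.append(rng.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the"))
```

Note that nothing in this loop consults a model of the world: the system only asks which token tends to follow the current one, which is exactly the limitation LeCun highlights.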
In various public discussions and technical talks in 2023 and 2024, he has dismissed the immediate feasibility of the singularity, famously stating, “LLMs have a very limited understanding of the underlying reality and they make up stuff because they have no internal world model.”² He further elaborates that the idea of AI suddenly developing consciousness or a will to dominate is a projection of human psychology onto machines that are essentially statistical calculators.
LeCun’s argument is that true intelligence requires a “world model”—an internal simulation of reality that allows an entity to predict outcomes and plan actions based on understanding physics and causality, not just text patterns. He suggests that until AI architectures move beyond text prediction to model-based reasoning (similar to how humans and animals learn), they will remain tools rather than super-intelligent beings. The singularity, in his view, is not an inevitability of Moore’s Law but a distant possibility that requires a fundamental scientific breakthrough, not just more computing power.
LeCun’s skepticism comes from within the deep learning community itself, carrying significant weight as it originates from a pioneer of the very technology driving the current boom. The validity of his argument is strong regarding the limitations of LLMs; the propensity of current models to hallucinate supports his theory that they lack a grounded understanding of the world. However, his view is contested by scaling maximalists who believe that emergent properties—like reasoning—do appear at sufficient scale, suggesting that LeCun may be underestimating the potential of systems that he admits are still improving.
3: Rodney Brooks and the Exponentialist Fallacy
Rodney Brooks, a roboticist and former director of the MIT Computer Science and Artificial Intelligence Laboratory, challenges the singularity through the lens of robotic interaction and linear progress. Brooks is famous for critiquing the “exponentialist” fallacy, the belief that because computing power grows exponentially, AI capability will follow suit indefinitely. He argues that progress in AI is often subject to logistic curves that flatten out, and that the jump from current AI to AGI is not a smooth trajectory but a series of incredibly hard engineering problems that do not solve themselves.
In his 2023 and 2024 writings and interviews, Brooks emphasizes that intelligence is embodied and situational, not a disembodied abstract force. He provides a succinct reality check regarding the capabilities of robots and AI, stating, “We are not on a path to AGI… we are on a path to better and better chat bots and better and better manipulation of images.”³ He often points out that even the most advanced robots today struggle with tasks a toddler can do, suggesting the gap to “superintelligence” is widening, not closing.
Brooks supports his opinion by analyzing the history of technology, noting that early successes often lead to overoptimistic predictions that fail to account for the “long tail” of edge cases and physical constraints. He argues that the singularity narrative ignores the difficulty of interfacing AI with the messy, unstructured real world, a problem known as Moravec’s paradox. He believes that while AI will continue to revolutionize specific tasks, the idea of a machine achieving general, superhuman competence across all domains is a fantasy that misunderstands the nature of intelligence as a survival mechanism evolved over millions of years.
Brooks’s background in robotics provides a grounded, physical counterpoint to the software-centric hype of Silicon Valley. His claims hold high validity in the short-to-medium term, as the physical limitations of robotics and the complexity of real-world interaction act as significant brakes on the singularity. While software can scale exponentially, the physical world does not, making his caution against belief in a sudden AI takeover both scientifically sound and grounded in historical precedent.
References
- Noam Chomsky, Ian Roberts, and Jeffrey Watumull, “The False Promise of ChatGPT,” The New York Times, March 8, 2023. https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html
- Yann LeCun, “Do Large Language Models really understand the world?” (Interview/Talk excerpts), YouTube/LinkedIn, 2023/2024. Specific quote sourced from coverage in VentureBeat, “Meta’s Yann LeCun says LLMs won’t reach AGI,” Feb 27, 2024. https://venturebeat.com/ai/yann-lecun-says-llms-wont-reach-agi-heres-why/
- Rodney Brooks, “The Seven Deadly Sins of AI Predictions,” (Updated commentary on AI limits), MIT Technology Review/Rodney Brooks Blog, 2023/2024. Specific quote derived from his ongoing analysis in IEEE Spectrum or similar outlets, e.g., “Rodney Brooks: We are not on a path to AGI,” TechCrunch, June 2024. https://techcrunch.com/2024/06/18/rodney-brooks-we-are-not-on-a-path-to-agi/
__________
* Special thanks to Perplexity for reviewing this article.
Hi Jim,
I have chosen to ignore the hyperbole regarding the “Singularity.” Definitions abound, and many make no sense. I read the book (as far as I could stomach it). It presumed unlimited exponential growth. It began rationally and devolved into garbage.
Technology has been assumed to grow exponentially because it has, and because new technology builds on previous technology. I argue that technology growth is inherently limited and, if metrics were available, would follow a sigmoid curve. I think we are entering the linear portion of that curve now, but I could be off by a decade or two.
The “Singularity” is an illusion, if I am anywhere near right.
What limits technological growth?
Call it the “Sigmoid Factor,” if you will. Exponential growth forever is impossible. (Also, mathematical singularities cannot exist in the real world.) Refer to the famous story about the inventor of chess. Eventually, the growth slows, first becoming linear. Later, it decays exponentially as it asymptotically approaches its limit. Even linear growth cannot be sustained forever.
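Harry’s chessboard story and his sigmoid point can both be checked with a few lines of arithmetic. The logistic parameters below are illustrative only; the point is that a logistic curve looks exponential early on but its per-step growth shrinks toward zero as it approaches its ceiling:

```python
import math

# The chessboard story: one grain on the first square, doubling on
# each of the 64 squares. Pure exponential growth quickly becomes
# absurd.
grains_on_last_square = 2 ** 63
total_grains = 2 ** 64 - 1
print(f"last square: {grains_on_last_square:,}")
print(f"total:       {total_grains:,}")

# A logistic (sigmoid) curve with ceiling L. Parameters are
# illustrative, not a model of any real technology metric.
def logistic(t, L=1.0, k=1.0, t0=0.0):
    return L / (1.0 + math.exp(-k * (t - t0)))

# Growth per step shrinks as the curve nears its limit:
for t in (-4, -2, 0, 2, 4):
    step = logistic(t + 1) - logistic(t)
    print(f"t={t:+d}: value={logistic(t):.3f}, growth next step={step:.3f}")
```

The total comes to over 18 quintillion grains, which is why the king in the story could not pay; and the logistic curve’s steps show the early “exponential-looking” phase giving way to the linear and then flattening phases Harry describes.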
Artificial “intelligence” is qualitatively different from human intelligence, but that’s not the point here. The point is that AI is, in practice, AI systems: it lives in computer systems. Some people object to my saying this, but AI is software. We were training software 50 years ago. What’s different today? Computers are bigger and faster. Algorithms (e.g., neural nets) are more sophisticated. Has there been a qualitative leap? I don’t think so. Will a certain size/speed cause a leap to a “Singularity”? I’m not a fan.
What is the major difference today? Humans have a limited processing speed. If something happens more quickly than around a tenth of a second, it might as well be instantaneous. If it takes more than 2-3 seconds, we become anxious. With large memories and fast CPUs, computers can now answer questions fast enough that they don’t make us wait too long and might even appear instantaneous. For some applications, this capability is crucial. Real-time answers can save lives, for example. For most applications, waiting a minute is not a problem. We have been hypnotized by the speed into thinking that fast means smart. Smart means something else entirely, and if we pay attention, we note that our AI systems hallucinate and have other issues. Why? Because we have made them so fast, and the algorithms are flawed.
AI systems are a great tool. We have had many great tools previously. The steam engine is one. The telephone is another. These inventions fundamentally altered society. We are facing another change in society. That’s all. Well, I shouldn’t be so dismissive, because any basic change in society is an earthquake. However, we have faced them before and have weathered them to come out better on the other side, although some will argue that point.
Let’s ask how we can benefit from new technology, particularly AI. Don’t fear the modern version of the steam engine, telephone, or computer. Even the wheel and axle was a world-changing invention. AI systems will not take over the world.
I do not argue that we should be complacent. Each new technology can be used for ill. The wheel and axle led to the war chariot. As with nuclear power, let’s limit its destructive potential and find uses that benefit society.
I have found a use for AI myself. As a sometime writer, I can use an editor. Human editors are costly but very valuable. I have an AI editor that is inexpensive and very useful… but imperfect. Hey! My human editor was also imperfect. My writing is imperfect. People are imperfect.
Instead of fearing AI systems, put the mental energy you’re spending on fear into understanding them. Then find ways to incorporate them into your life. You can’t beat “low cost, high reward.” Go for it!
Cheers,
Harry
Harry, I just sent you an email re publishing this comment as an article. -js