A Study on AI - Part 1: The Duration of the AI Industry
Three "dark clouds" over AI
In my research on the AI industry this year, I am primarily focusing on three “dark clouds” over AI:
The Possibility of Technological Iteration: How long-lived a business is AI?
Is AI a Bubble? If so, how big could it get?
What Markets is AI Potentially Penetrating? How large is the AI market?
Today, I will first share my thoughts on the possibility of AI’s technological iteration, starting with a look at the history of AI.
AI’s Two Paradigm Shifts and Two Winters
From the 1950s to the present, the core ideology of AI technology has undergone at least two fundamental logical shifts, each accompanied by a harsh “AI Winter.”
First Wave: The Dawn and Winter of Connectionism (1943-1980)
Rise (1943-1969): The dream of AI originated with “Connectionism,” an approach that attempted to build intelligence by mimicking the human nervous system. From the proposal of a model of neurons in 1943, to the Dartmouth Workshop in 1956 which officially coined the term “Artificial Intelligence,” and then to the birth of the “Perceptron” in 1957 which could learn from data, the entire field was filled with optimistic enthusiasm.
Winter (1969-1980): However, in 1969, Marvin Minsky and Seymour Papert's book Perceptrons unsparingly pointed out the fundamental limitations of single-layer perceptrons. The book's critique, coupled with the failure of contemporary hardware and techniques to handle more complex problems, left the field's promises unfulfilled. Governments and corporations drastically cut funding, and AI entered its first winter. The early research programs and commercial ventures born from this ideal withered away.
Second Wave: The Rise and Winter of Symbolism (1980-2000s)
Rise (1980-1987): After the first winter, AI turned to another path: “Symbolism,” whose best-known embodiment was the Expert System. This approach encoded the knowledge of human experts into sets of rules and facts, enabling computers to reason over that logic. In the 1980s, companies like Lisp Machines and Thinking Machines emerged, and “Knowledge Engineer” became the hottest profession of the time.
Winter (1987-2000s): As Yann LeCun reminds us, history is strikingly similar. Although expert systems were powerful in specific domains, their knowledge bases were difficult to maintain, costly, and unable to handle new situations. In 1987, the collapse of the LISP machine market signaled the arrival of the second AI winter. Those once-dominant expert system companies subsequently became relics of their time.
This historical pattern is very clear: Each winter eliminated companies built on the old technological paradigm, and each spring gave rise to companies that had mastered the next-generation technology.
The Third Wave: The Golden Age of Deep Learning and the Transformer
From the ruins, Connectionism was reborn in a more powerful form. The backpropagation algorithm of 1986 and Yann LeCun’s Convolutional Neural Network (CNN) in 1989 made it possible to process complex data. Then, in 2017, the Transformer architecture, published in the paper Attention Is All You Need, completely ignited the AI revolution we are in today.
The commercial giants of this era, such as DeepMind, OpenAI, and SenseTime, all stand on the shoulders of deep learning and the Transformer.
As history brings us to this point, we must ask: Will this time be different?
Questions from the Giants: Is the Transformer the Endgame, or Just Another Cycle?
Despite the enormous success of the Transformer architecture, three of AI’s leading scholars—Yann LeCun, Rich Sutton, and Fei-Fei Li—have all independently pointed out the potential limitations of the current approach.
Yann LeCun:
“The difficult process that we see now of deploying AI systems is not new. It has happened at all times. This is why, and perhaps some of your listeners are too young to remember this, but there was a giant wave of interest in AI in the early 1980s, around expert systems. You know, the hottest job of the 1980s was going to be knowledge engineer, where your job would be to sit next to an expert and then distill the knowledge of the expert into rules and facts and then feed that into an inference engine that will be able to deduce new facts and answer questions and all that.
This is always a danger with AI. I mean, the signals are clear that LLMs, for all their fancy capabilities, still play an essential role, at least in information retrieval.”
Rich Sutton:
“To me, having a goal is the essence of intelligence. If something can achieve goals, it’s intelligent. I like John McCarthy’s definition, which is that intelligence is the computational part of the ability to achieve goals. You have to have goals, or you’re just a behavioral system. You’re not something special, you’re not intelligent.
It’s a very different thing to build a model of the physical world and to derive the consequences of a mathematical hypothesis or operation. The empirical world has to be acquired by learning. You have to learn the consequences. And mathematics is more computational, it’s more like standard planning. There, they can have the goal of finding a proof, and they are given, in a sense, the goal of finding a proof.
Why would we need a whole new architecture to start doing experiential, continual learning? Why can’t we just start doing it from a large language model?
In every case of the ‘bitter lesson,’ you could start with human knowledge and then do the scalable things. It’s always been the case. There was never any reason to say that had to be bad. But in fact, in practice, it has always turned out to be bad.”
Fei-Fei Li:
“I choose to view AI through the lens of visual intelligence because humans are deeply visual animals. We can talk more about this later, but a huge amount of our intelligence is built on visual perception, spatial understanding, not just language alone. I think they are complementary.
Today you take a model, have it watch a video of a couple of office rooms, and ask the model to count the number of chairs. This is something a preschooler, or maybe a young elementary school student, can do. But AI cannot. So there’s a lot of things AI cannot do today.
Spatial intelligence is deeper than just creating that flat 2D world. To me, spatial intelligence is the ability to create, to reason, to interact, to understand the deep spatial world, whether it’s 2D, 3D, or 4D, including dynamics and so on. So World Labs focuses on that.
We were just talking about 20 watts per brain, roughly. So from that point of view, it’s a small number, but it’s actually incredible, hundreds of millions of years of evolution that gave us these capabilities.”
Two Hypotheses for the Future: More Tokens or a New Paradigm?
Synthesizing the lessons of history and the perspectives of these giants, we can propose two distinct hypotheses for the future of AI:
Hypothesis 1: The Evolutionary Path — Requiring More Tokens via Real-World Data
This is a relatively optimistic view. Even if a truly thinking model emerges—one that incorporates vision, hearing, and touch to become a “Robotic AI” or “Physical AI”—the amount of data (Tokens) it would need to process to understand and operate in the complex physical world would grow exponentially. If this is the case, the current demand for GPUs and data centers will persist, and there will be no fundamental paradigm shift.
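To make the scale of that data growth concrete, here is a back-of-envelope sketch. All of the figures (words per minute, tokens per word, frames per second, patches per frame) are assumptions chosen only for illustration, not measurements from any real system; the point is simply that continuous sensory streams dwarf conversational text.

```python
# Illustrative arithmetic only -- every constant below is an assumption,
# not a measured property of any real model or robot.

def tokens_per_hour_text(words_per_minute=90, tokens_per_word=1.3):
    """Rough token rate for a text-only conversation (assumed rates)."""
    return words_per_minute * 60 * tokens_per_word

def tokens_per_hour_video(fps=30, patches_per_frame=256):
    """Rough token rate if each video frame is split into ViT-style
    patches and each patch becomes one token (a simplified assumption)."""
    return fps * 3600 * patches_per_frame

text = tokens_per_hour_text()    # 90 wpm * 60 min * 1.3 ~= 7,020 tokens/hour
video = tokens_per_hour_video()  # 30 fps * 3,600 s * 256 ~= 27.6M tokens/hour

print(f"text:  {text:,.0f} tokens/hour")
print(f"video: {video:,.0f} tokens/hour")
print(f"ratio: {video / text:,.0f}x")
```

Under these toy assumptions a single camera stream alone demands thousands of times more tokens per hour than a text chat, before adding audio, touch, or multiple cameras, which is the intuition behind Hypothesis 1's sustained demand for compute.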
Hypothesis 2: The Revolutionary Path — “Thinking” AI from a New Architecture
This is a more disruptive possibility. If Yann LeCun, Sutton, and Li are correct, then a truly “thinking” AI that possesses goals and understands the physical world may require a new, highly efficient underlying architecture. This new architecture might no longer be centered on “token prediction” but could operate more like a biological brain. In such a paradigm, our reliance on Tokens could be significantly reduced, or even replaced by other concepts.
This is the first dark cloud over AI: Is AI an industry for a decade, or for a century?
Looking back at the history of AI, we see a path of spiral ascent, filled with creative destruction. Each technological winter cleared the way for the next, more powerful technological explosion.
Today, we stand on the peak of the Transformer, enjoying an unprecedented view. But the pendulum of history and the foresight of these wise minds remind us that this summit may not be the end of the journey.
However, it is difficult to predict the likelihood of a fundamental shift in the underlying technology. At present, academic researchers appear to be building the next-generation AI models, while entrepreneurs are scaling the current generation. The hallmark of this generation's models is their massive consumption of Tokens, and thus their requirement for vast data centers.
Regardless of the path forward, it is conceivable that consumers will remain the biggest beneficiaries of this era of AI.
For more on the history of AI scientists, please visit the Roger’s Letter website.