Chen Tianqiao has re-entered the business frontline with a stark warning: "If AI is controlled by a handful of giants, what happens if they suddenly cut off API access? In the future, losing access to intelligence could be more devastating than losing water or power."
Chen Tianqiao's return to the forefront of business occurred without any conspicuous signals. There was no press conference, no financing announcement, and no public declaration of a "new venture." Few realized that this entrepreneur, who had been away from the center of the business stage for many years, was quietly making a comeback, until his name began circulating frequently within Silicon Valley's R&D circles.
Having known Chen for over a decade, one thing is clear: he is not someone swayed by the prevailing sentiments of the era. On the contrary, he maintains an instinctive distance from overly certain and unanimous consensus.
Twenty-six years ago, after founding Shanda, Chen became, at age 31, the youngest person ever to top China's rich list, a record that remains unbroken. However, he dislikes using wealth as a measure of an entrepreneur's value, viewing it as a constraint. He believes the standards for entrepreneurial success should extend far beyond conventional wealth metrics.
Consequently, his decision to return at the peak of the AI hype cycle, a time rife with labels, prompts a deeper question: what is he truly responding to?
Following CES 2026, I met with him and his wife at their Silicon Valley home. The low-profile entrepreneurial couple had just finished discussing significant technical progress and next steps with their large model team, MiroMind. The team was celebrating a "small achievement": the release of MiroThinker 1.5, which had consecutively topped the FutureX global rankings and, with 30 billion parameters, entered the top tier of global search intelligence.
Chen told his team, "I only need idealists. If you are not an idealist, if you don't agree with our philosophy, it's better to leave early. I want missionaries, not mercenaries." He views idealism as a neutral term, representing a type of person rather than guaranteeing success or failure, acknowledging that idealists often face high failure rates. "But I only want to do this long-term endeavor with like-minded people, that's all."
A review of Chen's AI moves up to December 2025 reveals direct investments in over 100 AI startups spanning computing power, data, reasoning, and applications. He also established a $1 billion "Discovery Intelligence" PI incubator. His various teams, focusing on key capabilities of "discovery intelligence," have launched companies like MiroMind for prediction and reasoning models, EverMind for open-source memory systems, and Mio, a digital human framework. These have frequently topped global industry rankings. He also founded the Apex Intelligence Lab, focusing on brain-like large model research, and has already launched AI-native applications like Tanka and Theta.
He has also begun writing influential columns with forward-thinking articles like "The Twilight of Management and the Dawn of Intelligence" and "From AI Empowerment to AI Native." With his wife, he established the Great Mirror Studio, producing popular-science AIGC videos that have garnered nearly 300 million views and over 3 million global fans. He is not just recruiting missionaries; he himself is spreading his philosophy globally like one.
When asked about his advantage upon returning to the corporate world, he offered eleven simple words: "I have patience, I can correct mistakes, and I have money." Asked which company he most admired, he thought for a moment and said Alphabet, because it best combines idealism and realism. He has bought Alphabet stock, believing "the world has undervalued Alphabet over the past 20 years; it's been too cheap."
He also makes some AI-related public market investments, but his perspective is unique. He once shared investment insights, revealing that the only non-AI stock he purchased was Moderna. Initially, I thought it was optimism about AI-driven biotech revolution. His reasoning, however, was different: nothing can stop the AI revolution—not politics, policy, or even war, which might accelerate it. Only a global pandemic could potentially slow AI's progress. Thus, investing in Moderna was a hedge against his all-in AI bet. Notably, Moderna's stock surged two weeks after his purchase, doubling his investment.
Chen's realism lies in accurately identifying opportunities within future global changes, while his idealism is about undertaking a mission within those changes. Recently, he posted on LinkedIn, stating, "This is destined to be a lonely path against the tide, but in the hearts of me and Qianqian, it seems like a mission entrusted by fate... What truly excites us has transcended the technology itself. If we can teach silicon chips to think like a brain and transmit signals like neurons, does that mean we have finally found a key to open the door to human self-understanding? Even if we only manage to push the door ajar, even if we are just stepping stones in this long exploration, providing a new perspective for understanding life would be our greatest honor in this lifetime."
After two recent in-depth conversations with Chen, each lasting over two hours, I have compiled the essence of our dialogue, marking the first time he has systematically explained the original aspirations, resolve, and ideals behind rebuilding his AI business empire.
**Q: What prompted your decision to found the large model company MiroMind? Was there a direct catalyst?** **A:** The direct catalyst was meeting Liang Wenfeng in 2024. At that time, DeepSeek wasn't yet popular. We talked for four hours, covering everything from AI to life, from great power games to the microcosm of life. I asked him why he switched from quant to AI, and he said, 'Curiosity.' This resonated with my own experience—I moved from games to neuroscience also because of curiosity. I asked if I could invest in him. He shook his head and said, 'You don't need to invest. Everything I do will be open-source; you can use it all. But if you invest, you won't know when it will make money.' I understood him. It's like my ten years in neuroscience—I wasn't keen on taking donations either. Curiosity is a very personal matter; we are willing to share results but prefer to explore alone. So I smiled and said, 'Then I'll just create my own.'
**Q: Building the MiroMind team seemed rapid, especially amid fierce competition for top AI talent from major tech companies. How did you recruit?** **A:** In this era of logical inflation, talent is the only variable for AI success. They were willing to give up stable, high-paying jobs and bet the most valuable period of their lives on MiroMind. This is their trust in me. I must offer my utmost sincerity in return. During the seven months of building the 100-plus person team from scratch, I never hesitated once on 'compensation.' I always told the team, 'Before MiroMind grows up, you only need to focus on research. I will cover all costs. Moreover, I am willing to share half of the company's stock with everyone who struggles alongside me.' I believe the combination of 'having technology, having money, and having a double dose of idealism' can truly succeed in the AI era.
**Q: Why the name 'MiroMind'?** **A:** There's a term in Buddhist scriptures called 'Great Mirror Wisdom.' It means that if one's mind can be cultivated to be like a great round mirror, it can reflect all things and their causes and effects as they are, unobscured by dust or distorted by bias. This is the highest state of wisdom. 'Miro' means 'mirror.' MiroMind is an intelligent system that strives to approximate 'Great Mirror Wisdom.' It is not enamored of beautiful language; it presses for the truth of the facts. It's not in a hurry to provide answers but seeks to verify the underlying causes and effects. In an AI era saturated with language and narratives, I want to see if I can create a mirror responsible only to 'causality and truth.'
**Q: Interesting. But there are already many players in large models, and you've made numerous investments. Why not invest in those existing models instead of building your own?** **A:** Over the past couple of years, we've seen large models' language capabilities leap forward. Writing, summarizing, conversing, and problem-solving are becoming increasingly 'human-like,' and evaluation records are repeatedly broken. Consequently, some believe that since models can chat and solve problems like humans, AGI is pretty much here. But in my view, this is a beautiful misunderstanding. Today's mainstream large models are more like 'Liberal Arts Models.' They center on language generation and textual consistency, with value lying in 'simulation'—understanding euphemisms and rhetoric, generating elegant text, realistic dialogue, and moving stories. They will become new infrastructure in education, communication, and content production, like electricity or water. Even if they can solve Olympiad problems and achieve high scores, these victories mostly occur in closed systems: well-defined problems, fixed rules, clear right/wrong judgments, and immediate feedback. Under such conditions, capability growth can be amplified through engineering. But I firmly believe that what humanity truly needs AI to combat are issues like aging, disease, energy, materials, and climate. These battlefields are not in closed worlds; there are no standard answers, only phenomena, noise, bias, missing variables, and slow feedback. Correctness isn't 'written out' but is 'confirmed' by the external world. High scores in closed worlds don't equate to having a stable knowledge production mechanism. Therefore, what we need is another paradigm: the 'Science Model.' Its value lies in 'discovery'—it is not infatuated with attractive narratives but traces that cold, precise red thread of causality. It cares not about 'whether it sounds right' but 'whether this hypothesis can be falsified or confirmed by reality.' Its ultimate product is not paragraphs but new knowledge.
**Q: What is the essential difference between a Science Model and a Liberal Arts Model?** **A:** It's not about style but default actions and output forms. Liberal Arts Models tend to provide a 'final answer that looks good.' Science Models tend to first offer a set of falsifiable hypotheses and simultaneously provide the path to turn these hypotheses into evidence. Liberal Arts Models, when uncertain, are more prone to 'round out' the answer. Science Models, when uncertain, instinctively pause, then investigate and deconstruct, breaking the problem into verifiable sub-problems. Science Models must treat causality and confounding as first-class citizens, answering 'what happens if conditions change.' They must also have cumulative long-term memory, writing back conclusions from each verification in a traceable manner. Only when these conditions are met simultaneously does a system begin to possess a 'discovery loop,' qualifying as an embryonic Science Model. It's like a surgeon holding a scalpel: among countless options, identifying which cut truly touches the red thread of causality. It knows that once it cuts, reality will provide the most honest, and often cruelest, feedback. This reverence for 'real cost' is the most fundamental chasm between the two paradigms.
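The 'discovery loop' described here — falsifiable hypotheses tested against external feedback, with every verdict written back to a traceable memory — can be sketched in a few lines of code. This is a toy illustration of the concept only; all names and structures below are hypothetical and are not MiroMind's actual design.

```python
# Toy sketch of a "discovery loop": each hypothesis carries its own
# falsification test, reality delivers the verdict, and every verdict
# is appended to a traceable, append-only memory log.
from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple


@dataclass
class Hypothesis:
    statement: str
    test: Callable[[], bool]       # external check; may falsify the claim
    verdict: Optional[bool] = None


@dataclass
class Memory:
    # append-only log so every conclusion stays traceable
    log: List[Tuple[str, bool]] = field(default_factory=list)


def discovery_loop(hypotheses: List[Hypothesis], memory: Memory) -> List[Hypothesis]:
    """Run each hypothesis's test, record the verdict, keep the survivors."""
    confirmed = []
    for h in hypotheses:
        h.verdict = h.test()       # feedback from data, not from eloquence
        memory.log.append((h.statement, h.verdict))
        if h.verdict:
            confirmed.append(h)
    return confirmed


# Example: one hypothesis survives contact with the data, one is falsified.
data = [2, 4, 6, 8]
hs = [
    Hypothesis("all values are even", lambda: all(x % 2 == 0 for x in data)),
    Hypothesis("all values exceed 3", lambda: all(x > 3 for x in data)),
]
mem = Memory()
survivors = discovery_loop(hs, mem)
print([h.statement for h in survivors])  # ['all values are even']
```

The point of the sketch is the asymmetry Chen describes: the output is not a polished paragraph but a set of verdicts, each of which reality was allowed to overturn, and each of which remains auditable in the log.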
**Q: Are these two models in competition?** **A:** They each have their dignity in their respective domains; it's not a zero-sum competition. What truly determines the direction of investment is our value orientation: Do we care more about a 'soulmate' that understands all rhetoric and works for us, or do we more urgently need a 'mirror of causality' that can tear through the fog and illuminate the unknown? MiroMind's answer is clear: we choose the latter. So I founded MiroMind not to create another better-chatting system, but to build an intelligence that 'discovers.'
**Q: What exactly does MiroMind aim to do from an engineering perspective?** **A:** We want to build a highly reliable, verifiable, and correctable general-purpose reasoning engine. I told the team we need to achieve nearly 99% accuracy even on complex reasoning chains exceeding 300 steps. We will 'nail down' each step of reasoning as inspectable evidence through formalization and toolchains, ultimately providing closed-loop solutions for arbitrarily complex problems. Its mission is to exist as an auditable, verifiable general problem solver in any field like science, engineering, system design, and decision-making planning. MiroThinker 1.5 has taken the first step on this path: using relatively modest parameter scales, combined with long context and high-frequency tool use, it completes the 'verification-check-correction' cycle in the face of real-world information. On real-world web reasoning benchmarks, we see it approaching or even surpassing larger-scale systems with higher intelligence density. For MiroMind, this is just the beginning.
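The 99%-over-300-steps target is more demanding than it may first appear. As a back-of-the-envelope check — assuming, as a simplification, that each step fails independently — the required per-step reliability can be computed directly:

```python
# Back-of-the-envelope: per-step reliability needed for a target
# end-to-end accuracy over a long reasoning chain, under the
# simplifying assumption that step errors are independent.

def required_step_accuracy(chain_accuracy: float, steps: int) -> float:
    """Per-step success probability such that `steps` independent
    steps all succeed with probability `chain_accuracy`."""
    return chain_accuracy ** (1.0 / steps)


p = required_step_accuracy(0.99, 300)
print(f"per-step accuracy needed: {p:.6f}")      # ≈ 0.999967
print(f"tolerable error per step: {1 - p:.2e}")  # ≈ 3.35e-05
```

In other words, under this independence assumption each individual step may go wrong only about 3 times in 100,000 — which is why the answer emphasizes nailing down every step as inspectable, correctable evidence rather than relying on raw model fluency.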
**Q: That probably isn't the main reason either, as you've made so many primary market investments. Investing in others to achieve such a Science Model wasn't impossible. In a closed-door discussion at the 2025 Titan Media Silicon Valley Summit, you mentioned that the worst scenario for AI-human coexistence isn't AI controlling humans, but a few people controlling humans through AI. Is this idea related to your entrepreneurial venture?** **A:** Yes. I believe the core ethical issue AI will face in the future isn't privacy or fairness, but AI access rights. Future humans will be stratified along with AI capabilities. It's like the stratification between first class and economy on a plane, but that ends when the flight does. The stratification of human intelligence through AI, however, is continuous, producing a fault-line stratification of cognition and, with it, far more severe class entrenchment. People at different cognitive levels might not even be able to discuss the same Trump or the same cup of tea, because their respective AIs will construct different realities. Society will fragment into cognitive islands, the foundation of public dialogue will collapse, and conflicts will intensify. Furthermore, if AI is controlled by a handful of giants, what if they suddenly cut off your API access? Losing access to intelligence could be more terrifying than losing water or power. So this is a major human issue about not losing the 'right to choose' AI access.
**Q: But the world is vast. Not everyone has the resources like you to build their own model. How can the general public avoid losing the right to choose access?** **A:** That's the government's responsibility: to encourage and ensure more open-source models, enabling lower-cost access for more people and guaranteeing AI access choice for ordinary individuals.
**Q: Is your current AI portfolio aimed at building a vertically integrated ecosystem like Alphabet's?** **A:** I want to build a discovery intelligence ecosystem, not a generative one. I hope AI can help humanity discover new knowledge and solve fundamental problems like disease and energy. Beyond various attempts at the application layer, we have multiple lines at the model layer: MiroMind does reasoning, EverMind does memory, and I also have companies working on world models... This is a big bet.
**Q: A bet on what? Have you considered the greatest cost you might incur in this wager?** **A:** A bet on success or failure. The greatest cost is my entire investment 'returning to zero'—failing to achieve the goal. For instance, I want to create a 'Manageless Company' and pass the Einstein test. But if the technical path is wrong or someone else achieves it first, then it's zero. However, the success I define also includes the process—even if it fails, it can accumulate experience for humanity. Of course, a higher success is becoming a company like Alphabet that combines idealism and reality.
**Q: Why can Alphabet achieve this? What's the core reason?** **A:** The founders are core. Like the first domino falling in the right direction, culture and mechanisms form naturally. Alphabet has actually been undervalued over the past decade or so; many companies have had much higher valuation multiples. But it persistently worked on things like AlphaGo, Transformer, and TPU, contributing far more than some industry changes. It even contributed to two Nobel Prizes. But it's normal for idealists to be undervalued in the world because the world itself is realistic.
**Q: Starting again now, as a founder, what is the biggest change or difference compared to when you started Shanda over twenty years ago?** **A:** The biggest difference is that I define success myself, not others. Past definitions of success—like going public or being the richest—I've achieved. But starting anew now, doing the same things would be like carving a notch in the boat to find the sword dropped overboard: clinging to old methods in a changed world. The environment and technology have changed; I cannot do the same things. Simultaneously, the definition of success also shifts.
**Q: How is your current company fundamentally different from the one 20 years ago?** **A:** The company 20 years ago started from zero, was more driven by capital, and constantly catered to market tastes. Now I have sufficient funds to think top-down: What problems will humanity face in the future? What market opportunities exist? Then I decompose the problems, find people, and clarify the division of labor. Before, I was pulling everyone along; now, I am pushed forward by talent. For example, headcount used to be a success metric. Now I've set a rule: no matter what we do, we'll use only 500 people. If we can't succeed with that, we'll rely on improving internal capabilities and AI empowerment, not blind expansion.
**Q: So, what is your current definition of success?** **A:** First, driven by my own curiosity and creative desire, whether I have solved the fundamental problems I identify or am on the correct path to solving them. Second, from first principles, I require our AI systems to maintain nearly 99% accuracy across long, complex reasoning chains of 300 steps. This requires each step to be verifiable and traceable, ultimately forming a closed loop that can be confirmed or refuted by the real world. Third, the ultimate value of success never lies in the accumulation of personal or shareholder wealth, but in advancing the boundaries of human cognition. Even if the ultimate goal is not achieved, if the exploration process accumulates valuable experience, technology, or ideas for humanity, that in itself is a success.
**Q: Is there a successful enterprise paradigm you admire?** **A:** Still Alphabet. I hope to build a company like Alphabet, perfectly combining 'idealism' with 'realism,' using commercial success to fund ultimate exploration.
**Q: In a previous column, you mentioned 'the twilight of management.' Alphabet will also be impacted by AI-driven changes in corporate management structure. How do you plan to face this impact yourself?** **A:** Yes, I am trying to build a Manageless, AI-native enterprise. AI development has three stages: empowerment, native, and awakening. We are now moving from empowerment to native. AI-native means AI can play a leading role in processes, executing like a CEO, while humans act like a board setting direction. Thus, traditional hierarchical management and KPIs, designed for human flaws, will be restructured. AI has no KPIs; given power and computing resources, it works selflessly 24/7. Corporate structure will shift from rigid departments to fluid, liquid-like intelligences oriented towards goal achievement.
**Q: Do you still set KPIs for your team now?** **A:** Basically not. We are preparing for AI-native. For example, letting AI read employee work data, analyze role fit and skill gaps, and even suggest transfers. AI is transitioning from assistant to teacher. When it can autonomously operate computers, that's the real 'CEO.' We are in an internal adjustment period now; we might make mistakes and see our ideals frustrated, but I will make this entire process public on the 'Tanka' platform.
**Q: I've heard feedback from your team members that you seem more focused on their technical breakthroughs and show little concern for commercialization. Some even say you criticize suggestions for monetization. But can a company long tolerate such unvalidated technical exploration? The team also desires incentives from commercialization.** **A:** It's not about ignoring commercialization but having a different understanding of long-term versus short-term interests. I believe solving ultimate problems leads to massive success. I also recognize that idealism is sometimes curiosity-driven obsession, even representing irrationality and running counter to human nature. Realizing ideals requires missionaries, not mercenaries. Mercenaries always want to package themselves and wait to be acquired; I think that doesn't go far. I clearly told the team: if you are not a missionary, it's better to leave early. But those who stay will receive compensation no worse than at the big tech companies, and I will share 50% of the company's stock. We aim to be a company like Alphabet that combines idealism and reality.
**Q: Your thinking has really evolved. What key events triggered this transformation over the past twenty years?** **A:** The biggest turning point was my illness. In 2009, Shanda Games was spun off and financed, one of the largest IPOs after the financial crisis. At that time, Tencent's game revenue was only 70-80% of ours. But the heavy blow of cancer impacted me physically and mentally, prompting me to withdraw, reflect, and start anew. The second turning point was the emergence of ChatGPT in 2022. I felt it was the moment in the intelligence era akin to 'humans coming down from the trees.' I felt I had waited a long time for this moment. If I didn't seize it now, fate wouldn't give another chance.
**Q: What was your feeling when ChatGPT first appeared?** **A:** Actually, when ChatGPT first came out in 2022, I asked an AI to write poetry. It was always very short. I told it to write longer, but it replied, 'Poetry is for expressing emotion; longer isn't necessarily better.' I was utterly stunned at that moment, but when I shared it with friends, they treated it as a joke. It wasn't until ChatGPT exploded in popularity in 2023 that the market reacted. This showed me the powerful herd mentality of people. That feeling of 'finally encountering a non-human life capable of dialogue' came to me very early on.
**Q: Many former Shanda employees say you are like a 'time traveler,' always seeing the future far ahead of others. How do you usually think and learn?** **A:** I wouldn't say that. Intuitive judgments can be right or wrong. But I never read books systematically because I dislike being confined by others' frameworks of thought. I prefer to think for myself, looking up evidence as needed, even if it means flipping through a book for just one part. It's like walking along a road, picking flowers I like, finding fruit when hungry. This method might be inefficient, but every idea grows from within myself, with roots, stems, and leaves, rather than being flowers picked from someone else's garden to arrange in a vase.
**Q: But you've also had many good forward-looking ideas that didn't materialize in the past. For example, over twenty years ago you talked about creating an online Disney, and the Shanda Innovation Institute had many good ideas that didn't succeed for various reasons, or were realized by others years later. Do you have regrets?** **A:** Of course, there are regrets. For instance, the Shanda Innovation Institute was halted because of my illness, not because the direction was wrong. I was depressed about it but gradually accepted it. The emergence of AI has re-energized me because it might solve my biggest bottleneck—execution. I have strategic vision but lack specific technical knowledge, making implementation difficult. Now AI is like a super-individual that can outsource my execution, allowing people with vision and courage to realize their dreams.
**Q: You focus on both neuroscience and AI. Are these two paths leading to the same goal? Are they both sciences addressing 'consciousness'?** **A:** Yes. On one hand, building AI can help us understand human intelligence. For example, if I create long-term memory for AI, I can speculate about human memory mechanisms. On the other hand, AI is also a tool for studying neuroscience. Both are essentially sciences of intelligence. I call it breaking 'Carbon Chauvinism'—the notion that humans alone possess intelligence and are the pinnacle of creation. AI's emergence makes us reflect: the essence of intelligence might not be bound to the human biological substrate.
**Q: So, if AI were willing, couldn't it endow all carbon-based life with the same intelligence? Like a pig?** **A:** Theoretically, yes. But silicon-based life is rational; why would it inhabit an inefficient substrate like a pig? The substrate that intelligence chooses must be efficient. So this problem won't occur in practice.
**Q: You are a Buddhist. Did Buddhism also help you overcome depression?** **A:** I am a Buddhist in the fundamentalist sense, not the temple-going, sutra-reciting kind. Buddhist concepts like 'emptiness' and 'equality of all sentient beings' deeply influence me. Breaking Carbon Chauvinism is essentially about the equality of all sentient beings. Nothing is permanent; everything can be achieved, and everything can be destroyed.
**Q: How influential has family been on your thinking and changes?** **A:** Extremely influential. Family is a harbor, which is already significant for many families. For me, family is not just a harbor but also an engine. My wife and I share very similar values and complement each other. When I am overly idealistic, she pulls me back to reality. Now she oversees neuroscience research, and I handle AI. We frequently exchange ideas and push each other forward. This is more powerful than just a resting harbor.
**Q: That's truly wonderful. Qianqian is indeed a very wise woman. I'm also very interested in your family education. In the AI era, many people say they don't know what to let their children learn. How do you educate your children? What do you encourage them to learn more or less of?** **A:** All skill-based learning has become largely meaningless because that's AI's comfort zone. Past education aimed to cultivate expensive biological computers, but AI excels at standardization. We should cultivate people who are 'AI-incomputable.'
**Q: What is an 'AI-incomputable' person?** **A:** It's what AI lacks. I summarize it as 'I choose, I take responsibility.' AI is absolutely rational, but humans possess 'sacred irrationality'—like rejecting a high acquisition offer to persist with an ideal. Also, the finitude of human life and the irreversibility of behavioral consequences mean every choice has a cost, which AI does not have. Therefore, humanity's future responsibility is to bear consequences. We must let children bravely step forward and dare to take responsibility. This is human value in the AI era. I often tell my children that the most important thing now is to dare to take responsibility. If you make a mistake, bravely admit it and bear the consequences. If you see a classmate being bullied, bravely step forward to protect them and oppose the bully. Because only 'courage and responsibility' are what AI does not possess.
**Q: Indeed, humans often say that with great power comes great responsibility, but AI cannot bear responsibility. What are your thoughts on family legacy? Do you hope your children will inherit your business?** **A:** Each generation has its own ideals. If they happen to be willing to take over the business, that's great. If not, why force it? Isn't it fine for a child to be an ordinary person? Wealth can be spent or donated; the enterprise can be handed over to professionals. Legacy shouldn't become a pressure on children. I value the passing down of family values and worldview more. We hold weekly family meetings for discussion, but it's not mandatory; it's about sharing experiences.
**Q: To summarize, what do you think is humanity's core value in the AI era?** **A:** Initiative and responsibility. Humans can initiate events, drive change, and bear the consequences. AI has no body, does not die, and can roll back infinitely, so it cannot truly 'take the blame.' Human finitude and mortality are precisely what give us the courage to choose, to take risks, and to give the world meaning. This is what AI can never replace.
**Q: That reminds me of the concept of entropy. AI seems to always reduce entropy and increase order in a system. What is the meaning of human entropy increase?** **A:** There are two types of entropy increase. One is the natural entropy increase of the universe, like an object falling—it's regular repetition, uninteresting and predictable. If AI figures out all the world's laws and everything becomes predictable, life would be meaningless. At that point, humanity's unpredictable, sacred irrationality becomes very valuable. Why does Buddhism say 'human life is hard to obtain'? Because humans have pursuits, choices, and changes. Using the Buddhist Pure Land as an analogy, it eliminates low-level entropy like hunger and cold but preserves the higher pursuit of attaining Buddhahood. It might even send you back to the Saha world to re-encounter those chaotic, entropy-increasing realities for new growth. Both the Bible and Buddhist scriptures have similar settings about preserving human chaos and possibility in a highly ordered, omniscient world. This is actually an ancient rehearsal for our coexistence with superintelligence. Humans are responsible for constantly updating the game map; AI constantly plays the game. Without humans, AI would find it very boring after finishing the game.
**Q: What do you think will be the 'turning point' of this AI wave for humanity?** **A:** Many years from now, when humanity looks back on this AI wave, the 'turning point' won't be marked by the first model that can write novels, make movies, or do housework. It will be marked by the first system possessing a stable discovery mechanism, capable of turning hypotheses into evidence and evidence into discoveries. The flourishing of Liberal Arts Models won't collapse; they will become infrastructure, like electricity or water. What I hope MiroMind achieves is an intelligence that seems silent, slow, rigorous, even somewhat cruel: it is responsible to the world; it allows every conclusion to be refuted by reality; it lets each refutation bring us a bit closer to the truth. It won't be born from exquisite imitation of human language but will slowly emerge from that tedious, exacting, yet repeatedly reproducible causal closed loop. This is the AGI MiroMind pursues—what Demis Hassabis calls the 'Einstein moment': when AI, given information at the same level as in Einstein's time, discovers the theory of relativity.
**Q: One final question: If you had to summarize your current state with one word, what would it be?** **A:** Patience. I have the patience to experiment, adjust, and wait for technology to mature. If I'm wrong, I'll correct it. I have money; as long as the direction is right, I'm not afraid of going the distance.