On April 26, the sub-forum “Ultraintelligent Machine, Technical Imagination and Human Future” of Shanghai Forum 2025 was held at the Think Tank Building on Fudan University’s Handan Campus.
The forum discussed the ethical implications of artificial superintelligence (ASI) and explored the relationship between ASI advancements and humanity’s future. “Our goal is to bridge diverse perspectives and generate new ideas through reflection on ASI,” said Professor YANG Qingfeng from the Institute of Technology Ethics for Human Future, Fudan University, who moderated the sub-forum. The forum concluded with a consensus: ASI demands global, interdisciplinary collaboration to navigate its promises and perils.
Ultraintelligence, Epistemic Autonomy and Kant
“Cognitive autonomy in machines challenges our very understanding of human self-governance, blurring the lines between creator and creation,” said Professor Sven Bernecker, holder of an Alexander von Humboldt Professorship in Philosophy at the University of Cologne and the University of California, Irvine. Delving into the philosophical foundations of autonomy, he argued that artificial superintelligence could achieve epistemic autonomy and thereby redefine our understanding of autonomy itself.
Building on German philosopher Immanuel Kant’s framework of autonomy, in which true self-governance begins with independent thinking, expands to consider others’ perspectives, and ultimately returns to refined self-reflection, Bernecker argued that ASI could apply its own principles to solve problems and thereby reach epistemic autonomy.
The Omega Agent: Merging the Wisdom of the Noosphere with the Universe
Professor PAN Tianqun from the School of Philosophy, Nanjing University, introduced the concept of the Ω-agent, a superintelligent entity that could surpass human capabilities within the noosphere, a realm of collective consciousness. Drawing on thinkers such as Teilhard de Chardin, PAN envisioned a future in which Ω-agents may one day extend beyond Earth, merging with the cosmos.
Ideology battles: Technological Ambitions or Cultural Fears?
Political scientist Christopher Coenen from the Karlsruher Institut für Technologie traced the evolution of Western visions of superintelligence, from Samuel Butler’s 19th-century satire to modern transhumanist ideologies. “Science fiction often masks dangerous assumptions,” Coenen remarked, urging caution against narratives that pit humans against machines. The alternative path, according to Coenen, is to co-evolve: to learn from and even merge with superintelligence, and in turn gain a better understanding of ourselves. His historical lens revealed how cultural beliefs shape contemporary AI discourse.
Beware of Illusion Control: Superintelligence Requires Caution
LIU Yongmou, Professor at the School of Philosophy, Renmin University of China, delivered a sharp criticism of superintelligence research, labeling it a “dangerous fantasy”. LIU pointed out that relying on AGI’s formidable computational capabilities to address existing challenges poses significant risks, given that AGI itself remains an unresolved variable, “like extinguishing fire with gunpowder”.
He dismissed AGI as a marketing ploy by tech firms facing few policy restrictions, arguing that AI made to resemble human beings too closely spreads anxiety and terror. LIU advocated for the limited growth of AI: a focus on controllable, human-centric applications.
How Embodiment Dismantles the Myth of Orthogonal Intelligence
Professor Young E. Rhee from the Department of Philosophy, Dongguk University, challenged the theory that superintelligence, whether hostile or not, would eliminate human beings in the name of efficiency. From the perspective of embodied cognition, Rhee demonstrated how intelligence is shaped by physical and environmental interactions. “Goals emerge from a being’s lived experience, not abstract programming,” she said, debunking fears of ASI. Her analysis suggested that superintelligence, if achievable, would be constrained by its material conditions.
The Puzzle of Superalignment: A Dynamic Alliance Instead of a Rigid Equation
Professor YAN Hongxiu from the School of Marxism, Shanghai Jiao Tong University, discussed the problem of how we align with AGI and to what extent alignment should go. She identified four failure modes in alignment: defects in training models, reward deception, errors in goal generalization, and instrumental convergence.
From Syntax to Semantics: A Physicist’s Take on AI
Professor Alberto Suárez from the Computer Science Department of Universidad Autónoma de Madrid offered a technical counterpoint, exploring whether superintelligence could bridge the gap between symbolic logic and real-world meaning. Using quantum mechanics as an analogy, he illustrated how emergent semantics might arise from complex systems.
“Meaning is lost in abstraction but found in simplification,” Suárez concluded, acknowledging the unresolved tension between AI’s syntactic prowess and its lack of conscious intent.
ASI: Minority Reports
Professor YANG Qingfeng from Fudan University introduced the background, process, and collaborators involved in writing the report, and elaborated on its main perspectives, structure, and content. In his view, superintelligence is evolving from an imaginary issue into a real challenge, necessitating collaborative efforts from multiple stakeholders. Reflecting on the technological, general philosophical, and ethical approaches to superintelligence, he proposed contractual ethics as a strategy for handling the problem it poses.
“ASI should neither be regarded as an imaginative construct nor as an object solely shaped by existential anxiety. Instead, it should be viewed as a tangible reality,” YANG emphasized.
Writer: ZHOU Yiting
Proofreader: WANG Jingyang
Editor: WANG Mengqi, LI Yijie