
Tsinghua's Zhou Daoxu: AI in Finance is a "Double-Edged Sword"—How to Grasp the Hilt?

Deep News · 01-05

Artificial intelligence is currently reshaping the ecology and logic of the financial industry with unprecedented breadth and depth. From risk pricing to inclusive credit, from intelligent regulation to ecosystem construction, while AI empowers finance to improve quality and efficiency, it also brings profound challenges such as algorithmic black boxes, data divides, and new types of systemic risks.

In August 2025, the State Council issued the "Opinions on Deepening the Implementation of the 'AI+' Initiative," explicitly stating the need to vigorously promote the large-scale and commercial application of artificial intelligence and drive its deep integration with key sectors like finance. The Central Economic Work Conference held in December 2025 further proposed to deepen and expand the "AI+" initiative and improve related governance. As a data-intensive industry, finance is a key focal point for this strategic deployment.

Against this backdrop, what sparks will fly from the collision of the financial industry and artificial intelligence? Where lie the future potential and governance boundaries for its development? To explore these questions, we conducted an exclusive interview with Zhou Daoxu, Director of the Financial Security Research Center at Tsinghua University's PBC School of Finance and an expert member of the Beijing Municipal Expert Advisory Committee for the 15th Five-Year Plan.

Zhou Daoxu believes that the application of AI in the financial sector is far from reaching its ceiling; its role is evolving from an auxiliary tool to collaborative and even autonomous decision-making, and it will propel the industry from an "experience-driven" paradigm towards a new paradigm of "data-driven + algorithm-driven." Just as driving requires traffic rules, the development of "AI + Finance" also needs a clear, flexible, and forward-looking regulatory framework.

Promoting Development through Governance: Paving a More Stable Runway for "AI+Finance"

AI has transitioned from a phase of technological exploration into a new cycle of systemic integration and standardized development. As a dual-intensive industry in terms of both data and technology, AI governance in finance should revolve around the principles of "controllable, trustworthy, and sustainable." Specifically, efforts can focus on several key dimensions.

The primary task is to build a mechanism for "algorithm compliance and transparency." AI decision-making should not become a "black box," especially in areas such as credit approval, risk assessment, and investment advice. There is a need to establish systems for algorithm registration, explanatory notes, and third-party audits. In fact, since 2024, China has been piloting an "algorithm instruction manual" system in some financial institutions, requiring key AI models to possess features of traceability, interpretability, and verifiability.

The second aspect is data governance and privacy protection. The effectiveness of AI relies on high-quality data, but issues like data misuse, leakage, and discriminatory use accompany this dependency. This requires strict enforcement of the Data Security Law and the Personal Information Protection Law, promoting the classification, grading, authorized use, and desensitization of financial data, and actively exploring the implementation of privacy-preserving computation technologies that enable "data usability without visibility" in financial scenarios.
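The "data usability without visibility" principle mentioned above is commonly realized through techniques such as secure multi-party computation. As a minimal illustrative sketch (not any specific production system), additive secret sharing lets several institutions compute a joint total without any party seeing another's raw figure; the bank names and exposure values below are hypothetical:

```python
import random

PRIME = 2**61 - 1  # large prime modulus for modular arithmetic


def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n additive shares; any subset of fewer than n
    shares is statistically independent of the value."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares


def reconstruct(shares: list[int]) -> int:
    """Recombine shares into the original value (mod PRIME)."""
    return sum(shares) % PRIME


# Three hypothetical banks each hold a private exposure figure; the
# aggregator learns only the total, never any individual value.
exposures = [120, 340, 75]
per_bank_shares = [share(v, 3) for v in exposures]
# Each computing party sums the one share it receives from every bank.
party_sums = [sum(col) % PRIME for col in zip(*per_bank_shares)]
total = reconstruct(party_sums)
print(total)  # 535
```

Real deployments layer authentication, dropout handling, and malicious-party protections on top of this basic idea, but the sketch captures why the aggregate is computable while the inputs stay invisible.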

Furthermore, there is a need to establish a dynamic risk monitoring system. AI could trigger new types of systemic risks, such as pro-cyclical behavior caused by model homogenization or algorithmic resonance driven by public sentiment. It is suggested that regulatory agencies and financial institutions jointly develop an "AI risk dashboard" to monitor key risk indicators like model bias, data drift, and anomalous decisions in real-time.
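One of the "dashboard" indicators named above, data drift, is often tracked with the Population Stability Index (PSI), which compares a model's training-time score distribution against live production scores. The sketch below is a minimal self-contained illustration; the 0.25 alert threshold is a common industry rule of thumb, not a regulatory standard:

```python
import math


def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline sample and a live sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def bin_fractions(data: list[float]) -> list[float]:
        counts = [0] * bins
        for x in data:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # floor each fraction so empty bins do not produce log(0)
        return [max(c / len(data), 1e-4) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))


baseline = [i / 100 for i in range(100)]        # training-time score distribution
live = [0.3 + i / 200 for i in range(100)]      # shifted production scores
drift = psi(baseline, live)
print(drift > 0.25)  # PSI above ~0.25 is commonly read as significant drift
```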

Finally, the refinement of ethical norms and liability attribution is crucial. When AI decisions cause losses, how should the responsible entity be determined? Efforts should be made to promote the fundamental principle of "human ultimate responsibility," clarify the legal responsibilities of parties involved in AI development, deployment, and use, and explore the formulation of financial AI ethics guidelines to prevent issues like algorithmic discrimination and lack of fairness at the source.

In summary, governance is not about restricting development but about paving a more stable runway for "AI + Finance." The prevailing philosophy is shifting from "develop first, govern later" to "governing while developing," or even "promoting development through governance."

Currently, AI is mainly applied in the financial industry to optimize processes and external services, and its role remains auxiliary, not yet capable of replacing human decision-making. Looking ahead, what potential and space for application of AI in the financial industry do you foresee?

The application of AI in the financial sector is far from reaching its ceiling. Currently, its role is indeed "auxiliary," but in the future, it will progressively move towards "collaborative" and even "autonomous." This evolution is likely to open up new possibilities in several directions.

One direction is the shift from "process optimization" to "decision-making restructuring." While AI currently focuses on process segments like customer service, underwriting, and anti-money laundering, it may in the future play a more critical role in core decision-making areas such as investment research, asset allocation, and credit pricing. For example, a "financial brain" based on multimodal large language models could integrate multi-source information including macroeconomic data, industry trends, and corporate sentiment, providing all-weather, multi-dimensional intelligent support for investment decisions.

Another direction is the move from "point solutions" to "ecosystem synergy." AI will drive the deep integration of finance with industrial, governmental, and social data, constructing "context-aware financial intelligent agents." For instance, an AI-based supply chain finance platform could analyze the operational data of upstream and downstream enterprises in real-time, automatically triggering credit granting, loan disbursement, and risk control processes, thereby achieving precise targeting of financial resources.

An important additional direction is the transition from "passive response" to "active foresight." AI has already demonstrated advantages in risk early warning, fraud detection, and market volatility prediction, and will further advance towards "preemptive intervention." For example, using time-series forecasting models and complex network analysis, systems could potentially identify the transmission paths of regional financial risks weeks in advance, providing regulators with valuable lead time for decision-making simulations.
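The complex-network analysis mentioned above can be illustrated with a toy threshold-contagion model: an institution defaults once its exposure to already-defaulted counterparties exceeds its capital buffer, and the simulation traces the transmission path wave by wave. All institution names and figures below are hypothetical:

```python
# exposures[a][b]: amount bank a has lent to bank b (hypothetical figures)
exposures = {
    "A": {"B": 50, "C": 30},
    "B": {"C": 60},
    "C": {"D": 40},
    "D": {},
}
capital = {"A": 60, "B": 55, "C": 30, "D": 10}


def contagion(initial_defaults: set[str]) -> list[set[str]]:
    """Return the wave-by-wave spread of defaults through the network."""
    defaulted = set(initial_defaults)
    waves = [set(initial_defaults)]
    while True:
        wave = {
            bank for bank, loans in exposures.items()
            if bank not in defaulted
            and sum(v for b, v in loans.items() if b in defaulted) > capital[bank]
        }
        if not wave:
            return waves
        defaulted |= wave
        waves.append(wave)


# D's failure wipes out C's buffer, which in turn topples B, then A.
print(contagion({"D"}))  # [{'D'}, {'C'}, {'B'}, {'A'}]
```

Real early-warning systems replace the static balance sheet with forecasted exposures, which is where the time-series models come in, but the propagation logic is the same.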

The essence of finance is "risk management," while the core of AI is "understanding patterns." The deep integration of the two will propel finance from an "experience-driven" model towards a new paradigm of "data-driven + algorithm-driven."

Confronting New Risks: The Deep Application of AI Requires "Traffic Rules"

From the specialized perspective of financial security, the deep application of AI also introduces unprecedented new risks to the financial system. Which security risks currently demand the most vigilance? What countermeasures do you propose?

AI is like a "double-edged sword"; while enhancing efficiency, it also introduces the following categories of new risks that require high alertness:

First, model risk and algorithmic resonance. If multiple financial institutions adopt similar AI models, it could lead to "collective misjudgment," triggering systemic cascades under extreme market conditions. The significant losses suffered by a certain quantitative fund in 2023 due to algorithmic homogenization have already sounded an alarm. In response, efforts should promote model diversity assessments, encourage differentiated algorithm design, and establish an "algorithmic stress testing" mechanism.

Second, data poisoning and adversarial attacks. AI models rely on training data, but this data can be maliciously injected with noise or counterfeit samples, leading to model decision-making failures. Instances have already emerged where fraudsters use generative AI to forge faces, voices, and transaction records to bypass risk control systems. Therefore, it is essential to actively develop "AI security defense technologies" such as adversarial training, anomaly detection, and dynamic verification.

Third, ethical lapses and fairness deficits. Algorithms can amplify biases present in historical data, leading to "digital discrimination," such as systematically lower credit limits for certain demographic groups. This should be addressed through a dual approach involving "fairness auditing" processes, combining technical means like fairness-constrained algorithms with institutional designs such as diverse review mechanisms.
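A common quantitative check in the fairness audits described above is the disparate impact ratio, which compares approval rates across demographic groups; the "four-fifths rule" reads ratios below 0.8 as a signal warranting review. The group labels and counts below are hypothetical:

```python
def disparate_impact(approved: dict[str, int], applied: dict[str, int]) -> float:
    """Ratio of the lowest group approval rate to the highest.

    Under the common 'four-fifths rule', values below 0.8 flag potential
    discrimination and trigger a deeper fairness review.
    """
    rates = {g: approved[g] / applied[g] for g in applied}
    return min(rates.values()) / max(rates.values())


# Hypothetical application and approval counts per demographic group
applied = {"group_a": 1000, "group_b": 1000}
approved = {"group_a": 480, "group_b": 300}
ratio = disparate_impact(approved, applied)
print(round(ratio, 2))  # 0.62 -> below 0.8, so the model warrants review
```

Such a metric is only a screening tool; the institutional side of the "dual approach", such as diverse review committees, decides what the number actually means.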

Furthermore, the "regulatory gray area" arising from legal lag cannot be ignored. The iteration speed of AI far exceeds the pace of regulatory updates. It is recommended to actively promote "RegTech" (Regulatory Technology), using AI to monitor AI, thereby achieving real-time, precise, and penetrating supervision, and to accelerate the introduction of the "Artificial Intelligence Law" and related implementation rules for financial AI.

Financial security is a crucial component of national security. In the AI era, we must adopt the concept of "integrating technological security with financial security," building a full-chain protection system covering "technology—data—models—application."

In balancing the dual objectives of leveraging AI to enhance financial quality and efficiency while maintaining risk control, what can financial regulation do? How can a balance between "deregulation" and "control" be found?

Balancing "deregulation" and "control" essentially means balancing "innovation" and "security." In recent years, China's financial regulation has achieved positive results in encouraging innovation and preventing risks. To achieve dynamic unity between innovation and security in the future, regulation can further focus on the following aspects.

On one hand, deepening "regulatory sandboxes" and innovation pilots is key. Allowing financial institutions to test new AI products and models in a controlled environment enables regulators to simultaneously observe risks, accumulate experience, and refine rules. Multiple batches of fintech sandbox pilots have been conducted in places like Beijing, Shanghai, and Shenzhen; the future should involve expanding their scope, deepening the scenarios, and particularly encouraging exploration in strategic areas like inclusive finance, green finance, and pension finance.

On the other hand, there should be active development of "intelligent regulatory platforms." Utilizing AI technology to enhance regulatory efficiency achieves "using technology to manage technology." Examples include building a "National Financial AI Regulatory Database" to access key model operation logs; developing "Regulatory Intelligent Agents" to automatically identify violation patterns and risk transmission paths; and constructing "Cross-Market Risk Early Warning Systems" to break down data silos.

Simultaneously, implementing "tiered and classified regulation" is also crucial. Based on the risk level, impact scope, and technological maturity of the AI application, differentiated regulatory requirements should be applied. Low-risk AI tools (e.g., intelligent customer service) could be subject to a filing system, while high-risk AI systems (e.g., autonomous trading models) would require a licensing system and continuous monitoring.

"Deregulation" does not mean laissez-faire, and "control" does not mean stifling control. The goal of regulation is to create a development environment that "encourages innovation, is inclusive and prudent, and has clear bottom lines." Just as driving requires traffic rules, the development of "AI + Finance" also needs a clear, flexible, and forward-looking regulatory framework.

Reshaping the Curriculum System: Cultivating "Strategic Talents" Capable of Harnessing AI

As AI takes on more analytical and even decision-making tasks, the requirements for talent in the financial industry will also increase. As an academic researcher, how do you believe the path for cultivating financial talent in universities should keep pace with the times and undergo corresponding changes?

AI will not replace financial professionals, but it will replace those who do not understand AI. The essence of education is to face the future, and the future belongs to interdisciplinary talents who understand finance, technology, and, even more importantly, society. Therefore, the university cultivation system must transition from "knowledge transmission" to "capability reconstruction."

Specifically, the curriculum system needs to be reshaped first, strengthening the three-dimensional integration of "Finance + Technology + Ethics." Traditional finance courses need to embed practical modules like Python and machine learning, while also offering cutting-edge courses such as "AI Ethics" and "Algorithmic Governance" to cultivate students' values regarding technology for good.

Building on this, it is essential to promote the deep integration of industry, academia, research, and application. Universities should establish joint laboratories and practical training bases with financial institutions, technology companies, and regulatory bodies, encouraging students to participate in real AI finance projects and undergo full-cycle practical training from requirement analysis and model building to compliance assessment.

The ultimate goal is to focus on cultivating critical thinking and innovative leadership. AI excels at execution, but humans excel at questioning, critiquing, and creating. Financial education must strengthen the training of soft skills like complex problem-solving, systems thinking, and cross-boundary communication, thereby cultivating "strategic talents" capable of harnessing AI, rather than merely operating it.

