Challenges Facing China’s AI Industry Development
Global competition in artificial intelligence (AI) is intensifying, and China’s AI industry stands at a critical juncture: leading in applications, catching up on fundamentals, and seeking ecosystem breakthroughs. Facing external pressures such as computing-power restrictions and competition for talent, “bottlenecks” remain in areas ranging from high-end chips to foundational algorithms, and from original innovation to industrial deployment.
International Competition Pressures AI Industry Development. Research shows that some Western countries have escalated their policies toward China from isolated technology restrictions to a systematic ecosystem blockade. Firstly, the “hard” blockade is intensifying. The U.S. has tightened restrictions on AI chip sales to China, forcing many domestic innovation teams to slow large-model development because of “computing-power hunger.” Secondly, “soft” ecosystem barriers persist. Nvidia’s graphics processing units (GPUs) hold more than 90% of the global market, and its Compute Unified Device Architecture (CUDA) ecosystem has, over more than a decade, formed a closed loop of hardware, software, and developer community. A domestic chip company in Shanghai reported that although its hardware computing power is close to the international mainstream, customers mainly ask whether it is CUDA-compatible. The problem is that replacing a chip is not a simple hardware swap but a full system migration spanning the development framework, operator library, debugging tools, and developer habits. With millions of developers deeply bound to the CUDA ecosystem, the high cost and long cycle of migration make large-scale adoption of domestic alternatives difficult even when performance meets the standard. Thirdly, competition over rule-making power is fierce. Global AI technology standards, governance norms, and cross-border data rules are still largely set by Western countries. In early 2025, the DeepSeek large model made waves in the global market with its technological breakthroughs, prompting several Western countries to issue bans or launch strict reviews. This reality is a warning: technological leadership does not guarantee market access, and lacking a voice in rule-making can block the industry’s international expansion.
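The operator-library half of the migration cost described above can be made concrete with a toy sketch (all backend and operator names here are hypothetical illustrations, not a real API): a model only runs on a given chip if every operator it calls is implemented in that chip’s operator library, so a young ecosystem with fewer kernels blocks otherwise well-performing hardware.

```python
# Toy illustration of operator-library lock-in (hypothetical names, not a
# real API). Each backend ships its own set of implemented operators; a
# model can run on a backend only if all of its operators are covered.

OPERATOR_LIBRARY = {
    # Mature incumbent ecosystem: broad kernel coverage.
    "incumbent_gpu": {"matmul", "softmax", "layernorm", "flash_attention", "conv2d"},
    # Young ecosystem: competitive raw compute, but fewer operators ported.
    "domestic_chip": {"matmul", "softmax", "conv2d"},
}

def missing_operators(model_ops, backend):
    """Return the operators the model needs that the backend lacks."""
    return sorted(set(model_ops) - OPERATOR_LIBRARY[backend])

model_ops = ["matmul", "softmax", "layernorm", "flash_attention"]

print(missing_operators(model_ops, "incumbent_gpu"))   # runs as-is
print(missing_operators(model_ops, "domestic_chip"))   # gaps must be ported first
```

Each gap reported for the new backend represents a kernel to be rewritten, validated, and maintained, which is why migration is measured in engineering cycles rather than in hardware benchmarks alone.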
Large Models Face Reliability Crises in Specialized Scenarios. While large models perform impressively in general conversation, their limitations surface in fields such as industrial inspection, medical diagnosis, and financial risk control, where precision and reliability are critical. A manufacturing company reported that slight changes in lighting caused its AI visual-inspection system to misjudge products, letting defective items be released and forcing manual re-inspection. “Impressive in the demo, failing on the production line” has become a common experience for companies deploying AI. The crux is that the generalization large models exhibit on open-domain tasks does not automatically transfer to specialized scenarios with near-zero tolerance for error; the gap between “being articulate” and “being reliably usable” is a major engineering challenge. The problem of “hallucinations” also cannot be ignored. In general-purpose scenarios such errors may be minor flaws, but in contexts like medication dosages, legal judgments, and financial risk control, a single piece of “seriously misleading information” can trigger irreversible harm. This exposes a fundamental flaw in large models: they are essentially pattern matchers rather than logical reasoners. Moving from “being able to speak” to “speaking the truth,” and from “guessing answers” to “understanding causality,” is a threshold the industry must cross to develop further.
High-Quality Datasets Still Fail to Meet Model Development Needs. Research indicates a common problem: an abundance of “raw” data but weak capacity to “refine” it. Globally, the volume of available private data far exceeds that of public data, but institutional barriers, including non-unified data standards, inadequate authorization mechanisms, and unclear compliance boundaries, leave large amounts of high-value data trapped in “data silos.” Although China possesses vast data resources, data actually usable for training large models is in short supply: in mainstream global training datasets on the scale of five billion entries, Chinese-language data accounts for only 1.3%. Bottlenecks in data circulation further prevent China from converting its advantage in data scale into core competitiveness. Copyright and legal risks are also rising. A company expanding overseas reported that its video-generation model was accused of scraping overseas platform videos for training without authorization, resulting in class-action lawsuits abroad. If data sovereignty and copyright barriers harden into new trade weapons, they could cut off domestic companies’ legitimate access to high-quality international data resources.
Commercial Applications of the AI Industry Have Not Yet Formed a Closed Loop. The AI industry stands at a crossroads between policy-driven and market-driven development, and sustainable business models are still being explored. Firstly, the “gears” of the industrial chain do not mesh: the computing-power layer is expensive and poorly matched to models, the model layer is general-purpose but lacks industry-specific customization, and the application layer consists mostly of point tools that do not interoperate, so the three segments of computing power, models, and applications lack an effective mechanism to engage one another. Secondly, companies’ profit models are unclear. Domestic users have not yet formed a habit of paying for software, so many application companies survive on project-based contracts or depend on government subsidies for “blood transfusions.” Moving from “policy transfusions” to “market blood-making” is the key to the industry outgrowing its incubation phase. Thirdly, products are hard to replicate at scale. As one industrial-AI founder admitted, “Three factory pilots succeeded, but when the client switched production lines, the solution became useless. Without standardization there is no scale; without scale, we will burn cash forever.” The difference between a “show home” and “sellable housing” lies not in any single technology but in a standardized product system that is configurable, replicable, and maintainable, which in turn requires standardized interfaces across every segment of the industrial chain.