
Nvidia CEO Jensen Huang recently claimed in public that “Artificial General Intelligence (AGI) has been achieved,” adding that AI systems autonomously operating a company or releasing low-cost applications for billions of users “is not impossible.” This is one of the most forceful public statements to date on the existence of AGI. However, the scientific community lacks a universally accepted definition of AGI, and no major scientific or regulatory body has confirmed its arrival.
(Source: X)
Artificial General Intelligence (AGI) refers to AI capable of learning, reasoning, and adapting across various fields, similar to humans, rather than the narrow systems that excel only in specific tasks like writing or programming. Unlike current AI that requires building separate models for each task, AGI should theoretically be capable of cross-domain generalization without retraining for specific tasks.
Huang illustrates his point with an example: an AI that can autonomously build and expand online services for billions of users, requiring minimal human intervention for planning, execution, and iteration. If such a capability truly exists, it would mark AI's transformation from a supportive tool into an autonomously operating system. That autonomy is the core feature of the AGI he describes and the most attention-grabbing aspect of his declaration.
Yet several unresolved gaps complicate any claim that AGI has arrived:
Lack of a universally accepted definition: There is currently no globally recognized standard for what qualifies as AGI. Different organizations and researchers apply varying criteria for “general,” making claims of “achievement” difficult to verify objectively.
Reliability limitations: Today’s AI frequently makes errors in long-tail scenarios and still shows significant weaknesses in common-sense reasoning about the real world—fundamental abilities that AGI should possess.
Unstable long-term planning: Most existing systems perform poorly on multi-step, long-duration tasks, which is one of the core capabilities expected of AGI.
No institutional certification: To date, no major scientific organization, AI safety group, or government regulator has officially confirmed that AGI has arrived.
Huang’s declaration, though controversial, points to a profound possibility. If AI truly reaches the level of AGI he describes, the impact would go far beyond technology: the ability to autonomously plan and expand large-scale software services could drastically reduce human labor costs in software development; AI agents operating companies could fundamentally change organizational structures; widespread low-cost applications for billions of users could disrupt current platform dominance by a few tech giants.
Currently, Huang’s statement fuels ongoing debate: has AI crossed that historic threshold, or is it merely approaching it? The direction of this debate will have far-reaching effects on Nvidia, the entire AI industry, and global regulatory frameworks.