Beyond AGI: A Leveled and Operational Framework for AI Capabilities
Abstract
Artificial General Intelligence (AGI) has become one of the most controversial concepts in contemporary artificial intelligence research. This paper argues that AGI, when treated as a single overarching term, has degenerated into a “dirty test tube”: its vague intension and excessively broad extension render it incapable of guiding technological development, regulating academic discourse, or clarifying research objectives, while instead amplifying capital speculation and public anxiety. By examining the intrinsic limitations of large language models, we reveal the layered nature of intelligence and the irreducible trade-offs among different capability dimensions. To address this issue, we propose the LXAI (Leveled eXtended AI) framework, which decomposes artificial intelligence into a matrix of three hierarchical levels—reactive, learning, and metacognitive—across multiple functional dimensions, each equipped with operational definitions. Furthermore, we argue that mechanisms inspired by Eastern philosophy, particularly the Yin–Yang and Five-Elements paradigm, may offer novel pathways for overcoming the structural limitations of current AI architectures. (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6162887)
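The LXAI matrix described above (three hierarchical levels crossed with functional dimensions, each cell backed by an operational test) can be sketched as a simple data structure. Everything below is an illustrative assumption: the dimension names, the `LXAIProfile` class, and the pass/fail encoding are not taken from the paper, only the three level names are.

```python
# Hypothetical sketch of the LXAI capability matrix: three hierarchical
# levels (reactive, learning, metacognitive) crossed with functional
# dimensions. Dimension names and the API are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum


class Level(Enum):
    REACTIVE = 1
    LEARNING = 2
    METACOGNITIVE = 3


@dataclass
class LXAIProfile:
    # Maps (functional dimension, level) -> whether the operational
    # test for that cell has been passed.
    cells: dict = field(default_factory=dict)

    def record(self, dimension: str, level: Level, passed: bool) -> None:
        self.cells[(dimension, level)] = passed

    def highest_level(self, dimension: str):
        """Highest level passed for a dimension, honoring the hierarchy:
        a level counts only if every lower level also passes."""
        best = None
        for level in Level:  # iterates in REACTIVE -> METACOGNITIVE order
            if self.cells.get((dimension, level)):
                best = level
            else:
                break
        return best


profile = LXAIProfile()
profile.record("language", Level.REACTIVE, True)
profile.record("language", Level.LEARNING, True)
profile.record("planning", Level.REACTIVE, True)
print(profile.highest_level("language"))  # Level.LEARNING
print(profile.highest_level("planning"))  # Level.REACTIVE
```

The per-dimension, per-level structure makes the framework's point concrete: a system is not "generally intelligent" or not, but occupies a profile across the matrix, and the hierarchy forces each higher level to rest on the ones below it.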