Can we pick up on signals of Artificial General Intelligence (AGI) advancing beyond humans?

Questneers: Sungook Hong (Seoul National University), Gunhee Kim (Seoul National University), Hyeondeuk Cheon (Seoul National University)

The remarkable advancement of generative artificial intelligence has rapidly spread both interest in and concern about artificial general intelligence (AGI) and superintelligence. We therefore need to start preparing answers to several important questions. How quickly will AGI and superintelligence emerge? If AGI arrives, what preparations are needed for peaceful coexistence between AI and humans? Most importantly, is there a way to detect signs that AGI is escaping human control?

The rapid growth of generative AI systems such as ChatGPT and Sora has strengthened the argument that AGI will emerge in the 2030s. Many AI researchers are confident that AGI will be developed soon, and some experts predict that superintelligence will follow not long after AGI is realized.

AGI is typically defined as artificial intelligence with the intelligence of a twenty-year-old human, while superintelligence is described as intelligence exceeding an IQ of 6,000. If artificial intelligence as smart as, or even smarter than, humans comes into being, how can we coexist with this technology?

Before considering coexistence with AGI or superintelligence, we first need a consensus on the definition of AGI. Without a clear definition, predictions about the possibility and timing of AGI's arrival, and about the nature of the problems it would pose, can vary widely. From an engineering perspective, the 'generality' of artificial intelligence means the ability to handle new data or tasks not included in the training data. From the perspective of philosophy of science or cognitive science, by contrast, the generality of intelligence means the ability to solve the various kinds of problems an agent encounters in its environment. If we define generality as problem-solving through adaptation to the environment, as these fields do, AGI can be expected to arrive somewhat later than AI experts predict, because progress in robotics, which must connect intelligence to the physical world, is comparatively slow. For concerns about coexistence with AGI to be meaningful, then, establishing a clear definition of AGI comes first, and that task is itself an important grand quest.

When AGI arrives in this comprehensive sense, the most pressing issue for coexistence with humans will be employment. The headline benefit AGI promises is efficiency: repetitive and time-consuming tasks will be automated in ever more fields, and humans are expected to spend the freed-up time on higher-level work and on new kinds of tasks that did not exist before. But AI's displacement of jobs will not be limited to simple tasks. Whereas it was once thought that jobs involving simple labor, such as transportation and logistics, would be the first to be replaced by AI, even creative occupations are now at risk. The emergence of AGI will therefore push human anxiety about work, from simple tasks to high-level creation, into deeper uncertainty.

Beyond employment, behind the efficiency AGI promises lie problems that fundamentally hinder coexistence with humans. The first is the sustainability of AI development aimed at AGI. AI has so far advanced with little regard for real resource constraints such as energy supply and rare-earth mining. As generative AI services requiring massive computational power have expanded, enormous amounts of electricity are now consumed to operate and cool data centers. If this continues, both the sustainability of the energy supply and the environmental impact are expected to become difficult to bear.

Another important consideration for stable coexistence between AGI and humans is the ethics and fairness of AI. Because current AI learns from data generated by humans, it is highly likely to inherit the biases and injustices of our world. Today this problem can be mitigated to some extent by specifying explicit principles and providing feedback; OpenAI and others have disclosed that they use reinforcement learning from human feedback (RLHF) so that AI and humans can coexist. Once AGI arrives, however, such systems may no longer need human feedback, improving their capabilities by collecting, analyzing, and judging data on their own. It is questionable whether human-supplied principles and feedback could still be enforced in that situation. Moreover, cutting-edge AI development is currently conducted in a closed manner by a small number of companies, and the performance gap with the open-source community is widening over time. AGI developed behind closed doors raises growing concern because third parties cannot transparently verify what data and labels were used or what goals the system is being steered toward.
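For readers unfamiliar with RLHF, the sketch below illustrates its core mechanism in a simplified form: a reward model is trained on pairs of responses that human labelers have ranked, using a pairwise preference loss, and the language model is then optimized against that learned reward. The class names, dimensions, and PyTorch code here are illustrative assumptions for exposition only, not OpenAI's actual implementation.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps a fixed-size response embedding to a scalar reward (toy stand-in
    for a full transformer-based reward model)."""
    def __init__(self, embed_dim: int = 768):
        super().__init__()
        self.score = nn.Linear(embed_dim, 1)

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(response_embedding).squeeze(-1)

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry-style objective: the human-preferred response should
    # receive a higher scalar reward than the rejected one.
    return -torch.nn.functional.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy usage: random embeddings stand in for encoded model outputs.
model = RewardModel()
chosen = torch.randn(4, 768)    # embeddings of responses humans preferred
rejected = torch.randn(4, 768)  # embeddings of responses humans rejected
loss = preference_loss(model(chosen), model(rejected))
loss.backward()
```

The point of the sketch is that human judgment enters the pipeline only through the preference labels; if a future system could generate and evaluate its own training signal, that human lever would disappear, which is precisely the concern raised above.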

While coexistence with AGI raises both great expectations and deep concerns, the most important discussion is about programs that can detect in advance the moment when AI escapes human control. Defining ahead of time the point at which, or the criteria by which, AI exceeds human expectations remains scientifically and technically difficult. Because of its complexity, AI is a black box even to its developers and researchers; AlphaGo's developers most likely could not pinpoint when it became better at Go than humans. Nevertheless, for AI and humans to coexist, multidisciplinary consensus building and technology development aimed at transparently detecting signs that AI is escaping human control must continue.

The emergence of AGI can be a double-edged sword for humanity. Discussion of how to anticipate the possibility and signs of AGI and superintelligence, the changes they will bring, and how to respond is essential for coexistence between AI and humans, and it is a grand quest.