Artificial Intelligence and Educational Transformation

WANG Xuenan1, LI Yongzhi2

(1.Research Institute of Digital Education, China National Academy of Educational Sciences, Beijing 100088; 2.China National Academy of Educational Sciences, Beijing 100088)

[Abstract] The rapid iteration of generative artificial intelligence has attracted wide attention, and education appears to be one of the fields most directly and profoundly affected. Assessments of artificial intelligence tend to overestimate its short-term impact and underestimate its long-term impact. From the perspective of historical materialism, this study summarizes the development of artificial intelligence in stages, objectively analyzes its current level of development and technical limitations, and judges that it still differs fundamentally from human intelligence, especially in higher-order cognition and social emotions. The study then examines the isomorphic logic between education and the technical principles of artificial intelligence along six main dimensions: structural comparison, logical comparison, symbolic coding, content analysis, interaction mechanisms, and training modes, clarifying the intrinsic correlation and mechanism of action between the two. It suggests starting from three direct and crucial issues, namely the cultivation of students' higher-order thinking, the construction of new teacher-student relationships, and the innovation of teaching models, in order to leverage artificial intelligence for educational transformation.
[Keywords] Generative Artificial Intelligence; Isomorphic Logic; Educational Transformation; Technical Limitations; Educational Response
1. Introduction
Since the official release of ChatGPT on November 30, 2022, generative artificial intelligence has achieved rapid breakthroughs in human-computer interaction, moving from open text dialogue to text-to-image, text-to-video, and then multimodal interaction in less than two years. Its pace of development far exceeds the speed at which humans can reflect and respond. This intense uncertainty has prompted people to pay closer attention to the iteration of artificial intelligence and the social changes it triggers. Education, traditionally viewed as a slow variable and the most stable field, is now widely regarded as one of the areas most directly and rapidly affected by artificial intelligence. Therefore, maintaining a rational, open, objective, and rigorous attitude, and examining from the perspective of historical materialism the true level of artificial intelligence development, the transformation of education, and the intrinsic correlation and mechanism between the two, will provide foundational viewpoints and perspectives for empowering high-quality educational development with artificial intelligence.
2. Rational Perspective on the Current Development Level of Generative Artificial Intelligence
Generative artificial intelligence, represented by ChatGPT, can complete complex tasks such as information retrieval, question answering, content creation, and code generation through its powerful natural language processing capabilities, approaching and in some tasks partially substituting for human intelligence. The current "emergence of intelligence" benefits from the richness of data, the growth of computing power, the vitality of the open-source ecosystem, and the optimization of multimodal large models. Fundamentally, however, it is not a brand-new technology but a stage product in the development of artificial intelligence, and no groundbreaking qualitative change has yet occurred at the technical level. Although the history of artificial intelligence is not long, to objectively assess its true current level and future trends, the present technological explosion must be placed within the historical context of artificial intelligence development and technological revolutions, so as to uncover its impacts on and challenges to education.
(1) Three Stages of Artificial Intelligence Development
In 1950, the famous Turing Test marked the beginning of artificial intelligence. The Dartmouth Conference in 1956 officially proposed the concept of “artificial intelligence,” marking the birth of the discipline. After nearly 70 years of development, the research areas within artificial intelligence have experienced multiple differentiations and integrations. After undergoing two famous “AI winters” due to insufficient applications, limited computing power, and lack of funding, the discipline has once again entered a stage of rapid development.
The development of artificial intelligence can be divided into three stages by level of intelligence: computational intelligence, perceptual intelligence, and cognitive intelligence. The first is the computational intelligence stage (roughly 1950-2000), in which machines store and compute information. The second is the perceptual intelligence stage (roughly 2000-2021), in which machines capture signals from the physical world through sensors, understand some intuitive aspects of it, and efficiently complete tasks related to "seeing" and "hearing." The third is the cognitive intelligence stage (2022 to the present, marked chiefly by the release of ChatGPT), in which machines are expected to possess thinking and learning abilities similar to humans and to make decisions and act autonomously. The scientific community, however, generally holds that artificial intelligence has not truly reached this stage and is still in early exploration.
(2) Three Trends in Artificial Intelligence Development
Throughout the development of artificial intelligence, two main technical paths have emerged. One is Symbolism, represented by symbolic reasoning, which holds that artificial intelligence should mimic the logical way humans acquire and apply knowledge. The other is Connectionism, a form of cognitive bionics driven by data and model learning, which follows the principle of mimicking human neurons and uses the connection mechanisms of neural networks, big data, and training to acquire knowledge. The two schools have experienced alternating cycles of rise and fall, each flourishing under different technological routes and development models; together they have shaped both the theoretical foundations and the technical implementations of artificial intelligence, reflecting scientists' relentless efforts to understand and simulate human intelligence. As the understanding of artificial intelligence matures, the Connectionist path may slow, while Symbolism is likely to flourish again. Even leading representatives of Connectionism, such as Yann LeCun, Fei-Fei Li, and Geoffrey Hinton, have stated that the current technological route cannot produce AI with genuine sentience or perception. On this basis, this article preliminarily judges that the future development of artificial intelligence will show the following three major trends:
First, the evolution from cognitive large models to multimodal large models. Traditional AI models focus on processing information from a single modality, primarily understanding and generating natural language. Multimodal large models, by contrast, can handle text, images, audio, video, and code, supporting content synthesis tasks and integrating multiple information sources. Human intelligence and learning are inherently multimodal: humans perceive the world through eyes, ears, mouth, nose, tongue, and body. Multimodal learning likewise allows artificial intelligence to better approximate the authentic situations created by human multi-sensory learning.
Second, the deepening application from general-purpose large models to "large-small model collaboration." The growth of model scale, data scale, and computing power drives model performance upward in what has been called a new "Moore's Law," with improvements following a power-law relationship; the cost of such scaling has become a practical barrier for large models seeking to enter industry-specific applications and create value. Small models can learn from large models through knowledge distillation, and in turn small models can feed results back to large models and improve their training accuracy. Collaboration between large and small models is therefore an effective way to reduce training and application costs while improving flexibility, applicability, and efficiency, as sketched below.
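To make the idea of knowledge distillation concrete, the following minimal sketch (assuming PyTorch; the model sizes, temperature, and blending weight are illustrative, not a prescribed recipe) shows a small "student" network being trained to match both the ground-truth labels and the softened output distribution of a frozen "teacher" network.

```python
# Minimal sketch of knowledge distillation: a small "student" model learns to
# match the softened output distribution of a large "teacher" model.
# Assumes PyTorch; model classes and sizes are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend cross-entropy on hard labels with a KL term that pulls the
    student's softened distribution toward the teacher's."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients stay comparable across temperatures
    return alpha * hard + (1 - alpha) * soft

# Toy usage: a large teacher and a small student over the same 1000-class task.
teacher = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 1000))
student = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 1000))

x = torch.randn(32, 512)                  # a batch of input features
labels = torch.randint(0, 1000, (32,))
with torch.no_grad():                     # the teacher is frozen during distillation
    t_logits = teacher(x)
loss = distillation_loss(student(x), t_logits, labels)
loss.backward()                           # only the student's parameters receive gradients
```

In practice the teacher would be a pre-trained large model and the student a much smaller, deployable one; the softened targets carry richer information than the hard labels alone.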
Third, the shift from language intelligence to embodied intelligence (Embodied AI). In existing large model applications, AI tools are often embedded within existing processes to enhance efficiency, without generating innovative value at the underlying logic and native levels. The shift from abstraction to reality provides opportunities for developing and applying autonomous and adaptive artificial intelligence agents (AI Agents). To create an AI agent that can work in the real world, it is not enough to train solely in text environments; it must possess the ability to perceive the physical properties of the real world. Generative artificial intelligence technologies represented by GPT-4o not only enable human-computer interaction in digital and physical spaces but also provide emotional value, indicating that emotional computing will be a key focus in future artificial intelligence research.
(3) The Current Level of Artificial Intelligence Development: Still Qualitatively Different from General Artificial Intelligence and Human Intelligence
Currently, the astonishment at artificial intelligence's ability to provide emotional value, cognitive interaction, and collaborative value stems largely from low initial expectations of its capabilities, which remained at the level of fixed, mechanical chatbot dialogue or the AlphaGo human-machine Go matches. In fact, the current level of artificial intelligence is still far from general artificial intelligence, and a significant qualitative gap remains between it and humans, especially in higher-order cognition and social emotions.
In an interview during the 2024 National Two Sessions, Director Zhu Songchun pointed out that the "general" in general artificial intelligence has a specific academic meaning. In everyday physical and social scenarios, a generally intelligent system must meet three basic conditions: first, it must be able to complete an unbounded range of tasks, rather than only the few tasks defined for it by humans; second, it must actively and autonomously discover tasks in a scene, "seeing what needs to be done" on its own; third, it must have autonomous values that drive its actions, rather than being passively driven by data. Although applications such as ChatGPT, Claude-3, Wenxin Yiyan, and iFlytek Spark are recognized as relatively successful both in China and abroad, they have not yet met the standards of general artificial intelligence and do not possess capabilities equivalent to humans. While they surpass humans in data processing, memory, combinatorial creativity, speed, and accuracy, they lack human emotional rationality, value systems, cognitive and reasoning abilities, and the zero-to-one creativity of genuine innovation. Large models show significant deficiencies in simulating the real world, whether through encoding external information or relying on intrinsic first principles (i.e., scaling laws), exhibiting strong dependence on data, poor interpretability, and a lack of commonsense understanding. If these issues can be resolved in the coming years, the intelligence level of large models is expected to improve further, allowing them to integrate better into social applications.
3. Technical Limitations of Generative Artificial Intelligence in Educational Transformation
Currently, although generative artificial intelligence still belongs to weak artificial intelligence, its iteration speed and performance have far exceeded original expectations. Analyzing its technical limitations from an educational perspective breaks through the limitations of past grand narratives or purely micro-level arguments about technological development and educational transformation, and allows a scientific analysis and rational questioning, with complexity thinking, of the current and future impacts of artificial intelligence on education. Developing artificial intelligence, training large models, and educating children exhibit an isomorphic logic. Taking the elements and processes of education as the logical thread, this article discusses six aspects: structural comparison, logical comparison, symbolic coding, content analysis, interaction mechanisms, and training modes.
(1) Structural Comparison: Large Models vs. Human Brain
Artificial intelligence essentially mimics the organizational structure and operating mechanisms of the human brain, materializing human intelligence; reproducing human cognition within computational systems is the key. The GPT-3 large language model already has 175 billion parameters, while GPT-4 is reported to reach about 1.8 trillion parameters with a training cost of roughly $63 million. As language intelligence develops, models become increasingly powerful, with stronger generalization and task-solving abilities. Large models attempt to approximate the neurons of the human brain ever more closely by continually increasing parameter counts. The human brain, however, contains tens of billions of neurons connected by on the order of a hundred trillion synapses; neurons communicate through electrical signals and form networks of such complexity that humanity has still not fully understood their operating principles. Von Neumann argued in "The Computer and the Brain" that "neurons of the same volume can perform more computations than artificial components, can process more information simultaneously, and have far larger memory capacity; each neuron's accuracy may be low, but the overall reliability is comparatively high." In other words, if the human brain is organically connected, artificial intelligence is mechanically connected, and the two cannot be compared in intrinsic richness and complexity. Following the current trajectory of computing, the parameter counts of large models may within a few years approach the hundred-trillion scale of the brain's synaptic connections. According to scaling laws, rationally allocating model parameters and training data can yield effective models within a limited budget or expected computation time. However, the relationship between parameter count and model intelligence is not a simple linear one; the mechanisms of perception, cognition, reasoning, and innovation differ between large models and the human brain. Blindly pursuing larger parameter counts therefore neither guarantees performance that fully simulates human intelligence nor necessarily represents the future direction of large models.
(2) Logical Comparison: Probabilistic Reasoning vs. Conceptual Reasoning
Probabilistic reasoning and decision theory provide important methods of thinking and bases for decision-making in artificial intelligence systems. By building Bayesian networks and using reinforcement learning, artificial intelligence systems can make decisions from past experience and observations, improving decision accuracy and efficiency. At the same time, AI based on probabilistic reasoning has inherent technical limitations.
On one hand, artificial intelligence is based on probabilistic reasoning, while human intelligence is based on conceptual reasoning, and this marks a qualitative difference between the two. Probabilistic reasoning computes over existing information and data to find the most probable outcome. Conceptual reasoning, which belongs to formal logic, is based on concepts, the abstract symbolic products of human cognitive activity that express understanding, induction, or categorization of entities and phenomena through language, and it reflects the higher-order forms of thought by which people recognize and understand things. As long as computing has not broken through the von Neumann architecture and binary logic, all stored computation ultimately reduces to additive relationships and unfolds in low-dimensional spaces. Generative artificial intelligence has not broken through the probabilistic reasoning model of computation; it merely operates with the support of big data, high computing power, and large models, augmented by reinforcement learning from human feedback (RLHF), allowing machines to make decisions under uncertainty and produce the highest-probability outputs closest to human thought. Large models cannot apply a single algorithm to solve arbitrary problems; artificial intelligence can only respond to determinate instructions. The human brain, by contrast, can face different problem scenarios, execute multiple tasks simultaneously, switch freely among them, and cope effectively with uncertainty. It is evident that artificial intelligence currently remains at the lower-order stages of logical, probabilistic, and causal reasoning, unable to exhibit the high-dimensional character of human intelligence. A concrete illustration of probabilistic next-token prediction is sketched below.
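As a concrete illustration of the difference the text draws, the following minimal sketch (with an invented four-word vocabulary and made-up scores) shows what probabilistic next-token prediction amounts to: scores become a probability distribution, and the output is whichever continuation is most likely, not a conclusion derived from concepts.

```python
# Minimal illustration of probabilistic next-token prediction: the model assigns
# a score to every candidate token, turns scores into probabilities with softmax,
# and emits the most probable (or a sampled) continuation. The vocabulary and
# scores below are invented purely for illustration.
import math
import random

vocab = ["Paris", "London", "Rome", "banana"]
logits = [6.1, 3.2, 2.8, -1.5]   # raw scores for "The capital of France is ..."

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
for token, p in zip(vocab, probs):
    print(f"{token:>7}: {p:.3f}")

# Greedy decoding picks the single most probable token ...
print("greedy:", vocab[probs.index(max(probs))])
# ... while sampling occasionally picks a lower-probability token, which is one
# source of fluent but unverified output.
print("sampled:", random.choices(vocab, weights=probs, k=1)[0])
```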
On the other hand, generative artificial intelligence struggles to break through linear, fragmented chains of causal logic and cannot generate real, specific, practice-grounded content in real time from diverse social cultures and ethics. This does not mean its output lacks creativity; rather, without logical systems, ethical norms, and practical verification, it is often "too creative," frequently producing "knowledge hallucinations." From an engineering perspective, generative artificial intelligence can indeed produce unexpected flashes of wisdom. In terms of the validity of knowledge generation, however, its knowledge creation is achieved by training on past big data, akin to "driving while looking in the rearview mirror." Marshall McLuhan vividly described this "rearview mirror effect": we solve problems with inherited experience, "we look at the present through a rear-view mirror; we march backwards into the future." This marks a fundamental difference from real educational scenarios. The education of children and human learning occur within authentic teacher-student interaction and practical, hands-on contexts, cultivating character through action and construction, combining the scientific knowledge system of human wisdom with the new experience continually generated in present, real situations.
(3) Symbolic Coding: Language Coding vs. Implicit Knowledge
Language is a unique symbolic system of humanity, a system of vocabulary and grammatical rules with sound as its material shell and semantics as its content; language itself is a form of coding. Whether the content of education can be coded and decoded therefore becomes a critical distinction between "what can be said" and "what cannot be said." Michael Polanyi proposed in "The Study of Man" that human knowledge is of two kinds: explicit knowledge, which can be articulated in written words, diagrams, or mathematical formulas, is only one type; the other type, tacit (implicit) knowledge, cannot be systematically articulated and is akin to the knowledge we have in the act of doing something. He pointed out that compared with explicit knowledge, tacit knowledge is characterized by the following: first, it cannot be fully expressed in logical form through language, writing, or symbols; second, it cannot be transmitted through formal school education, mass media, and the like; third, it cannot readily be subjected to "critical reflection."
Thus it is evident that the development of artificial intelligence, centered on natural language understanding and processing and on machine learning, fundamentally relies on corpora and data that can be coded and logically constructed. The intelligence of large language models is built on explicit knowledge that can be recorded, coded, and disseminated through language, while tacit knowledge, the other form of knowledge, is often overlooked. Every form of coding has inherent limits in what it can express and the meaning it can construct; the limits of textual expression constrain the development of intelligence in multimodal large models, since repeated coding and conversion filter and attenuate information. As Wittgenstein wrote, "Language disguises thought; from the outward form of the clothing one cannot infer the form of the thought beneath it." Language is a tool for human thinking and communication, yet its expressive power is limited and cannot fully capture and describe the complexity of the real world. Language is at once a scaffold for thought and a shackle on it. In human learning and evolutionary development, tacit knowledge often occupies the larger share, carries greater significance, and poses greater challenges, such as distinguishing colors along the spectrum or sensing the texture of materials through touch.
Artificial intelligence struggles with knowledge of principles, procedural methods, and values, and is largely powerless in generative, emotional, and practice-based teaching. Such knowledge and teaching are not easily "told"; they are better "shown," learned through action within rich, complex, and precise multimodal interaction that establishes genuine connections among body, mind, brain, and action. Moreover, even when experience is expressed in language, much of its context and background is lost to the recipient, and when the recipient interprets it from their own perspective, many nuances present for the original speaker disappear. Therefore artificial intelligence built on large language models mainly ingests explicit knowledge that can be encoded and computed in language or other symbols, while its self-supervised language models cannot acquire knowledge of the real world; fundamentally, what they do is "compression."
(4) Content Analysis: Massive Data vs. High-Quality Data
Although researchers in the scientific community and industry have not reached clear consensus on many questions about artificial intelligence, there does seem to be consensus that data quality is the key to the next stage of emergent capability in large models. In the production relations of large models, data is the raw material, computing power is the productive force, and algorithms are the tools of production.
Generative artificial intelligence, represented by ChatGPT, is at once labor-intensive, technology-intensive, and capital-intensive. The vast majority of computing power is consumed in pre-training, and a great deal of foundational labor goes into data collection and cleaning; fine-grained, high-quality data annotation is likewise a major labor-intensive task. Much of this foundational work is aimed at obtaining high-quality data.
Regarding the influence of training data volume (training tokens) and model parameter count on performance, OpenAI's 2020 work improved the intelligence of large models mainly by expanding model parameters. DeepMind's later findings changed this view: under a limited compute budget, having more and better training data matters more than simply enlarging the parameter count. A rough numerical sketch of this compute-optimal trade-off follows.
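The snippet below applies the approximation often drawn from the DeepMind (Hoffmann et al.) results, namely that training compute is roughly 6 × parameters × tokens and that tokens should scale to roughly 20 times the parameter count; both constants are approximate rules of thumb rather than exact laws.

```python
# Rough sketch of the compute-optimal trade-off reported by Hoffmann et al. (2022):
# for a fixed training compute budget C ≈ 6 * N * D (N parameters, D tokens),
# the results imply roughly D ≈ 20 * N, i.e. data should grow with model size.
# The constant 20 is an approximate rule of thumb, not an exact law.
def compute_optimal(budget_flops, tokens_per_param=20.0):
    # C = 6 * N * D with D = tokens_per_param * N  =>  N = sqrt(C / (6 * tokens_per_param))
    n_params = (budget_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

for budget in (1e21, 1e23, 1e25):   # illustrative compute budgets in FLOPs
    n, d = compute_optimal(budget)
    print(f"budget {budget:.0e} FLOPs -> ~{n / 1e9:.1f}B parameters, ~{d / 1e9:.0f}B tokens")
```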
In our traditional understanding, China is generally believed to hold a comparative advantage in massive data in this new wave of artificial intelligence. The reality is different, and in education the shortage of high-quality usable data is especially prominent. Although China has the world's largest population of teachers and students, who continuously generate new data in daily teaching and educational administration, the high-quality data currently available comes mainly from static, sedimented professional texts such as books, news articles, and scientific papers. This is far from sufficient for optimizing and deepening the application of large models, for example in moving from general-purpose models to education-specific models, because free public data on the internet lacks the depth and precision demanded by the strong professionalism and high accuracy that education-specific models require. Although China possesses massive educational big data, including multimodal teaching data, the amount of high-quality, structured, computable, and effective data is limited. The main problems are incomplete and non-uniform data standards, narrow coverage of data collection, insufficient professionalism in model construction, single and mechanical application services (mostly adaptive-practice and question-bank products), a lack of open sharing, and inadequate privacy protection. In particular, the absence of standards and data for teaching environments and processes severely restricts the development and accumulation of educational big data. Excavating the value behind existing data, strengthening future data management, clarifying industry standards, and establishing rules for data use so that large models can be trained on sufficient and accurate professional data is therefore the fundamental prerequisite for empowering education with generative artificial intelligence.
(5) Interaction Mechanism: Reinforcement Feedback and Teaching Interaction
In information processing, human feedback is key to enhancing the "intelligence" of large models. Reinforcement learning from human feedback (RLHF) is a training paradigm in generative artificial intelligence that guides the behavior of intelligent systems through human feedback. Large language models (LLMs) generate diverse texts from human prompts chiefly through contextual patterns and probabilistic reasoning, and thus exhibit certain biases. Through RLHF, however, language models trained on general text data can be aligned with complex human values, making generative artificial intelligence more "humanized." It is precisely the feedback and tuning contributed by human wisdom that bring artificial intelligence closer to human intelligence. A minimal sketch of this feedback mechanism appears below.
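The following minimal sketch (assuming PyTorch; the embeddings and data are stand-ins) illustrates the core of this feedback mechanism: human annotators rank pairs of responses, a reward model learns to score the preferred one higher, and that score later steers the language model's generations.

```python
# Minimal sketch of the RLHF idea described above: human annotators compare two
# candidate responses, a reward model is trained to prefer the chosen one, and
# its scores are later used to steer the language model. PyTorch is assumed;
# the embeddings and data are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Maps a (prompt, response) embedding to a scalar preference score."""
    def __init__(self, dim=256):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, embedding):
        return self.scorer(embedding).squeeze(-1)

reward_model = RewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

# One batch of preference pairs: 'chosen' was ranked higher than 'rejected' by a human.
chosen_emb = torch.randn(8, 256)    # stand-in for embeddings of preferred responses
rejected_emb = torch.randn(8, 256)  # stand-in for embeddings of dispreferred responses

# Pairwise ranking loss: push the chosen score above the rejected score.
loss = -F.logsigmoid(reward_model(chosen_emb) - reward_model(rejected_emb)).mean()
loss.backward()
optimizer.step()
# In full RLHF, the trained reward model then scores fresh generations, and an
# RL algorithm (commonly PPO) updates the language model to raise those scores.
```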
Classroom teaching is likewise a complex, purposeful, directional, and ordered system of information transmission. Teaching feedback, a necessary part of the teaching process, allows teachers to adjust and optimize teaching strategies in time to fit students' learning behavior. Its core features are accuracy, specificity, guidance, motivation, timeliness, diversity, and interactivity. Teaching feedback and RLHF thus share a similar execution mechanism.
(6) Training Mode: Multimodal Input and Comprehensive Development
For effective and rich input, multimodal information is a prerequisite. By combining different types of data, large models can better understand and predict complex real-world problems. At present, most models are trained per modality, converting other modalities into language text and stitching the results together to approximate multimodality; the drawback is that deep, complex reasoning cannot take place within a genuinely multimodal space. Native multimodality goes further technically: it can process different forms of data (language, audio, visual), pre-trains on multiple modalities from the start, and uses additional multimodal data for fine-tuning to improve effectiveness, as sketched below.
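The sketch below (assuming PyTorch; the encoders and dimensions are illustrative placeholders) contrasts these two approaches: instead of flattening every modality into text, each modality is projected into a shared embedding space and a single transformer attends over the mixed sequence.

```python
# Minimal sketch of joint multimodal fusion. Rather than converting an image to
# a text caption and concatenating strings, each modality is projected into a
# shared embedding space and one transformer attends over the mixed sequence.
# Encoders and sizes are illustrative placeholders, assuming PyTorch.
import torch
import torch.nn as nn

d_model = 256
image_encoder = nn.Linear(1024, d_model)   # stand-in for a vision encoder's output projection
audio_encoder = nn.Linear(400, d_model)    # stand-in for an audio feature projection
text_embedding = nn.Embedding(30000, d_model)

fusion = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True),
    num_layers=2,
)

# One example: 16 image patch features, 10 audio frames, 12 text tokens.
image_tokens = image_encoder(torch.randn(1, 16, 1024))
audio_tokens = audio_encoder(torch.randn(1, 10, 400))
text_tokens = text_embedding(torch.randint(0, 30000, (1, 12)))

# Joint attention over the mixed sequence lets reasoning cross modality boundaries,
# instead of each modality being flattened to text first.
joint_sequence = torch.cat([image_tokens, audio_tokens, text_tokens], dim=1)
fused = fusion(joint_sequence)
print(fused.shape)   # torch.Size([1, 38, 256])
```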
As embodied learning theory in education holds, an effective learning environment that incorporates visual, auditory, tactile, and other multi-sensory information, with multimodal interaction among learners, technology, and environment, activates multiple brain regions and yields the best learning outcomes through deep learning. The training of large models faces a parallel choice of value orientation in its training mode: quality-oriented education or exam-oriented education. If single-dimensional, single-modal "problem-solving" drill is chosen, a large model's intelligence in certain areas may rise quickly in the short term but will soon hit a bottleneck. Choosing comprehensive development and multimodal "quality education" may slow iteration relative to the former, but the ceiling of intelligence will be higher, because general knowledge is the foundation of specialized knowledge: general cognitive abilities must develop before specialized cognitive abilities can, and the same holds for large models. The education sector must likewise guard against "high-score, low-ability" large models appearing in the application market.
4. Leveraging Artificial Intelligence for Educational Transformation
Artificial intelligence is not only a scientific issue but also an educational and social issue. If human civilization is to be inherited and developed, actively confronting artificial intelligence is a step we must take. Overall, however, there is a tendency to overestimate its short-term effects and underestimate its long-term effects, so we must act from the present, viewing and assessing the development of artificial intelligence objectively and rationally. The current third wave of artificial intelligence has been driven less by academia than by urgency from the business sector and market demand. For education, in essence, what matters is not a new technological breakthrough in artificial intelligence itself but the inevitable trend and strong demand arising from education's own digital transformation.
(1) The Impact of Artificial Intelligence on Education
In the long run, three aspects of artificial intelligence's impact on educational development deserve priority consideration:
First, value rationality. Today’s educators may not be able to accurately predict the complex intertwined factors affecting the future, especially the rapidly changing factor of artificial intelligence, which has led to deeper integration of collective wisdom, artificial intelligence, and social networks into decision-making in our lives. The capabilities of artificial intelligence primarily derive from the large-scale data it learns from humans, which contains key clues and facts that can help us solve problems, as well as biases, discrimination, hostility, and hatred present in human society. When artificial intelligence learns human data without ethical safety and moral framework constraints, it simultaneously learns human weaknesses. When artificial intelligence provides services to humanity, it subtly carries inherent biases. Therefore, consciously cultivating learners to form value judgments and abilities to adapt to future societies, enabling them to make independent judgments using firm value rationality regardless of complex and unpredictable circumstances, is crucial.
Second, ethics and morality. The construction of a social ethical and moral system for a world of highly developed machine intelligence must be emphasized. The roles that artificial intelligence large models may play in the future are mainly three: tool, partner, or adversary, and different social cultures take different positions on this. Japan's AI principles emphasize that future artificial intelligence may act as a quasi-member of society or even a human partner; if AI develops to that stage, it must abide by the ethical norms of human society and the norms established for artificial intelligence. In Western science fiction films and novels, by contrast, artificial intelligence often appears as humanity's enemy. What role artificial intelligence large models will actually play, how they can coexist harmoniously with humans and nature, and how they can better assist humanity should be considered first. Whether the mysteries of carbon-based life can be understood, and whether artificial intelligence agents ("silicon-based life") built on such principles could evolve into machines with autonomous values and life-like growth, should also become a focus of future artificial intelligence research. We should keep an open attitude, hold to the original intention that artificial intelligence serve the development of human society, and establish it within the framework of human ethical norms. At the same time, the human ethical and moral system must itself advance with changes in the form of civilization. Education's primary responsibility is to cultivate qualified citizens for the future society and to play an important role in building an ethical and moral system oriented toward an intelligent society.
Third, talent cultivation. In the future intelligent society, whether artificial intelligence agents can coexist harmoniously with humans, nature, and society depends not on artificial intelligence itself but on whether human cognition of, and attitudes toward, artificial intelligence can evolve quickly enough. Education must therefore shift toward cultivating learners' higher-order abilities such as innovative thinking. The future society will need large numbers of high-level talents capable of human-machine collaboration, for whom innovative thinking, computational thinking, and emotional capability will be key competitive advantages. To meet the new challenges of the artificial intelligence era, countries should reassess the value of the school education system and reconsider "what kind of people to cultivate" and "how to cultivate them." Compared with any previous period of history, there is today a greater need to highlight human value and consolidate human strength, resisting unease and fear while distinguishing humans from machines and from artificial intelligence. Facing an uncertain, post-truth world, education should not only consider what to teach but also help students escape the role of "tool people" and become "whole persons," stimulating their subjectivity and intrinsic motivation and cultivating independent thinking and sustainable learning. The integration of the "five educations" (moral, intellectual, physical, aesthetic, and labor education) and comprehensive development are closely bound up with human emotion; cultivating the social emotions that machines cannot possess is therefore a key content and goal of future education.
In the medium and short term, artificial intelligence brings six impacts on education. First, it affects cultivation goals. To cope with the long-term challenges of artificial intelligence, education should adjust its talent cultivation goals according to the needs of the future society, focusing on students' core competencies and cultivating the correct values, essential character, and key abilities needed for lifelong development and adaptation to social progress. Second, it affects learning methods. Artificial intelligence can help realize personalized learning paths, provide intelligent learning support, and, through virtual and augmented reality, create more realistic learning scenarios, for example simulating scientific experiments that cannot be presented in the real world. Third, it affects teaching methods. Through artificial intelligence, the long-standing dilemma between large-scale teaching and personalized instruction can be resolved in practice, promoting educational equity while improving quality and leading to better teaching and learning. Fourth, it affects teacher-student relationships. Teachers were once the academic authority in the classroom, but students using tools such as ChatGPT and Sora can now access knowledge instantly, sometimes beyond what teachers can provide. As the teacher-student relationship is no longer built solely around knowledge transfer, how teachers can better play guiding, motivating, and exemplary roles becomes a challenge. Fifth, it affects educational content. Content requiring mechanical memorization in textbooks will decrease significantly, leaving space for deep learning, cognitive innovation, and practical learning. Attention must also be paid to the potential ideological risks of general artificial intelligence: ideological biases embedded in pre-training data will subtly influence learners. Sixth, it affects educational management. The application of artificial intelligence in educational management is already relatively mature, making management more efficient, precise, and scientific, with many excellent cases and rich experience accumulated across the country. At the same time, the integrated application of educational management data needs continued exploration, data governance must improve, and data security supervision must be strengthened.
(2) How Education Can Actively Respond to the Challenges of Artificial Intelligence
Currently, the emergence of generative artificial intelligence has shifted the focus of technology's impact from human physical labor to intellectual labor, extending from the human body to human wisdom and consciousness. Ways of thinking, the defining characteristic of humanity as a subjective existence, will also be challenged. We must rethink education and transform it to promote the awakening of human consciousness and the enhancement of human capability, thereby preserving human value and freedom. Whether the data-driven approach of generative artificial intelligence is the optimal path remains to be confirmed. The inherent technical flaws and resource constraints of large models based on probabilistic reasoning make it futile to pursue ever more parameters and ever larger models alone. When the data-driven dividend is exhausted, will there be a third path? Will new research paradigms or technical routes emerge? We should maintain a questioning and rational attitude toward these questions.
Looking at the bigger picture while starting from concrete details, among the complex elements of the educational ecosystem the following three questions are the most urgent and meaningful to address. First, as China's traditional educational advantages are significantly weakened by artificial intelligence, what competencies and abilities should we emphasize in cultivating students? Second, as generative artificial intelligence evolves, how should we handle the new type of teacher-student relationship? Third, how has artificial intelligence changed the production and dissemination of knowledge, and what qualitative differences do teaching models show compared with the era of educational informatization?
1. Emphasizing the Cultivation of Students’ Higher-Order Thinking
In the era of artificial intelligence, the goals and modes of education shift from knowledge-based and subject-based to competency-based, with the acquisition of immediately applicable knowledge increasingly assisted by artificial intelligence. Students are exposed not only to massive amounts of information but also to generated content whose truth is hard to judge, which makes strengthening digital literacy and skills a fundamental competency for the future. If the information age required students to discover and solve problems, the artificial intelligence era requires them to pose problems, indeed to formulate high-quality, logical, open-ended questions. Asking good questions is the starting point of effective collaboration between humans and artificial intelligence. The content generated by generative artificial intelligence is currently at roughly the average level of human common sense; approaching or reaching peak levels requires good prompts, which themselves involve higher-order thinking such as comparison, analysis, application, transfer, synthesis, and evaluation, while lower-order thinking such as memorization, retrieval, and computation will gradually be taken over by artificial intelligence. The contrast is illustrated below.
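Purely as a hypothetical illustration, the two prompts below address the same topic; the first asks only for recall, while the second builds in the comparison, application, and evaluation that the text identifies as higher-order thinking.

```python
# Illustrative only: two hypothetical prompts for the same topic. The first asks
# the model to do the remembering; the second embeds the comparison, application,
# and evaluation work that the text above calls higher-order thinking.
low_order_prompt = "List the causes of the Industrial Revolution."

higher_order_prompt = (
    "Compare the roles of energy technology and financial institutions in the "
    "Industrial Revolution, argue which mattered more for a landlocked region, "
    "and identify one piece of evidence that would weaken your own conclusion."
)

for name, prompt in [("low-order", low_order_prompt), ("higher-order", higher_order_prompt)]:
    print(f"{name}: {prompt}\n")
```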
Technology enhances and extends certain human functions, which can weaken or atrophy others and breed intellectual laziness. Neuroscience and related experiments have repeatedly shown that technologies and tools shape the human brain over time, with synaptic connections rearranging according to our habits of thought. The internet era brought information overload, and generative artificial intelligence now produces knowledge constantly, yet human thinking may grow more superficial. The rich media and varied stimuli of the internet can excite the prefrontal cortex, but the brain regions responsible for deep thinking, such as the hippocampus, may not be activated in the process, encouraging intellectual laziness. Curiosity and the desire to explore need encouragement and reward, while "taking shortcuts" is a natural human inclination that may lead to collective cognitive decline. The significant threat posed by artificial intelligence today is not the replacement of human jobs but humans falling into the "trap" of its powerful functions, growing accustomed to machine-provided solutions and abandoning independent thinking. If humans become used to obtaining information effortlessly, ceasing to think and delegating thought entirely to machines and artificial intelligence, that will be the greatest threat to humanity.
Therefore, teachers need to return to the original intention of education, effectively employing interactive heuristic teaching methods, placing greater emphasis on question-and-answer interactions between teachers and students, and among students, focusing on the development of students’ thinking, emotions, and morals, rather than merely increasing efficiency in classroom teaching processes or expanding the volume of teaching content, thus avoiding the misuse that exacerbates education’s internal competition. This requires teachers to continuously enhance their digital literacy and skills, understanding the fundamental principles of content generation and output in generative artificial intelligence, and treating and applying it objectively and rationally in educational practices.
When we speak today of "interactive heuristic teaching," it represents a return of teaching method in the intelligent era and an innovative practice drawing on the educational philosophies of both Eastern and Western cultures. Socrates advocated a question-and-answer method: instead of telling students a piece of knowledge directly, he first asked them questions; if they answered incorrectly, he did not correct them directly but posed further questions to guide their thinking, leading them step by step to the correct conclusion. Socrates called this "midwifery," teaching that helps correct thoughts be born. Confucius stated in The Analects, "I do not enlighten a student who is not striving to understand, nor draw out one who is not struggling to express himself. If I present one corner and he cannot infer the other three, I do not repeat the lesson." This captures his thinking and method of heuristic teaching. Zhu Xi, in his annotations to The Analects, explained that "fen" (愤) is the state of striving to understand without yet succeeding, and "fei" (悱) the state of wanting to speak but being unable to; "qi" (启) means to open up the learner's mind, and "fa" (发) to help the learner find words. In Zhu Xi's view, fen and fei describe the learner's cognitive state, while qi and fa are the teacher's methods of opening the mind and drawing out expression. In short, the heuristic teaching found in excellent traditional Chinese culture emphasizes question-and-answer teaching grounded in students' active thinking.
When generative artificial intelligence enters the educational scene, the "interactive heuristic teaching method," integrating the essence of Eastern and Western traditions, places greater emphasis on "enlightenment" and "interaction," inspiring students to engage in deep learning through effective two-way questioning between teachers and students. Its characteristics are problematization, strong interaction, and strong feedback: only questions that genuinely provoke thought, together with timely positive feedback, can stimulate the cerebral cortex and promote brain activity. When students reach an expected goal, the brain's reward system is activated, releasing dopamine, norepinephrine, and endorphins, so that students feel pleasure and satisfaction and autonomous learning, focused purely on learning rather than material rewards or utilitarian goals, can genuinely occur. Specifically, this teaching approach has three characteristics: first, the teacher's enlightening work must be grounded in students' active thinking, which can be reflected in students posing questions themselves; second, the traditional classroom pattern in which teachers mainly pose the questions must shift to multi-agent, multi-turn question-and-answer interaction among teachers, students, and peers; third, the teacher's instructional design goals must be reasonable, following the principle of the "zone of proximal development," with emphasis on timely positive feedback. The interactive heuristic teaching method is not simply synonymous with any single teaching method; it is a teaching philosophy, a guiding idea that can be realized as one teaching method or integrate several.
2. Focusing on Building New Teacher-Student Relationships
How teachers adapt to their roles in new teaching relationships, conduct human-machine collaborative teaching, and address digital ethics issues between teachers and students are all crucial to constructing new teacher-student relationships. By deconstructing the quality structure of excellent teachers and endowing machines with those qualities through pre-trained models, virtual teachers "homogeneous" with outstanding human teachers can be created. The traditional "teacher-centered, knowledge-centered" relationship will weaken or even disappear, while a new "student-centered" relationship gradually emerges. The binary, unidirectional relationship of transmission will shift to a multidirectional, interactive "teacher-machine-student" relationship, forming a new educational ecology. The reason for treating the "machine" as a new subject lies in its continuously evolving intelligence and interactivity, which move beyond the mechanized, pre-programmed character of traditional machine-assisted teaching.
Teachers will shift from "gatekeepers of knowledge" to "choreographers of learning." First, they should place greater emphasis on guiding students' emotions, attitudes, and values. The future teacher-student relationship will require more emotional interaction: human teachers must learn to coexist with machines and make good use of "machine teachers," bringing greater warmth and empathy to enter students' inner worlds and turn education into an "art." Second, teachers will increasingly become knowledge producers, learning facilitators, and growth guides, taking on a mentoring role: helping students find the right learning goals, scientific methods, and effective paths, reminding or constraining them to form self-disciplined habits, and providing emotional support for their comprehensive practice and social experience. In this way, collaboration between human teachers and "machine teachers" can unfold fully on the basis of their respective strengths. The strengths of human teachers lie mainly in supporting students' social-emotional abilities, shaping their worldview, outlook on life, and values, and integrating knowledge across fields. Intelligent technology, for its part, possesses advantages that earlier information technologies lacked, addressing the difficulty of recognizing learners' differentiated needs and implicit cognitive obstacles and accommodating diverse learning paths, thereby making precision teaching achievable. The strengths of "machine teachers" thus lie mainly in vast knowledge reserves, near-unlimited computing power, and memory of problem-solving paradigms, allowing them to treat each student with personalized patience in interaction.
3. Innovating and Exploring Changes in Teaching Models in the Intelligent Era
Understanding the current development of artificial intelligence technology and its impact on education requires in-depth research from the educational community. Current technology is not yet mature enough to be applied in teaching systematically, comprehensively, and accurately, and overemphasizing its application in the micro-environments of education may be premature. Teachers must first recognize the limits of the current technology: compared with human intelligence, generative artificial intelligence lacks a sense of its own "ability boundaries," and for questions it cannot answer it still produces probability-based responses that may contain errors. Teachers and students alike need to use artificial intelligence safely, effectively, and appropriately, and education should prepare every student to make good use of generative artificial intelligence and whatever technologies follow it. In this context, teachers should guide students toward an initial understanding and application of generative artificial intelligence and emphasize rational judgment when engaging with new technologies.
Furthermore, artificial intelligence in education differs qualitatively from educational informatization. In teaching, chain-of-thought dialogue between teachers and generative artificial intelligence differs fundamentally from past computer-assisted instruction and digital resource platforms: it involves qualitative differences in educational subjects, resource supply, content production, and modes of interaction. It is not merely an efficiency gain in one link of the educational process or an enrichment of one kind of resource, but a systematic leap from educational informatization toward educational digitalization and intelligence, driving innovation at the underlying logic of education and a better realization of its essence. For instance, teachers can use generative artificial intelligence to create the graphic stories or videos needed for exploratory activities, improving their instructional design and organization, without becoming dependent on the new technology as their primary teaching tool. As a process-oriented path and an important driving force in deepening educational digital transformation, artificial intelligence should accelerate the transformation and application of the "five new" framework of education in the digital age: embracing equitable, inclusive, sustainable, and lifelong educational philosophies; shaping a high-quality, personalized lifelong learning system in which "everyone learns, everywhere is a place to learn, and learning can happen at any time"; constructing a data-driven teaching model of large-scale personalized instruction; innovating educational content centered on competencies and qualities; and promoting refined management, precise services, and scientific decision-making in educational governance. By breaking existing path dependencies with intelligent technology, we can systematically empower educational transformation and achieve high-quality development of education.

This article was published in the Journal of Educational Technology, 2024, Issue 8. For reprints, please contact the editorial office of the Journal of Educational Technology (official email: [email protected]).

Please cite as follows: Wang Xuenan, Li Yongzhi. Artificial Intelligence and Educational Transformation. Journal of Educational Technology, 2024, 45(8): 13-21.

Editor: Zhang Rong

Proofreader: Gao Xiaoxu

Reviewer: Guo Jiong

[References]

[1] Nick. A Brief History of Artificial Intelligence. 2nd ed. Beijing: People’s Posts and Telecommunications Publishing House, 2021: 7-57.

[2] Liu Qian, Chen Jianqiang. A New Generation of Artificial Intelligence: Transitioning from “Perceptual Intelligence” to “Cognitive Intelligence.” Guangming Daily, 2021-05-25 (9).

[3] Li F F, Etchemendy J. No, today’s AI isn’t sentient. Here’s how we know. Time, 2024-05-22 [2024-06-22]. https://time.com/collection/time100-voices/6980134/ai-llm-not-sentient/.

[4] Liu Yang, Lin Jing. Multimodal Large Models: A New Generation of Artificial Intelligence Technology Paradigm. Beijing: Electronics Industry Press: 269-285.

[5] Lü Wei. Gathering “Sparks” to Activate Development Momentum: Transcript of the Collective Interview Activity of the Third Session of the 14th National Committee of the Chinese People’s Political Consultative Conference. People’s Political Consultative Daily, 2024-03-11 (5).

[6] Zhu Songchun. The Origin, Evolution, and Trends of Intelligent Disciplines: Exploration and Practice of Intelligent Disciplines at Peking University. University and Discipline, 2022, 3(4): 17-26.

[7] Zeng Yi, Zhang Qian, Zhao Feifei, et al. From Cognitive Brain Simulation to Brain-like Artificial Intelligence. Artificial Intelligence, 2022, 9(6): 28-40.

[8] Patel D, Wong G. GPT-4 architecture, infrastructure, training dataset, costs, vision, moe. SemiAnalysis, 2023-06-10 [2024-06-17]. https://www.semianalysis.com/p/gpt-4-architecture-infrastructure.

[9] Jeff Hawkins, Sandra Blakeslee. The Future of Artificial Intelligence. He Junjie, Li Ruozhi, Yang Qian, trans. Xi’an: Shaanxi Science and Technology Press, 2001: 57-79.

[10] John von Neumann. Computers and the Human Brain. Gan Ziyu, trans. Beijing: Commercial Press, 2011: 2.

[11] Paul Levinson. Digital McLuhan: A Guide to the New Millennium. He Daokuan, trans. Beijing: Social Sciences Academic Press, 2001: 310.

[12] Ye Feisheng, Xu Tongqiang. Outline of Linguistics. 3rd ed. Beijing: Peking University Press, 1997: 7-39.

[13] Polanyi M. The Study of Man. London: Routledge & Kegan Paul, 1957: 12.

[14] Wittgenstein. Tractatus Logico-Philosophicus. Han Linhe, trans. Beijing: Commercial Press, 2013: 30.

[15] Cao Si, Luo Zubing. The Dilemmas, Limitations, and Rational Approaches of Artificial Intelligence Applications in Teaching. Journal of Educational Technology, 2024, 45(4): 88-95.

[16] Kaplan J, McCandlish S, Henighan T, et al. Scaling laws for neural language models. 2020-01-23 [2024-06-28]. https://doi.org/10.48550/arXiv.2001.08361.

[17] Hoffmann J, Borgeaud S, Mensch A, et al. Training compute-optimal large language models. 2022-03-29 [2024-06-20]. https://doi.org/10.48550/arXiv.2203.15556.

[18] Peng Haoxiang. Main Characteristics of Effective Teaching Feedback. Chinese Journal of Education, 2009(4): 54-57.

[19] Chen Xing, Wang Guoguang. The Research History, Theoretical Development, and Technological Shift of International Embodied Learning. Modern Distance Education Research, 2019, 31(6): 78-88, 111.

[20] Zeng Yi. High-end Artificial Intelligence Talents Should Have a Sense of Social Responsibility and Global Vision. China Science Daily, 2023-06-07 (3).

[21] Li Yongzhi. How Education Can Face the Challenges of Artificial Intelligence. China Education Daily, 2024-03-25 (4).

[22] Cabinet Office. AI Strategy 2022. 2022-04-20 [2024-07-02]. https://www8.cao.go.jp/cstp/tougosenryaku/11kai/siryo2_3.pdf.

[23] Andreas Schleicher. Beyond PISA: How to Build a 21st Century School System. Xu Jinjie, trans. Shanghai: Shanghai Education Press, 2018: 18, 25.

[24] Cheng Shangrong. Emotional Education Stimulates Growth Motivation. People’s Daily, 2021-08-22 (5).

[25] Nicholas Carr. The Shallows: What the Internet Is Doing to Our Brains. Liu Chunyi, trans. Beijing: CITIC Press, 2010: 157-159.

[26] Li Y Z. Professional Community and
