Exploring Global AI Trends at GFEAI2025:
Thailand Emerges as Regional Leader in AI Ethics
Bangkok has captured the world’s attention as it hosts the 3rd UNESCO Global Forum on the Ethics of AI 2025 (GFEAI2025), a historic gathering aimed at collective reflection and action. The forum raises fundamental questions that echo global concerns: “As AI advances faster than regulation, where should proper ethics lie? And who will guide its direction?” This is not merely a venue for policy-level knowledge exchange; it is a landmark assembly of ministers, national representatives, and experts from 88 countries, united in their pursuit of solutions to balance groundbreaking innovation with responsibility to humanity. This article takes a deep dive into the global AI trends emerging from the forum, and highlights the key outcomes driven by world leaders—alongside Thailand’s prominent role as a driving force—at GFEAI2025.
Prime Minister and UNESCO Advance AIGPC, Championing ‘Human-Centered AI Ethics’

In a packed plenary hall, delegates from around the world echoed a shared conviction: the development of AI must not be driven by mere capability, but by a critical question—will AI truly enhance the quality of life for all groups in society? As the host nation, Thailand demonstrated strong and unequivocal commitment. Prime Minister Paetongtarn Shinawatra declared that Thailand aims to become a leader and regional hub for AI ethics. This ambition is being advanced through the National Strategy on Artificial Intelligence under the National AI Committee (NAIC), established by the government to drive AI development and accelerate implementation of the national strategy. With a goal of generating over 4 billion baht in value, the strategy includes investment in infrastructure and capacity-building to strengthen both human capital and the AI ecosystem. The vision is clear: to harness AI as a transformative force for improving lives across society.

On the same stage, Ms. Audrey Azoulay, Director-General of UNESCO, reaffirmed the importance of the UNESCO Recommendation on the Ethics of Artificial Intelligence, endorsed unanimously by 193 Member States. She emphasized that this global agreement should be translated into national policies—serving as a key framework for all countries, especially developing nations, to govern AI in a concrete, human rights–centered, and inclusive manner that leaves no one behind.

Following this, the Prime Minister held a bilateral discussion with UNESCO Director-General Audrey Azoulay to initiate the establishment of the AI Governance Practice Center (AIGPC)—a regional Category 2 Centre under the auspices of UNESCO. The AIGPC will serve as a hub for advancing the implementation of the UNESCO Recommendation on the Ethics of Artificial Intelligence, aiming to help countries across the Asia-Pacific adopt ethical AI practices and address emerging threats such as disinformation.
The Center will also function as a regional knowledge base, offering training programs and promoting AI standards. It will accelerate tangible AI development in key sectors including agriculture, healthcare, education, and the fight against cybercrime. UNESCO will support the initiative by providing expertise and technological cooperation.
At the national level, Thailand is committed to fast-tracking AI-related efforts—enhancing Thai digital skills, attracting future-facing industries, establishing national data centers, and integrating AI education into formal curricula. The overarching goal is clear: to ensure that AI becomes a force for improving quality of life, reducing inequality, and delivering meaningful benefits for all citizens.
How Does the World View AI? Key Global Trends to Watch from GFEAI2025
One of the most prominent themes emerging from GFEAI2025 is the growing global focus on policy, ethics, and governance. This year’s forum showcased a powerful spirit of collaboration among countries and organizations—despite differences in economic status, social context, and available resources, all were united by a common goal: to build AI that leaves no one behind. There is a growing awareness among nations that the critical question today is no longer simply “Do we have AI?”, but rather, “How ready are we to use AI ethically?” This marks a shift toward inclusive, responsible innovation that prioritizes the well-being of all people.

Several UNESCO Member States, including Thailand, have begun adopting the UNESCO Readiness Assessment Methodology (RAM) as a guiding framework to analyze their strengths and weaknesses in AI readiness and to inform policy development. Despite their technological advancement, developed countries like the Netherlands acknowledge ongoing challenges—particularly in ensuring inclusive public participation and effective coordination among agencies. In response, UNESCO, in collaboration with the European Commission and the Dutch Authority for Digital Infrastructure, has launched the Global Network of AI Supervising Authorities. This network aims to foster international cooperation in developing AI policies and regulatory frameworks that align with global standards, ensuring that AI systems are trustworthy, transparent, and fair.

A shared proposal has emerged from the forum, emphasizing that sound AI policies must be adaptable to rapid technological change, inclusive of voices across all sectors of society—especially those often overlooked, such as children, persons with disabilities, and marginalized groups—and equipped with mechanisms for continuous monitoring and evaluation.

Beyond policy alone, the world is now actively fostering a culture of responsible AI, grounded in the belief that AI can no longer be developed under the outdated principle of “move fast and break things.” Instead, the call is to “move fast and save things,” embedding ethics from the outset and integrating ethical principles into every stage of AI design and development.

Major global private-sector companies—including Microsoft, SAP, Infosys, LG, and Universal Music Group—concurred that “human-in-the-loop” is the cornerstone of every AI system, as critical decisions must ultimately remain in human hands. These companies are actively working to eliminate bias in datasets, train employees to understand how AI impacts job roles, and protect intellectual property. They also proposed the introduction of economic incentives for companies that develop AI ethically, so that doing good is not only the right thing but also a sound business strategy.
While many countries have yet to establish clear legal frameworks for AI, there is growing consensus on the urgent need to complement ethical guidelines with robust laws on data protection, cybersecurity, and comprehensive AI regulations that safeguard rights, safety, and fairness in alignment with ethical standards. The concept of the AI Sandbox was also highlighted as a crucial tool for fostering mutual understanding between technology developers and regulators. More than just a testing ground for innovation, the sandbox represents a space for co-learning and collaborative governance, governed by new rules designed to meet emerging challenges.
Shifting focus to human rights and inclusion, particularly the issue of children and AI, experts from UNICEF and several countries raised strong concerns: children are becoming the silent victims of AI—long before they are even born. Parents use AI to monitor pregnancies; newborns are exposed to AI-curated content; school-age children rely on AI for homework. Despite AI becoming a daily presence, few have asked the fundamental question: Is AI truly designed with children in mind?
While AI undoubtedly improves access to information, there are serious concerns that it may undermine children’s critical thinking, curiosity, and capacity for independent learning. Even more alarming, AI has become a new gateway through which children can unknowingly fall prey to deepfakes, explicit content, and online exploitation. As such, the global community strongly agrees that promoting AI literacy for children and families, and crafting child-centered AI policies from the outset, is essential to safeguarding future generations in the age of artificial intelligence.

In addition, many countries emphasized that AI must not only be intelligent, but also respect human rights and create opportunities for all groups, particularly persons with disabilities and vulnerable populations. While AI may offer convenience for the general public, for these groups it can be transformative, turning the impossible into the possible and empowering those long excluded. Assistive technologies such as screen readers, real-time translation, and navigation tools are examples of how AI can enhance accessibility. The forum proposed that AI must be “inclusive by design,” meaning inclusivity should be embedded from day one, rather than retrofitted as an optional feature. Persons with disabilities must also be actively involved in AI development—not merely as users, but as co-creators and decision-makers. This approach aligns with the United Nations Convention on the Rights of Persons with Disabilities (CRPD), which mandates the full participation of persons with disabilities in shaping policies and systems that affect their lives. It also echoes UNESCO’s guiding principle that no one should be left behind, from policy to development and deployment.

The forum recommended three concrete actions to close the gap: investing in public-interest AI that is accessible to all; enforcing clear legal frameworks on rights and accessibility; and establishing regulatory bodies that include persons with disabilities in rule-setting processes.
While many countries have begun applying UNESCO’s RAM framework, some developing nations face limitations due to a lack of specialized personnel to conduct in-depth analysis, leaving assessments incomplete or confined to paper. In contrast, Saudi Arabia has launched the “Women Elevate” program to train 25,000 women in AI, aiming to close the gender gap in the tech sector. In Thailand, the Electronic Transactions Development Agency (ETDA), under the Ministry of Digital Economy and Society, is advancing the work of the AI Governance Center (AIGC) by developing readiness assessment tools and training curricula for agencies nationwide—building capacities that go beyond basic technological adoption to include ethical AI understanding.
Meanwhile, privacy and transparency were also highlighted as critical concerns. As the most sensitive forms of personal data, such as neurodata and day-to-day digital life records, become new targets for technological intrusion, the ethical implications grow. Through innovations like Neuro-AI, which merges neurotechnology with artificial intelligence, AI systems can now interpret brain activity: from understanding human thoughts and supporting mental health diagnosis, to aiding treatment for neurological disorders. While such technologies may revolutionize healthcare, offering hope to patients with paralysis or brain conditions, they also pose profound risks. Without robust ethical safeguards, brain data—reflecting thoughts, cognitive abilities, and mental states—could be misused for manipulation or control.
To mitigate these risks, the forum recommended several safeguards: ensuring anonymized data collection, restricting brain data entry into digital systems, adopting sunsetting clauses in legislation (laws with built-in expiration dates to allow for technological adaptation), and applying dynamic regulation that is flexible and responsive to change. These approaches were among the most urgent and forward-looking proposals to emerge from the forum.

The issue of AI and online fraud has emerged as a serious concern for many countries, including Thailand. From call center scams to cybercriminals using AI-generated deepfake audio and video to deceive the public, the threats are growing in both scale and complexity. Criminals increasingly exploit digital platforms to commit fraud, and many of these operations are based across borders—where varying legal systems and jurisdictions hinder effective prosecution and law enforcement. However, AI is also proving to be a powerful tool for governments in the fight against online deception. It can be deployed to detect fraudulent patterns, analyze the behavior of scam call operations, and scan for fake news with speed and accuracy far beyond human capabilities. This duality highlights the urgent need for global cooperation and technological safeguards, as AI becomes both a challenge and a solution in the battle against cybercrime.

In the case of Thailand, Professor Wisit Wisitsora-at, Permanent Secretary of the Ministry of Digital Economy and Society, revealed that the country has implemented a proactive and sustained approach to tackling online threats. This includes the establishment of a dedicated task force designed to respond to cyber threats with speed and agility, and the deployment of data-sharing technologies between banks and government agencies to disrupt the financial channels of criminal networks. Thailand also utilizes the Traffic Light Protocol (TLP) to manage information-sharing rights and prevent the leakage of sensitive data. Furthermore, AI systems are used to screen and detect over 3,000 instances of fake news per day, helping to counter online misinformation at scale. This integrated approach has drawn interest from several ASEAN countries as a potential model for replication, serving as compelling evidence of Thailand’s serious and coordinated efforts to combat cybercrime and digital fraud.
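The Traffic Light Protocol mentioned above is a standardized labeling scheme (TLP 2.0, maintained by FIRST) that tells recipients of threat intelligence how widely they may redistribute it. As an illustrative sketch only, with hypothetical helper names that belong to no official API, the labels map to sharing scopes roughly as follows:

```python
# Illustrative sketch of TLP 2.0 sharing rules (per FIRST).
# Function and variable names here are hypothetical, for explanation only.

# Audiences ordered from narrowest to broadest disclosure.
AUDIENCES = [
    "named_recipients",          # specific individuals only
    "own_organization",          # the recipient's organization
    "organization_and_clients",  # organization plus its clients
    "community",                 # the wider community, non-public channels
    "public",                    # no restriction
]

# Broadest audience each TLP 2.0 label permits.
TLP_LIMITS = {
    "TLP:RED": "named_recipients",
    "TLP:AMBER+STRICT": "own_organization",
    "TLP:AMBER": "organization_and_clients",
    "TLP:GREEN": "community",
    "TLP:CLEAR": "public",
}

def may_share(label: str, audience: str) -> bool:
    """Return True if information carrying this TLP label may be
    shared with the given audience."""
    limit = TLP_LIMITS[label]
    return AUDIENCES.index(audience) <= AUDIENCES.index(limit)

print(may_share("TLP:AMBER", "own_organization"))  # True: within the org is allowed
print(may_share("TLP:GREEN", "public"))            # False: GREEN stops short of public release
```

In practice, such labels let banks and government agencies exchange fraud indicators quickly while keeping sensitive case data out of public channels.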

Another issue gaining increasing global attention is the environmental impact of AI. While many view AI as a tool to help analyze climate change trends or manage forest ecosystems, this forum underscored a critical concern: training large-scale AI models consumes enormous amounts of energy, quietly becoming a significant source of carbon emissions that contradicts global environmental goals. It is projected that by 2026, electricity consumption from AI data centers will reach 1,000 terawatt-hours, equivalent to the combined power demand of Japan and Germany. As such, the forum emphasized the urgent need to develop “Green AI”: AI systems designed with environmental sustainability in mind, considering the full lifecycle of technology, including energy use, water consumption, mineral extraction, and e-waste management. Simultaneously, there is a strong call to establish legally enforceable frameworks, such as embedding environmental responsibilities into AI regulations, applying energy laws to govern infrastructure, and expanding ESG standards to include technology—forming what some refer to as ESGT (Environmental, Social, Governance, and Technology). These measures aim to ensure shared accountability across all sectors.
Many countries acknowledged that while they have begun investing in AI, they still lack the legal structures, institutional support, or skilled personnel truly equipped to manage it. In this context, the UNESCO Readiness Assessment Methodology (RAM) was once again recognized as a crucial “compass” promoted at GFEAI2025—encouraging each country to assess its own strengths and weaknesses in order to design AI policies that leave no one behind and uphold human rights throughout.
GFEAI2025: A New Chapter for AI in the World and Thailand
Everything that unfolded at GFEAI2025 serves as the clearest warning yet: the world must urgently define the direction of AI—before AI begins to define the future for us. No country can tackle this challenge alone. This forum marks the beginning of a new era of global collaboration and shared understanding, grounded in the recognition that the development of AI must move forward together, guided by ethical principles embraced by all nations.

For Thailand, this is not merely an opportunity to demonstrate its preparedness in terms of legal frameworks, infrastructure, and international cooperation, but a significant milestone that reinforces its role as the regional center for AI ethics. Through the establishment of the AI Governance Practice Center (AIGPC), Thailand has affirmed its readiness to play a leading role in shaping ethical standards for artificial intelligence that uphold transparency, fairness, and respect for human rights. This initiative reflects Thailand’s commitment to working in concert with the international community to ensure that AI is developed equitably and sustainably, for the benefit of all humankind. Further updates and official outcomes from the forum can be followed via ETDA Thailand’s official page.
ETDA Corporate Communications Team
Email: prteam@etda.or.th
Rattanaporn (Ing-on) +66 95 506 4114
Krit (Kritchy) +66 84 186 4828
Thitiya (Mod) +66 84 206 9669

