Introduction
The rapid development and impact of foundational artificial intelligence (AI) models such as GPT-4 over the last year have led to calls for more robust and comprehensive forms of international AI governance. These calls reflect concerns about the risks posed by these powerful new AI models, as well as the goal of expanding access to the economic, scientific, and innovative potential they can deliver. International AI governance is currently being discussed across a range of international forums and organizations, including the Group of Seven (G7), the Organisation for Economic Co-operation and Development (OECD), and the United Nations (UN). There are various possible pathways along which international AI governance could develop. Building on existing international forums to create a distributed and iterative approach is the most practical way of addressing AI governance needs in a timely manner, while also navigating the impact of geopolitical competition with China.
What is foundational AI?
Foundational models include forms of generative AI built on large deep learning neural networks that are able to perform a wide variety of general tasks, such as understanding language, generating text, and conversing in natural language. Foundational AI models are also the basis for developing more specialized AI applications, and in this sense are foundational: they increasingly form the building blocks for a range of other AI models and use cases. Foundational models are also increasingly powerful and costly to develop. According to OpenAI, the amount of compute used in the largest AI training runs has doubled every 3.4 months since 2012.1
The AI opportunity
Foundational AI could drive a new wave of productivity and economic growth. According to Goldman Sachs, generative AI could raise global gross domestic product by 7% and lift productivity growth by 1.5 percentage points over 10 years.2 McKinsey found that generative AI could add US$2.6-US$4.4 trillion annually across the 63 use cases it analyzed, with 75% of that value derived from customer operations, marketing and sales, software engineering, and R&D.3 Where AI improves worker and firm productivity, this should lead to more trade as firms become more competitive.4 For example, AI can help firms analyze data to better forecast demand, optimize production and logistics, and make more informed decisions about pricing, inventory levels, and market trends.5 AI can also be used to improve the efficiency of global value chains, including by managing inventory and anticipating changes in demand. Foundational AI models such as PaLM 2 – Google’s generative AI model – offer multilingual proficiency and translation capabilities in over 100 languages, reducing language as a barrier to trade.
Use of foundational AI is also likely to transform companies, change business models, and deeply affect jobs and the future of work. The impact on jobs will likely include some augmentation, where AI supplements and improves human decision-making. Foundational AI will also create demand for new jobs and skills, while other jobs will be automated. When it comes to automation, foundational AI models may also affect more complex white-collar jobs, such as those of clinical laboratory technicians and chemical engineers.6 Yet the economist David Autor argues that foundational AI could also create new opportunities for lower-skilled workers to move into middle-class jobs, using AI to help these workers strengthen their decision-making and capacity for judgement.7
Risks of foundational AI
Foundational AI models present a range of significant economic and trade opportunities. Realizing these upsides, however, will also require addressing the risk of harm from AI. For instance, foundational AI models could generate more misinformation and disinformation, exacerbating existing challenges from online platforms. Foundational AI may also introduce new risks, such as exclusion where foundational AI is unavailable in some languages. It could also increase harm to privacy and security, whether through more effective phishing or targeted manipulation. The black-box nature of foundational AI systems makes it difficult to understand how models reach particular results, presenting challenges for monitoring and evaluating AI risks and for assessing compliance with AI regulation.
Domestic AI governance is proceeding apace
AI governance is needed to underpin broad-based uptake of AI by governments, businesses, and households. Since the release of ChatGPT in late 2022, there has been rapid development of new AI policies and regulation.8 Countries are developing diverse approaches to AI governance that include new AI regulation and policies, as well as reliance on existing regulation in areas such as product liability, food safety, and transportation. For example, the European Union’s AI Act is a horizontal approach to AI regulation that also relies on existing sectoral expertise in EU regulatory agencies. In the US, the Biden administration is focused on giving federal agencies the resources and capacity to regulate AI and to use AI to improve their own governance and innovation capacities, and on requiring all federal agencies to support and guide AI development through the government procurement process.9
In Asia, a varied approach to AI governance is emerging, with a preference at this stage for AI policy and guidance instead of AI-specific regulation. For example, Indonesia has a National Strategy for Artificial Intelligence 2020-2045, Malaysia’s Artificial Intelligence Roadmap 2021-2025 guides development of its AI capabilities, and Singapore has a national AI strategy focused on building and using AI across government and industry while also supporting AI research and innovation.10 Japan has no AI-specific regulation, instead relying on existing privacy laws and on guidance for companies to implement AI principles and develop contracts on the use of AI and data.
Why international AI governance?
Calls for international AI governance need to demonstrate why domestic AI regulation alone cannot be effective. There are at least three key reasons why international AI governance is needed.
- International cooperation is needed to update and develop global AI ethical principles that can guide development of responsible and trustworthy AI in the age of foundational AI models.
- International cooperation is needed to prevent domestic AI regulation from becoming a barrier to AI development and use.
- The cost of AI computing, including access to software tools and curated data sets, will require international cooperation to make foundational AI globally available and useful for developers and businesses.
Current landscape of international AI governance
There is a range of international bodies and forums where AI governance is being discussed. Key ones are the G7 with its Hiroshima process, the OECD, the Group of 20 (G20), and various UN bodies, including the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the UN Secretary-General’s High-level Advisory Body on AI.11 The Association of Southeast Asian Nations (ASEAN) appears set to deepen cooperation on AI with the release of the ASEAN Guide on AI Governance and Ethics, and the successful conclusion of ongoing negotiations on its Digital Economy Framework Agreement (DEFA) could also support international cooperation on AI. Indeed, trade agreements in the Asia-Pacific are another area where international AI governance is being developed.12 For instance, the Digital Economy Partnership Agreement (DEPA) and the Australia-Singapore Digital Economy Agreement include specific AI commitments. Cooperation on AI governance is also being discussed bilaterally, including in the EU-US Trade and Technology Council (TTC),13 between the US and Japan,14 and under the US-India initiative on Critical and Emerging Technology,15 to name a few. The Quadrilateral Security Dialogue, comprising the US, India, Japan, and Australia, is also a forum for discussing emerging technologies, including AI. China is advancing its vision for AI governance through the Belt and Road Forum for International Cooperation, the accompanying Digital Silk Road, and its Global AI Governance Initiative.
Recently, a number of governments, including the UK, the US, Canada, and Japan, have announced the establishment of AI Safety Institutes. This list could expand following the AI Safety Summits in Korea in May and France later this year. While these institutes are domestically focused, international cooperation is already developing among some of them, including a memorandum of understanding between the US and UK Safety Institutes, and further cooperation among AI Safety Institutes is likely.16 These developments underscore another key dynamic in developing AI governance: the role of domestic AI regulation and policy in driving international AI governance. This bottom-up approach was also evident last year, when the Biden administration’s success in securing voluntary commitments from Big Tech companies on AI safety in July 2023 formed the basis for the G7 Code of Conduct to promote AI safety.17
What’s next for AI governance
Whether there should be a single international organization responsible for AI governance or a more decentralized approach remains up for debate. My colleagues in the Forum for Cooperation on AI (FCAI) and I have argued that international AI governance should be iterative and decentralized, rather than being centralized in a multilateral body such as the UN.18 This approach is a better fit functionally and is more attuned to the realities of today’s geopolitics.
Developing effective international AI governance will also need to grapple with how foundational AI differs from previous technologies. AI is a general-purpose technology that is also dual-use, with civilian and military applications. As a result, other international governance arrangements, such as the International Atomic Energy Agency’s role in managing the risks and harnessing the opportunities of nuclear power, have limited applicability to foundational AI. The cost of building foundational AI models and their concentration among a few governments and large private tech companies also present new governance challenges, including securing access to the information needed to understand these models’ capabilities in order to address risk, and determining how to expand access to the benefits. In addition, the pace at which AI model capabilities are developing will at minimum require a very agile and iterative approach to governance that can sidestep much of the bureaucracy and politics that often plague multilateral institutions. The key role of the private sector in developing foundational AI models also underscores the need for a multi-stakeholder approach to governance.
These features of foundational AI point to the need for a range of tailored forms of international cooperation. For instance, addressing risk from foundational models could be done with relatively few players. In contrast, expanding AI opportunities will likely require broader-based forms of international cooperation. A variable geometry approach to international AI governance is already emerging across the G7, G20, OECD, UN, ASEAN, and bilaterally. A key question, then, is whether these forums collectively are addressing AI governance needs, whether their memberships and forms of participation are appropriate, and, if not, what else is needed to strengthen this iterative and networked approach.
Finally, geopolitical competition on AI between the West and China (but also Russia and Iran) will limit the scope for cooperation in some areas of AI governance. This is why the G20, among others, has made only limited progress so far in advancing international AI governance, and why the UN and other larger multilateral gatherings will be able to take AI governance only so far. This is not to suggest, however, that ways to cooperate with China on AI governance should not be pursued. They clearly should. But it might be best to start on discrete issues where interests should align, such as finding common ground on the use of AI in command-and-control systems for nuclear weapons. China’s participation in the UK’s Bletchley AI Safety Summit last year suggests that there may also be ways to expand cooperation with China on addressing AI risk.
***
[1] https://openai.com/index/ai-and-compute
[2] Goldman Sachs, “The generative world order: AI, geopolitics, and power,” https://www.goldmansachs.com/intelligence/pages/the-generative-world-order-ai-geopolitics-and-power.html
[3] McKinsey, The economic potential of generative AI: The next productivity frontier https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier#introduction
[4] Marc J. Melitz and Stephen J. Redding, “Heterogeneous Firms and Trade,” Handbook of International Economics, 4th ed., 2014, 1-54 (Elsevier); Martin N. Baily, Erik Brynjolfsson, and Anton Korinek, “Machines of mind: The case for an AI-powered productivity boom,” Brookings, May 10, 2023, https://www.brookings.edu/articles/machines-of-mind-the-case-for-an-ai-powered-productivity-boom/
[5] Joshua P. Meltzer, “The impact of artificial intelligence on international trade,” Brookings
[6] Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). “GPTs are GPTs: An early look at the labor market impact potential of large language models.” Working paper, March 21. arXiv:2303.10130. Accessed 18 September 2023.
[7] David Autor, “AI Could Actually Help Rebuild the Middle Class,” Noema Magazine, February 12, 2024
[8] OECD AI Policy Observatory
[9] US Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, October 30, 2023 https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
[10] Singapore’s National AI Strategy https://www.smartnation.gov.sg/nais/
[11] https://www.un.org/techenvoy/ai-advisory-body
[12] Joshua Meltzer, “Toward international cooperation on foundational AI: An expanded role for trade agreements and international economic policy,” Brookings Report, November 16, 2024
[13] https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/stronger-europe-world/eu-us-trade-and-technology-council_en
[14] https://www.whitehouse.gov/briefing-room/statements-releases/2024/04/10/united-states-japan-joint-leaders-statement/
[15] https://www.whitehouse.gov/briefing-room/statements-releases/2023/01/31/fact-sheet-united-states-and-india-elevate-strategic-partnership-with-the-initiative-on-critical-and-emerging-technology-icet/
[16] https://www.commerce.gov/news/press-releases/2024/04/us-and-uk-announce-partnership-science-ai-safety
[17] https://digital-strategy.ec.europa.eu/en/news/commission-welcomes-g7-leaders-agreement-guiding-principles-and-code-conduct-artificial
[18] Cameron Kerry, Joshua P. Meltzer, Andrea Renda, and Andrew W. Wyckoff, “Should the UN govern global AI?,” Brookings op-ed, February 26, 2024, https://www.brookings.edu/articles/should-the-un-govern-global-ai/
© The Hinrich Foundation. See our website Terms and conditions for our copyright and reprint policy. All statements of fact and the views, conclusions and recommendations expressed in this publication are the sole responsibility of the author(s).