Introduction

In the wake of a rapid and still-evolving AI revolution, the debate over how to harness the technology's potential while guarding against its dangers is central to understanding its future. As AI spreads into sectors from healthcare and finance to education and national security, it brings unprecedented opportunities and challenges. There is an intricate balance between fostering AI's boundless potential and mitigating its risks through regulation.1  This article explores the tightrope policymakers and industry leaders must walk to ensure that the advancement of AI serves to enhance, rather than undermine, our societal, economic, and ethical frameworks.

Artificial Intelligence (AI) is a branch of computer engineering that uses software and machines to perform tasks typically requiring human intelligence.2  In particular, Generative Pre-trained Transformers (GPTs), such as OpenAI's ChatGPT, Google's Bard, and Meta AI, specialize in understanding and generating human-like text, images, audio, and video.

The challenge of balancing AI proliferation and regulation is multifaceted, involving technological, ethical, legal, and geopolitical considerations. Complex issues of privacy, security, and international competition, coupled with opportunities for technological breakthroughs, make a balanced approach not only challenging but necessary: finding the right balance is essential for harnessing AI's potential while minimizing its risks and negative consequences.



Why Balance AI Proliferation and Regulation

Promote Innovation

Over-regulation can stifle innovation and progress in AI development, while under-regulation can lead to unethical or harmful applications. The complex state-market dynamic is best exemplified by China's state-led endeavor to develop its own counterpart to OpenAI, one of the leading AI companies, in its perceived tech race with the United States. Beijing is actively responding to the technological breakthrough represented by ChatGPT by streamlining the legal procedures for domestic big tech companies, such as Baidu, Tencent, Alibaba, Huawei, and Xiaomi, to develop and launch their own chatbots. For example, Beijing approved fourteen new Large Language Models (LLMs) for public use within one week in January 2024.3  At the same time, in July 2023, China's Cyberspace Administration issued one of the world's first legal regulations on generative AI, the "Temporary Measures on the Regulation of Generative AI Services,"4  which gave the private sector a "negative list" specifying what it must steer clear of. Even though the requirement that Chinese AI systems conform to China's "socialist core values" is considered a hindrance to innovation, GPT-powered chatbots still flourish in the Chinese market, as seen in Baidu's Ernie Bot and the Tencent-backed MiniMax.5

Ensure Safety and Accountability

Over-regulation can hinder the deployment of beneficial AI solutions. However, regulation helps ensure AI systems are safe, transparent, and accountable, particularly as AI-enabled misinformation and cybercrime, such as deepfakes, become prevalent enough to affect elections and stock markets.6  Leading in the regulation of data privacy and transparency, Europe published the world's first regulation on artificial intelligence in June 2023.7  The risk-level classification of AI systems proposed in the EU AI Act sheds light on some of the safety concerns raised by unregulated AI proliferation. For example, facial recognition falls into the category identified as an "unacceptable risk" subject to prohibition, consistent with the bloc's policy preference for individual privacy protection. AI applications in critical infrastructure, law enforcement, and border control management, among others, are categorized as "high risk" due to national security and accountability concerns.


Address Ethical Concerns

Regulations must address ethical issues in AI technology such as bias, privacy, and autonomy, as data models are known to scale up existing racism and sexism because of research and data gaps in the training of algorithms.8  Efforts to build transparent AI models have extended beyond general ethical considerations to address algorithmic bias specifically, highlighting the ethical implications of biased algorithms and the need for inclusive AI design. AI models depend heavily on big data: large, diverse sets of information that grow at an ever-increasing rate. Using this data, the models can produce classifications that imitate independent decision-making. However, "classification tends to reinforce inequalities, such as inequalities arising from facial recognitions used in public safety, which in turn discriminate against black people" due to unfairness in algorithm design.9  These technologies are in the hands of a profit-driven private sector. It is therefore important for the public sector to bridge the gap between what is economically profitable and what is socially beneficial.

Rapid Development of the Technology

AI advances rapidly, making it hard for lawmakers to stay informed about the technology and for the lawmaking process to keep pace with it. In the United States, the private sector controls and leads technological innovation while lawmakers chase after it. The Chinese government, by contrast, proactively steers development while concurrently putting up guardrails. China's state power over the private sector and the flow of information guarantees that AI technology will not grow out of "control." At the same time, many of the Chinese government's policy aims emphasize maintaining social and political stability, which ultimately serves to strengthen the Party's power and can produce inconsistencies during implementation.10  The common saying "the US innovates, China imitates, and Europe regulates"11  reflects the different approaches taken by these regions. While each approach can be seen as a strategic play to comparative advantage, the saying also shows how difficult it is to lead in innovation and regulation at the same time. The rapid evolution of AI means a country must decide whether it wants to be an innovator or a regulator first.

Global Consistency and Cooperation

AI is a global phenomenon requiring international cooperation and consistency in regulation, which is difficult to achieve because of both the problem of attribution in a globalized technology and geopolitics. While the attribution problem requires further brainstorming from policymakers, taking a first step toward overcoming geopolitical division in AI regulation may not be entirely infeasible. To date, the EU, the US, and China have all issued strategies on AI regulation. Despite differences in detailed rules and purposes, many areas of concern overlap in the minds of policymakers in all three jurisdictions. This might provide a space for cooperation.

AI technology is not only transforming societies and economies but also challenging the old political paradigm that pits states and markets against each other.12  Modern technologies have revolutionized the way state and non-state actors interact: the two sides have become increasingly interwoven, making it harder for either to exert one-sided influence. This paves the way for a new generation of policymakers to devise creative ways to regulate AI that offer carrots as well as sticks. There is space for both the state and the private sector to benefit from a balanced approach to proliferation and regulation.



How to Balance AI Proliferation and Regulation

Develop Adaptive Regulatory Frameworks

Regulations should be flexible enough to adapt to new developments in AI technology while ensuring ethical principles are upheld. It is crucial to establish regulatory bodies or committees comprising experts from fields such as technology, ethics, law, and sociology. Such bodies can continuously monitor advancements in AI, assess their implications for society, and update regulations accordingly. Additionally, tools like the sandbox environments13  introduced in the EU's AI Act can allow regulators to test new AI applications in controlled settings before deployment, ensuring compliance with evolving standards.14

Promote International Collaboration

Establishing international norms and agreements on AI ethics, privacy, and security can help manage global risks. The need for multilateral forums and agreements focused on AI governance is critical as the technology continues to evolve. Collaborative efforts can include information sharing, joint research initiatives, and the establishment of common standards to address cross-border challenges related to AI. In November 2023, the UK hosted the first global summit on AI safety. Fostering such exchanges, which bring together AI researchers and policymakers, can promote understanding and alignment of diverse perspectives on AI ethics and regulation.15  Moreover, international conferences held by the Chinese Association for Artificial Intelligence (CAAI) have called for more cooperation among researchers, practitioners, scientists, students, and engineers in AI and its affiliated disciplines.16  Similar efforts from international organizations such as UNESCO could build broader coalitions for further cooperation.

Foster Public-Private Partnerships

Collaboration between governments, industry, and academia can facilitate the development of AI technologies that are both innovative and socially responsible. Structured mechanisms for such collaboration can take several forms: governments can offer incentives such as tax breaks or research grants to encourage companies to prioritize ethical considerations in AI development, and joint research centers or consortiums where industry experts, government officials, and academics work together can facilitate knowledge sharing and accelerate the development of ethical AI solutions. The Biden-Harris administration announced the first-ever consortium dedicated to AI safety, housed under the National Institute of Standards and Technology (NIST).17  Efforts like these are vital to ensure that regulation does not stifle innovation.

Encourage Ethical AI Development

Frameworks and guidelines for ethical AI development, including considerations of fairness, transparency, and accountability, can guide responsible innovation. It is essential to integrate ethical considerations effectively across the entire AI lifecycle. This includes incorporating ethics training into computer science and engineering curricula, promoting interdisciplinary research that combines technical expertise with ethical frameworks, and incentivizing companies to adopt ethical AI principles through certification programs or procurement preferences.18  Moreover, establishing mechanisms for independent auditing and certification of AI systems can provide assurance to stakeholders regarding their ethical compliance.

Invest in AI Literacy and Workforce Development

Preparing the workforce for the AI-driven economy and promoting public understanding of AI can help mitigate economic and social impacts. For example, integrating AI literacy modules into school curricula can equip future generations with the knowledge needed to interact responsibly with AI technologies. In early 2024, Congress held a hearing titled "Toward an AI-Ready Workforce" aimed at designing comprehensive education for an AI workforce.19  Moreover, providing upskilling and reskilling opportunities can help existing workers adapt to the changing demands of the AI-driven economy. Additionally, public awareness campaigns can demystify AI technologies, dispel misconceptions, and foster informed discussions about their societal implications.20



Conclusion

The rapid advancement of AI represents a transformative shift across sectors, accompanied by unparalleled opportunities and challenges. As AI permeates every aspect of society, the imperative to strike a balance between fostering innovation and mitigating risks through regulation becomes increasingly evident. The multifaceted challenge of balancing AI proliferation and regulation encompasses technological, ethical, legal, and geopolitical considerations. From promoting innovation to ensuring safety and accountability, addressing ethical concerns, and keeping pace with rapid technological development, a nuanced approach is essential. Given AI's global nature, fostering consistency and cooperation in its regulation is equally imperative. International collaboration, adaptive regulatory frameworks, public-private partnerships, and investments in AI literacy and workforce development emerge as key strategies for navigating this complex landscape. Ultimately, a balanced approach to AI proliferation and regulation is as challenging as it is necessary for harnessing AI's full potential while safeguarding our societal, economic, and ethical frameworks. Through concerted efforts across governments, industry, academia, and civil society, we can drive positive societal impact while guarding against AI's risks.



About the Authors

Benedicta (Benie) Kwarteng is a Master in International Relations student at Johns Hopkins University SAIS, focusing on Technology and Innovation with a regional specialization in Asia. Benie is deeply engaged in exploring the intersections of international development, security, and technology policy. Prior to pursuing her master's degree, she worked as a research fellow at the Congressional Research Service within the Foreign Affairs, Defense, and Trade Division. Her research interests include the informal economy, US-China-Taiwan relations, US foreign policy, and emerging technologies.

Anita Jing-Shin Lin is a Master in International Relations student at Johns Hopkins University SAIS. She obtained bachelor's degrees in Philosophy from National Taiwan University in Taipei, her hometown, and in Political Science from Freie Universität Berlin (FU). She worked as a journalist in Taiwan and as an editorial student assistant at the Max Planck Institute for the History of Science in Berlin before joining the China Studies faculty at FU as a research assistant to the department chair. She interned at the Mercator Institute for China Studies, Europe's largest China-focused think tank. Anita is interested in China's industrial policies on high tech, particularly semiconductors.



