Should Government Be Allowed to Regulate AI?

There's lots of talk these days about regulating AI. But is it really a good idea for the government to get involved?

John Edwards, Technology Journalist & Author

April 29, 2024


Governments regulate lots of things. The list is almost endless: radio spectrum allocations, firearms, property use, pharmaceuticals, factory emissions, nuclear power, and on and on and on. In the US, one government leader has even talked about regulating food product package sizes. So, should AI be next on the regulation list? Expert opinions vary.

Rebecca Engrav, co-chair of business law firm Perkins Coie’s artificial intelligence, machine learning, and robotics practice, believes it's fair to consider how government regulation can best serve two goals: mitigating the risks associated with AI use, and increasing AI's availability and reach for the betterment of society. "That said, the first question should always be whether there's an existing law or regulation that's capable of addressing a particular concern regarding AI," she observes in an email interview.

Still, steps are already being taken. On March 13, the European Parliament approved the Artificial Intelligence Act, which takes a risk-based approach, requiring companies to ensure their products comply with the law before they're made available to the general public. A day later, under separate legislation, the European Commission asked Bing, Facebook, Google Search, Instagram, Snapchat, TikTok, YouTube, and X to show how they are curbing the risks of generative AI.


Hands Off or On? 

Whether or not AI requires regulation, laws don't work well when they're written for only one technology, Engrav says. "This type of exceptionalism results in laws that become wooden and poorly tuned over time, especially given how rare it is in the United States for laws to be updated."

Anand S. Rao, services professor of AI at Carnegie Mellon University’s Heinz College of Information Systems and Public Policy, believes that government regulation will eventually be necessary to control AI abuse. "Imagine AI without any form of government oversight," he says via email. "It's akin to a car hurtling down the road without brakes or a steering wheel -- potentially chaotic and dangerous. By stepping in, governments can direct AI's trajectory toward societal benefit, ensuring its development aligns with public interest and ethical standards."

Unregulated AI could amplify existing biases, inflict both physical and psychological harm, and erode public trust in technology, Rao warns. "The challenge for governments is to navigate the fine line between fostering innovation and preventing harm." Such a balancing act, he explains, will require ensuring that regulatory measures don't hinder technological advancement or exacerbate economic inequalities. 


Engrav notes that new government AI regulation would make sense only if there's broad agreement on a type of harm that isn't remediable through existing regulation, is likely to occur on a relatively ongoing basis, is substantial and material, and, perhaps most important, can be remediated through a new regulation.

Yet Arthur "Barney" Maccabe, executive director of the Institute for Computation and Data-Enabled Insight at the University of Arizona, is skeptical that government will ever be capable of creating fair and meaningful AI regulation. "AI's rapid evolution will always outpace the creation of comprehensive regulation, making it nearly impossible for government intervention to keep pace with technological advancements," he explains in an email interview. 

Potential Alternatives 

Maccabe advocates self-regulation. "In cases where industry demonstrates effective self-regulatory practices, the government should endorse industry-led initiatives to regulate AI," he says. "For example, the financial sector has successfully implemented self-regulatory processes through organizations like the National Futures Association, ensuring product development validity." Similar models could be applied to AI regulation. 


There are ample incentives for self-regulation, and market dynamics can serve as an effective guiding force, Rao says. "In such scenarios, it's logical for the government to refrain from imposing regulations. Yet a hands-off stance is not advisable when risks associated with AI become substantial and when just a few entities are vying for dominance, potentially compromising the broader values of society for commercial gain." 

Another possibility, Maccabe says, is seeking guidance from relevant professional societies, such as the Association for Computing Machinery (ACM) or the Institute of Electrical and Electronics Engineers (IEEE). "These professional societies have a strong basis in ethics; they are international and nonpartisan, and they have the expertise needed to delve into the technical details and potential consequences of regulation."

An Unnecessary Evil? 

For now, government should leave AI alone, Engrav says. "We don't yet have enough information to be able to predict with confidence that proposed regulations will, in fact, decrease risks … without unduly impairing or impeding the competitive and entrepreneurial environment," she explains. "As for any new laws, the government should only proceed when it has had robust stakeholder engagement with all types of parties who will be affected by the regulation and who speak from a broad range of societal interests." 

In general, government regulation should be seen as a last resort when other forms of regulation are inadequate to address a pressing need, Maccabe says. When government regulation is required, he notes, it's imperative that the process be nimble, transparent, and trustworthy. "The goal is to ensure that AI is developed and deployed to benefit society as a whole while minimizing potential harms." 

About the Author

John Edwards

Technology Journalist & Author

John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.

