The Biden administration has set its sights on the future of artificial intelligence (AI), initiating the rollout of new regulations as part of a comprehensive executive order.
This strategic move aims to usher in a new era of AI oversight, yet it has sparked a debate among experts over its potential efficacy and impact.
Critique of AI regulation’s focus on model size and compliance burdens
Jake Denton, a research associate at the Heritage Foundation’s Tech Policy Center, critiqued the focus of these regulations, stating, “The executive order’s preoccupation with model size and computing power, rather than actual use case, is misguided. This approach risks creating compliance burdens for companies without meaningfully improving accountability or transparency.”
The executive order mandates that AI developers disclose safety test results to the government, marking a significant step in the administration’s plan to ensure the safety of AI systems before their public release.
AI regulation: White House sets bar amid enforcement concerns
Ben Buchanan, the White House special adviser on AI, emphasized the importance of this initiative, saying, “The president has been very clear that companies need to meet that bar.”
Despite the administration’s intentions, some experts express skepticism about the practical outcomes of these new rules.
Denton pointed out the potential challenges in enforcement and oversight, noting, “The order’s blurred lines and loosely defined reporting requirements will likely yield selective, inconsistent enforcement.”
Tech regulation concerns: Seeking collaboration over dictation
This sentiment is echoed by Christopher Alexander, chief analytics officer of Pioneer Development Group, who raised concerns about the government’s ability to regulate tech industries effectively and about the potential for censorship.
Alexander highlighted the need for a collaborative approach to regulation, underscoring the importance of industry input in crafting meaningful standards.
He remarked, “The Biden administration’s problematic regulation of crypto is a perfect example of government dictating to industry rather than working with industry for proper regulations.”
Siegel draws parallels between AI safety standards and drug approvals
Despite these concerns, there is broad consensus on the need to establish safety standards for AI.
Phil Siegel, founder of the Center for Advanced Preparedness and Threat Response Simulation (CAPTRS), compared the process to drug approval regulations, suggesting that similar mechanisms will evolve for testing AI models.
Siegel believes that the executive order, while not immediately impactful, lays the groundwork for future safety testing protocols, including “red teaming” exercises designed to challenge and improve AI systems.
Biden’s attempts to balance AI regulation efforts and innovation
The Biden administration’s efforts to regulate AI come at a pivotal moment, as noted by Ziven Havens, policy director at the Bull Moose Project.
Havens emphasized the delicate balance between ensuring safety and fostering innovation, stating, “If the Biden administration aims to be successful with AI regulation, they will use the information provided to them to create reasonable standards, ones that will both protect consumers and the ability of companies to innovate.”
The overarching goal is to maintain America’s leading position in AI technology without hampering its growth with overly restrictive regulations.
AI regulation: Biden’s challenge to balance safety and innovation
Havens warned of the consequences of failing to strike this balance, suggesting that it could lead to a decline in the United States’ global technological and economic leadership.
As the Biden administration embarks on this regulatory journey, the challenge will be to implement rules that ensure AI safety and transparency without stifling the innovation that drives the industry forward.
The success of these regulations will depend on the administration’s ability to engage with industry experts, address concerns about enforcement and censorship, and develop a framework that supports both the growth and responsible use of AI technology.
Victoria Mangelli graduated summa cum laude with her BA in journalism from Siena College. She has worked for the Megyn Kelly Show, The Borgen Project, Saratoga Living, and several other publications. She enjoys traveling in her free time while freelancing for national publications.