US laws governing AI are proving elusive, but there may be hope

Can the US meaningfully regulate AI? It is not at all clear yet. Policymakers have made progress in recent months, but they have also suffered setbacks, illustrating how challenging it is to craft laws that impose guardrails on the technology.

In March, Tennessee became the first state to protect voice artists from unauthorized AI cloning. This summer, Colorado adopted a tiered, risk-based approach to AI policy. And in September, California Governor Gavin Newsom signed dozens of AI-related safety bills, some of which require companies to disclose details about their AI training.

But the US still lacks a federal AI policy comparable to the EU’s AI Act. Even at the state level, regulation continues to face major roadblocks.

After a protracted battle with special interests, Governor Newsom vetoed SB 1047, a bill that would have imposed sweeping safety and transparency requirements on companies developing AI. Another California bill targeting distributors of AI deepfakes on social media was put on hold this fall pending the outcome of a trial.

Still, there are reasons for optimism, according to Jessica Newman, co-director of the AI Policy Hub at UC Berkeley. Speaking on a panel about AI governance at TechCrunch Disrupt 2024, Newman noted that many federal laws may not have been written with AI in mind but still apply to it, such as anti-discrimination and consumer protection legislation.

“We often hear about the US being this kind of ‘Wild West’ compared to what’s going on in the EU,” Newman said, “but I think that’s exaggerated, and the reality is more nuanced than that.”

In Newman’s view, the Federal Trade Commission has forced companies that secretly harvested data to delete their AI models, and it is investigating whether AI startups’ sales to big tech companies violate antitrust regulations. Meanwhile, the Federal Communications Commission has declared AI-voiced robocalls illegal and has proposed a rule requiring that AI-generated content in political advertising be disclosed.

President Joe Biden has also tried to get some AI rules on the books. About a year ago, Biden signed the AI Executive Order, which supports the voluntary reporting and benchmarking practices that many AI companies were already choosing to implement.

An outgrowth of the executive order was the US AI Safety Institute (AISI), a federal body that studies risks in AI systems. Housed within the National Institute of Standards and Technology, the AISI has research partnerships with major AI labs such as OpenAI and Anthropic.

However, the AISI could be wound down by a simple repeal of Biden’s executive order. In October, a coalition of more than 60 organizations called on Congress to pass legislation codifying the AISI before the end of the year.

“I think all of us as Americans share an interest in making sure we mitigate the potential downsides of technology,” said AISI Director Elizabeth Kelly, who also participated in the panel.

So is there hope for comprehensive AI regulation in the states? The failure of SB 1047, which Newman described as a “light touch” bill with input from industry, is not exactly encouraging. Authored by California state senator Scott Wiener, SB 1047 was opposed by many in Silicon Valley, including high-profile technologists such as Meta’s chief AI scientist Yann LeCun.

That being the case, Wiener, another Disrupt panelist, said he wouldn’t have drafted the bill any differently, and he’s confident that broad AI regulation will ultimately prevail.

“I think it set the stage for future endeavors,” he said. “Hopefully we can do something that can bring more people together because the reality that all the big labs have already recognized is that the risks (of AI) are real and we want to test them.”

Indeed, Anthropic last week warned of AI catastrophe if governments don’t implement regulation within the next 18 months.

Opponents have only doubled down on their rhetoric. Last Monday, Khosla Ventures founder Vinod Khosla called Wiener “totally ignorant” and “unqualified” to regulate the real dangers of AI. And Microsoft and Andreessen Horowitz released statements rallying against AI regulations that could affect their financial interests.

Newman posits, however, that pressure to unify the growing state-by-state patchwork of AI rules will eventually produce a stronger legislative solution. In the absence of consensus on a regulatory model, state policymakers have introduced nearly 700 pieces of AI legislation this year alone.

“My view is that companies don’t want a patchwork regulatory environment where every state is different,” she said, “and I think there will be increasing pressure to have something at the federal level that provides more clarity and reduces some of that uncertainty.”