Artificial intelligence is a controversial technology. For some, it needs to be regulated and restricted to reduce risk; for others, it's an invaluable tool that should be left to flourish.
This week, we learned that the White House is firmly on the fence in this debate, after it released a report concluding that current evidence is not sufficient to implement restrictions on AI models with “widely available weights” — or in layman’s terms, open-source AI models.
While this is great news for the open-source AI community, and for the democratization of model development, those who wanted the government to impose safeguards on AI development will likely find the decision insufficient.
The White House’s decision comes less than a year after President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence called for expert recommendations on how the risks of open-source models (models whose weights are publicly available) could be managed.
Key Takeaways
- The White House decides not to implement any restrictions on open-source AI for now.
- This decision is great news for those who want to support open-source AI development.
- Going forward, this will help open-source AI to compete against proprietary AI.
- Other commentators have criticized the government for adopting a “wait and see” approach in the face of significant risks.
AI Regulations: Damned if You Do, Damned if You Don’t
The debate on AI regulation is unforgiving. On one hand, overregulating open-source model development risks stalling research progress, handing proprietary AI providers more control of the market, and driving innovation over to rival states like China.
On the other hand, as the White House report concedes, limited regulation of the weights of certain foundation models could create risks to national security, safety, and privacy due to a lack of oversight and accountability.
For instance, an open model will likely have weaker content moderation than a proprietary one, making it easier to misuse, whether to generate misinformation or deepfakes, or even to launch automated cyberattacks.
So far, many in the AI industry have received the decision warmly.
Sebastian Gierlinger, VP of Engineering at Storyblok, told Techopedia: “The U.S. government’s position that open-source AI projects will not require new restrictions is broadly welcome.
“The danger of applying too many new rules too quickly is that it will have a chilling effect across the industry which [will] severely constrain adoption of AI and inhibit innovation.”
Gierlinger noted that the announcement should provide a short-term boost to the AI sector, but added that confidence in the technology remains reliant on AI companies acting transparently and ethically.
“A high-profile case of an AI product being misused could lead to a public backlash that will potentially make businesses more reluctant to use consumer-facing AI tools. It would therefore be wise for companies within the AI open-source community to not see the US government’s position as a blank cheque,” Gierlinger said.
The Problem with Restricting Open-Source AI Development
Open-source AI development isn’t out of the woods just yet. There’s still plenty of risk surrounding the technology, and plenty of anxiety around its adoption. However, Harry Toor, chief of staff at OpenSSF, told Techopedia that restrictions on open-source development rarely work in practice.
“Open-source is a global digital public good. Attempts to impose government-led regulations on open-source often go sideways quickly. The U.S. Government should continue to work with the open-source community and ecosystems to engage in open-source governance models that have been well-tested over the last few decades.”
Active communication between government organizations and the research community is essential for an informed conversation on risk management to take place.
However, Toor suggested that existing protections for software may be sufficient to govern AI use.
“The industry should be aware that existing regulations may already govern how the industry engages with the AI supply chain since AI is software. Considering the attacks we’ve seen on existing open-source software, securing this supply chain is paramount,” Toor said.
That said, Toor did note that regulators “may impose” liability standards on how industries consume the AI supply chain when integrating AI-driven components into products and services delivered to consumers.
However, the real challenge of regulating open-source development is arguably that it isn’t clear why it would be fair, or desirable, to regulate the open-source research community but not proprietary AI vendors.
All that regulating open source alone would achieve is a gentrification of AI development, handing big tech even more control over the technology.
The White House’s Grand Plan: Wait and Do Nothing
It’s important to note that the White House’s decision has attracted its fair share of criticism. After all, even on an issue as sensitive as AI regulation, doing nothing isn’t exactly a good look, particularly after a spate of high-profile PR nightmares like the fake Taylor Swift images and OpenAI’s alleged use of copyrighted content from the New York Times.
Richard Bird, chief security officer at Traceable AI, told Techopedia:
“NTIA’s recent report on open foundation models is both anticlimactic and a bit depressing. After a year of work, the punch line is, ‘We will keep our eyes open for anything bad that happens.’
“The Executive Branch keeps missing opportunities to be specific in their expectations on ensuring technology meets the needs of our citizens, AND protects their privacy and interests.
“When given an opportunity to make bold statements about demanding that artificial intelligence models eliminate engineering and ingestion bias or that ‘delete my data’ functionality is available to the citizens of this nation, the NTIA has opted to declare that ‘wait and see’ is the best strategy.
“Creating policies after bad things happen is just bad governance and poor leadership,” Bird said.
Looking at the White House’s decision from this perspective, it’s hard to disagree. It does seem like the government is taking a timid stance on AI regulation without a coherent underlying philosophy, perhaps waiting for a crisis before springing into action.
The Bottom Line
Ultimately, the U.S. government will do nothing to restrict open-source AI development, which, although unsatisfying and anticlimactic, is probably the right decision, provided it continues to actively seek insight from the AI research community.
AI raises many ethical questions and debates that need to be ironed out before regulations are on the table, and an outcome where open-source AI is restricted to the point that it can’t compete with proprietary development is unacceptable.