Artificial intelligence (AI) has caused such consternation around the world that nations across the economic and technological spectrums are scrambling to figure out how it should be regulated.
No one, not even the technology's most prominent boosters, is against the idea of regulation. Opinions differ sharply, however, on how it should be done and to what extent.
Some argue for tight controls on what AI should be allowed to touch, while others prefer a more liberal approach tied to some kind of kill switch (provided someone can invent one).
Driving a wedge into all of this is the widespread myth of an all-knowing, all-powerful AI that will take over the world and wipe out humanity.
If rule-making processes become focused on preventing this highly theoretical possibility, they could very well miss the many ways in which today's real-world technology could produce lesser, but still detrimental, outcomes in our personal and professional lives.
The Right Approach
A recent report from the Brookings Institution highlighted the three main issues confronting AI regulation, drawn mainly from the back-and-forth between industry titans and government regulators:
- Velocity – AI development is advancing at an exponential pace, while the regulatory process is exceedingly slow. This makes it highly likely that the rules being contemplated today will be obsolete before they can be implemented.
- Differentiation – AI is likely to be ubiquitous. Should the rules governing its use in national defense be the same as those for video games? What criteria should be used for each use case, and where should the regulatory lines be drawn for each model, especially those that evolve over time?
- Authorization – Who should be empowered with regulatory authority, and how should that authority be monitored and checked if necessary? Will the first set of rules become the de facto global standard? Will new agencies, with new bureaucracies, be created? Will they have licensing power? What sort of risk assessment will they perform?
Clearly, there is a huge difference between seeing the need for regulation and creating not just the rules themselves but the entire framework to devise, vet, and implement them.
With even the high-tech industry struggling to find the expertise to develop, train, and manage intelligent applications, regulatory bodies have little chance of gaining the kind of in-house knowledge needed to effectively oversee this rapidly evolving field.
The Artificial Intelligence Act
Currently, the European Union seems to be out front on this issue. Its AI Act is before the European Parliament and is expected to be adopted by the end of the year.
The act lays out the broad concepts of assessing individual AI models’ risk potential regarding consumer safety, privacy, and societal impact.
It also sets transparency requirements and rules governing the use of copyrighted data, and it establishes a registry for key applications like law enforcement, education, and the management and operation of critical infrastructure.
In the United States, the Biden administration has proposed a Blueprint for an AI Bill of Rights that calls for protections against unsafe or ineffective applications and algorithmic discrimination, as well as rules to ensure data privacy and even an opt-out for those who wish to avoid AI altogether.
The blueprint has no force of law, but it is couched in language that echoes the Bill of Rights in the U.S. Constitution, as well as civil rights legislation that has emerged over time: the right to privacy, freedom of speech, protection against unlawful surveillance, and equal opportunity in education, housing, and employment. In this way, it provides a starting point for any laws or regulations that may emerge in the future.
That could be a while in the making, however. At the moment, only two AI-related bills have been floated in the Senate: one would impose very narrow regulations on how government agencies use AI, while the other would merely promote U.S. competitiveness in future AI development. Both bills are at the beginning of what is typically a lengthy legislative process.
A Patchwork of Proposals
The situation is much the same in the rest of the world, according to international law firm Taylor Wessing – lots of draft laws and guidelines but little in the way of actual regulatory oversight.
China is probably the most forward-leaning nation at the moment, but even it has only managed to pass one law, dealing with algorithmic recommendation tools. Another draft law targets the “deep synthesis” of Internet-based information services, a measure likely aimed at combating deepfakes and other fabricated content. At the same time, a more sweeping set of proposed administrative measures would impose a “safety assessment” on generative AI services before they can be released to the public.
For now, the global economy will have to make do with a patchwork of AI regulations, since there isn’t much momentum for a unified approach. The United Nations held its first meeting on the subject barely a month ago, with calls for the organization to play a central role in developing a multinational framework, but no clear policy initiatives to do so.
The Bottom Line
AI is a complex technology, which means it will require complex regulations to govern its use. That opens the door to a wide range of interpretations as to whether any given rule is helping or hurting the public interest, which itself is not always easy to define.
Fortunately, there are tools out there to help with this effort. In the U.S. state of Massachusetts, legislators recently employed ChatGPT to draft a bill that would regulate the use of generative AI models like ChatGPT.
The bill is currently making its way through various committees whose voting members are all human — at the moment.