Angst over artificial intelligence (AI) – most of it rational, some ludicrous – is naturally fueling calls for increased regulation and even moratoriums on further development to ensure we do not unleash forces we can’t control.
However, this is likely to be a challenging endeavor given that the full capabilities of AI are still unknown, and there are many competing outlooks and opinions as to what benefits it offers and what kinds of threats it poses.
Complicating matters even further is that any regulatory framework is only as extensive as the regulatory authority’s jurisdiction allows, which could result in AI being developed in jurisdictions with looser scrutiny and then spreading into the broader digital ecosystem.
Calls for AI Control
This has prompted some voices in government and business to call for global regulations on AI development and usage. As yet, there has been no serious effort to embark on such a complex undertaking, but as AI makes its way into the digital mainstream and nations begin to craft their own limitations on the technology, we can expect further momentum toward a global solution.
Earlier this month, in fact, Ursula von der Leyen, president of the European Commission, called on the European Union to take the lead in developing a global regulatory regime for AI similar to the Intergovernmental Panel on Climate Change. The goal would be to foster safe and responsible AI development by pulling together the best minds in government, commerce, science, and other circles.
This was followed by U.S. President Joseph Biden’s address to the United Nations last week in which he pledged to work with world leaders to “ensure we harness the power of artificial intelligence for good while protecting our citizens from this most profound risk.”
Neither of these leaders speaks for the entire world, however, so it is unclear how global their preferred regulatory frameworks would be if they were to come to fruition. The only body that lays any claim to worldwide gravitas is the United Nations, and its efforts at taming AI globally are barely off the ground.
A UN Proposal
Earlier this year, the UN Educational, Scientific and Cultural Organization (UNESCO) called on all countries of the world to fully implement its Recommendation on the Ethics of Artificial Intelligence, which was approved unanimously by all member states in 2021.
The framework lists a broad range of values and principles to guide the development and implementation of AI. It also provides a Readiness Assessment tool to help regulators determine whether users have the skills and competence to properly utilize the AI-driven resources at their disposal, and it calls for periodic reporting by regulatory authorities detailing progress in their state’s governance of AI.
Regulations of any kind and the agencies that enforce them tend to draw a lot of criticism – mainly from those being regulated. And while there are many examples of regulations run amok (mattresses with labels saying “Do not remove this tag under penalty of law”), it is fair to say that our world would be much less pleasant without rules regarding things like clean air, clean water, and the safe handling of food and other commodities.
Regulatory Model
When contemplating the global enforcement of AI, then, are there any precedents to help guide this effort? One potential template is the International Civil Aviation Organization (ICAO), says Roman Perkowski of telecommunications services provider TS2. Founded in 1944 and now operating under the aegis of the UN, the ICAO oversees standards, practices, and policies that allow nations to share airways and otherwise coordinate air traffic operations to everyone’s mutual benefit. A key aspect of its mandate is the co-development of regulations and procedures among individual air authorities to ensure they do not work at cross purposes or jeopardize flight operations that span international boundaries.
This is a challenging task, to be sure, with many competing goals and perspectives, but the idea of a global clearinghouse to help align the individual regulatory efforts of numerous nations would be an excellent place to start for AI. The question remains, however, whether there is enough national self-interest in creating a communal environment for intelligent technologies as there is for air travel.
Complicating matters even further is that there is still no clear consensus on how AI should be regulated. On the one hand, we don’t want it doing things detrimental to the public, either on its own or at the direction of bad actors. On the other, we don’t want to stifle creative development and diminish AI’s utility.
In a recent article on The Conversation, Stan Karanasios, associate professor at the University of Queensland, Olga Kokshagina, associate professor at France’s École des Hautes Études Commerciales du Nord (EDHEC Business School), and Pauline C. Reinecke, assistant researcher at the University of Hamburg, note that leading AI developers and practitioners are calling for governments to regulate the technology, and to do so in a coordinated fashion. This is a good sign, but if regulation were to actually happen, would these industry titans support measures intended to serve the public interest, or would they seek to bend the rules toward their own self-interests?
The Bottom Line
Perhaps the most fundamental aspect of AI that inhibits any form of regulation is the speed at which it is evolving. We are still far from even the most rudimentary rules at the national level, let alone a global framework. By the time we get one, the technology is likely to be functioning in ways that are still conceptual at this point. That is the nature of laws and regulations: they lag behind the things they govern.
This means we will likely experience a free-for-all in the AI industry for the time being, trusting the wisdom and goodwill of scientists and business leaders to keep us all safe.