What Are the Key Guidelines for Developing Secure AI Systems?


Last week, the EU and the U.S. released a joint statement to “reaffirm” their commitment to cooperating over regulating AI.

The statement also noted that leaders from the European AI Office and the United States AI Safety Institute had briefed one another on their approaches and mandates.

As we move into an AI world, the question of safety remains paramount. Hence the call to action: “The European Union and the United States reaffirm our commitment to a risk-based approach to artificial intelligence and to advancing safe, secure, and trustworthy AI technologies.”

What’s particularly noteworthy about this release is that it comes in the same week that the U.S. and the U.K. signed a partnership, with each country laying out plans for a shared approach to AI safety testing.

So let’s explore the international movement to develop key guidelines for developing secure artificial intelligence systems.

Key Takeaways

  • The EU and the U.S. again stress their commitment to regulating AI with a risk-based approach, emphasizing safety, security, and trustworthiness.
  • Meanwhile, international collaborations, including a partnership between the U.S. and the U.K., aim to develop guidelines for AI safety testing and regulation.
  • Addressing AI risks involves understanding different perspectives on safety and risk tolerance, with transparency, accountability, and ethics playing crucial roles.
  • While regulations need to add guardrails to AI development, they must not stop innovation — engaging with the AI community is crucial.

Safety as the Foundation of Secure AI Development

The open dialogue between regulators on both sides of the Atlantic highlights that key regions such as the U.S., the U.K., and the EU are looking to collaborate to better understand how to mitigate AI risk.

While the conversation is evolving rapidly, an emphasis on safety appears to be the bedrock principle of AI regulation. It is front and center in the EU AI Act, which classifies AI systems according to the risk they pose to users.

Nicole Carignan, VP of strategic cyber AI at Darktrace, told Techopedia:


“AI is a fast-evolving set of technologies, and governments are right to take an adaptive approach to managing the risks. To build and sustain public confidence in AI, global leaders must move quickly to deliver their approach and ensure regulators also have the capabilities to deliver.

“Industry leaders should work closely with regulators to strike the right balance between innovation and regulation, but safety should be the highest priority to both.”

But what does AI safety mean? The uncomfortable truth is that the answer lies in the eye of the beholder.

Defining Safety and AI Risk

One of the great challenges in regulating AI is that each person has a different concept of acceptable risk and risk tolerance. For some, the rapid development of AI itself is enough to call for regulation.

Nowhere is this more clearly seen than when tech leaders, including Elon Musk, Apple co-founder Steve Wozniak, and Stability AI founder Emad Mostaque, signed The Future of Life Institute’s open letter in March 2023 calling for a six-month pause on training AI systems more powerful than GPT-4.

The letter read: “Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?

“Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?

“Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.”

Some of the most well-known high-level risks of AI at its current stage of development are mass automation and job loss, the spread of misinformation, the creation of highly realistic deepfake audio, images, and video, automated cyberattacks, opaque processing and sharing of user data, and AI-powered decision-making driven by bias and prejudice.

Regulating AI

But how do you decide which risks to regulate? For futurist and generative AI expert Bernard Marr, the answer comes down to transparency, accountability, and ethics.

He told Techopedia:

“I believe there should be a focus on transparency, accountability, and ethical use. Regulations should ensure AI systems are developed with bias mitigation in mind, that there’s clarity on how AI decisions are made, and that individuals’ privacy and data are protected.

 

“Additionally, it is crucial to implement measures to monitor and manage the societal impacts of AI, such as job displacement and security.”

Offering another perspective, Jason Soroko, senior vice president of product at certificate lifecycle management provider Sectigo, suggests that regulatory measures should also be applied to prevent government misuse – similar to how the EU AI Act banned social scoring.

“Regulation for AI should start with named risks, such as agreeing not to use AI to predict guilt in criminal cases. This is an approach used by EU legislation on AI. In other words, the public’s imagination of AI is usually about AI going rogue.

 

“However, we should be more concerned about curbing government usage of AI for use against its citizens.”

Controls in Action: NCSC

One example of AI controls in action can be seen in the UK National Cyber Security Center’s (NCSC) guidelines on secure AI system development.

The NCSC’s guidance focuses on mitigating the cybersecurity risks of developing AI systems and recommends that organizations tailor controls to each specific stage of development. We’ve included a brief summary of these stages below:

  • Secure design: Guidelines for the design stage of the AI system development lifecycle. This includes understanding basic risks, threat modeling, and considerations for system/model design.
  • Secure development: Guidelines for the development stage of the AI system development lifecycle. Includes supply chain security, documentation, asset management, and technical debt management.
  • Secure deployment: Guidelines that apply to the deployment stage of the AI system development lifecycle. Includes protecting infrastructure and models from compromise (see the sketch after this list), developing incident management processes, and ensuring responsible release.
  • Secure operation and maintenance: Guidelines that apply to the secure operation and maintenance stage of the AI system development lifecycle. Includes actions once a system has been deployed, logging, monitoring, update management and information sharing.
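
To make this more concrete, below is a minimal sketch (in Python, standard library only) of one control that spans the deployment and operation stages: refusing to load a model artifact whose hash does not match a trusted digest, and logging the outcome so it can be monitored. The file name and digest are hypothetical illustrations, not part of the NCSC guidance.

```python
import hashlib
import logging
from pathlib import Path

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("model-integrity")

# Hypothetical allow-list of trusted SHA-256 digests, published alongside
# the model artifacts by the team that trained them.
TRUSTED_DIGESTS = {
    "sentiment-model-v3.onnx": "replace-with-published-sha256-digest",
}

def verify_model_artifact(path: Path) -> bool:
    """Return True only if the model file matches its trusted digest."""
    if not path.is_file():
        logger.error("Model artifact not found: %s", path)
        return False
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    expected = TRUSTED_DIGESTS.get(path.name)
    if expected is None or digest != expected:
        logger.error("Integrity check failed for %s", path.name)
        return False
    logger.info("Integrity check passed for %s", path.name)
    return True

if __name__ == "__main__":
    model_path = Path("models/sentiment-model-v3.onnx")
    if not verify_model_artifact(model_path):
        raise SystemExit("Refusing to deploy an unverified model artifact")
```

A real pipeline would go further, with signed artifacts and provenance records, but even a simple digest check closes off a common supply chain attack path.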

Balancing Regulation and Innovation

It’s important to highlight that AI regulations should be limited where possible. If regulations are too prohibitive, they can not only severely damage innovation in the industry but also impact civil liberties.

At the same time, there is a limit to the risks that regulation can address. For example, Prohibition in the U.S. from 1920 to 1933 didn’t stop the sale of alcohol, and anti-piracy legislation has done little to stop individuals from illegally downloading films and music.

In this sense, the goal of AI regulation should be to add guardrails to development without stifling innovation. One way to do this is by engaging heavily with the AI community (not just the top vendors in Silicon Valley).

“To regulate AI without stifling innovation, regulators should adopt a flexible, risk-based approach. This involves setting broad principles that guide ethical AI development rather than overly prescriptive rules that might limit creative solutions. Engaging with the AI community, including developers, researchers, and ethicists, in the regulatory process can also ensure that regulations are informed and practical, and foster innovation while safeguarding public interests,” Marr said.

A More Limited Approach to Safety: Security

An alternative approach to AI regulation is to focus on addressing security concerns, i.e. developing controls to determine how AI solutions should be secured and outlining best practices for processing and collecting data.

Carignan added, “AI safety is AI security. To achieve AI safety, we must first get better at evaluating AI.

“A key goal of the new partnership is to help companies know that they are doing the right thing and ensure that broader standards and regulations are understood and can be adhered to.

“Securing the use of AI is crucial for organizations — public and private sector alike — to manage AI risks.”

Carignan outlines some basic controls that organizations and regulators could adopt to mitigate risk.

“Organizations must secure the data, models, connection points, as well as compute resources used in the development of AI technologies.

“For high-risk AI use cases, transparency, explainability, and privacy should be the highest priorities,” Carignan continued.

From this perspective, building transparent AI systems comes down to knowing what’s going on under the hood and, where appropriate, communicating that to the end user.

This may begin with simple explainers such as “we collected your data for X reason, and are processing it like this…” — as required under regulations such as the General Data Protection Regulation (GDPR).
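
As an illustration only, the hypothetical record below shows how such a disclosure might be captured in a structured form that can be shown to users or kept for auditors. The field names and values are illustrative, not a legal template.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ProcessingNotice:
    """Plain-language record of why and how user data is processed."""
    data_collected: str
    purpose: str
    legal_basis: str          # e.g. "consent" under GDPR
    retention_period: str
    shared_with: list[str]

notice = ProcessingNotice(
    data_collected="chat transcripts",
    purpose="improving the support assistant's response quality",
    legal_basis="consent",
    retention_period="90 days",
    shared_with=["internal analytics team"],
)

# Surface the notice to the end user (or an auditor) as structured text.
print(json.dumps(asdict(notice), indent=2))
```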

Finally, Carignan explains that “data integrity, testing, evaluation, and verification, as well as accuracy benchmarks, are key components in the accurate and effective use of AI. Encouraging diversity of thought in AI teams is also crucial to help combat bias and harmful training and/or output.”
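
A minimal sketch of what an accuracy benchmark can look like in practice is shown below: a release gate that blocks deployment when a model falls short of an agreed threshold on a held-out evaluation set. The threshold, labels, and predictions are invented for illustration.

```python
# Hypothetical release gate: block deployment if the model falls below an
# agreed accuracy benchmark on a held-out evaluation set.
def passes_release_gate(predictions, labels, min_accuracy=0.90):
    correct = sum(p == y for p, y in zip(predictions, labels))
    accuracy = correct / len(labels)
    print(f"Evaluation accuracy: {accuracy:.2%} (threshold {min_accuracy:.0%})")
    return accuracy >= min_accuracy

if __name__ == "__main__":
    preds = ["spam", "ham", "spam", "ham"]
    truth = ["spam", "ham", "ham", "ham"]
    if not passes_release_gate(preds, truth):
        raise SystemExit("Model does not meet the accuracy benchmark")
```

In practice the benchmark set, metric, and threshold would be agreed up front and versioned alongside the model, so the gate itself is auditable.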

The Bottom Line

At a high level, AI developers need to emphasize transparency and explainability as part of their solutions.

This will help to mitigate some of the high-level risks, making biases more apparent and fixable while also making sure that users know how their data is being processed.


Tim Keary
Technology Specialist

Tim Keary is a freelance technology writer and reporter covering AI, cybersecurity, and enterprise technology. Before joining Techopedia full-time in 2023, he wrote for VentureBeat, Forbes Advisor, and other notable technology platforms, covering the latest trends and innovations in technology.
