The U.S. AI Safety Institute (USAISI)


What is the U.S. Artificial Intelligence Safety Institute (USAISI)?

The U.S. Artificial Intelligence Safety Institute (USAISI) is an initiative by the United States federal government to address artificial intelligence (AI) safety and trust.


The Institute, established by the Department of Commerce through the National Institute of Standards and Technology (NIST), will focus on developing guidelines and metrics for safe and trustworthy AI. A related consortium, the AI Safety Institute Consortium, will help develop tools to measure and improve AI safety and trustworthiness.

Techopedia Explains

USAISI and its consortium are part of NIST’s response to the Biden-Harris Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI.

President Biden’s executive order tasked NIST with a number of responsibilities, chief among them developing guidelines, standards, and best practices for safe, secure, and trustworthy AI, including guidance for red-team testing of AI models.

What is the AI Safety Institute Consortium?

The AI Safety Institute Consortium will bring technology companies, other government agencies, and non-profit organizations together to identify reliable, adaptable, and compatible tools for measuring AI trustworthiness and safety.

The Consortium will be responsible for developing new guidelines, protocols, and best practices to facilitate the establishment of industry standards that can be used to develop and deploy AI in safe, secure, and trustworthy ways. This includes guidance around AI capabilities, authentication, and workforce skills.

NIST has invited organizations that are interested in participating in the consortium to submit letters of interest.

Each letter should describe the potential participant’s technical expertise and provide supporting evidence (data and documentation) demonstrating the applicant’s experience enabling safe and trustworthy AI systems through NIST’s AI Risk Management Framework (AI RMF).

NIST has also outlined the areas of technical expertise it considers desirable in prospective consortium members.

Consortium members will be expected to support consortium projects and contribute facility space for consortium researchers, webinars, workshops and conferences, and online meetings.

Selected participants will be required to enter into a consortium Cooperative Research and Development Agreement (CRADA) with NIST.

CRADAs allow federal and non-federal researchers to collaborate on research and development (R&D) projects and share resources, expertise, facilities, and equipment. At NIST’s discretion, entities that are not legally permitted to enter into this type of agreement may still be allowed to participate.

The Future of USAISI

According to NIST Director Laurie E. Locascio, the U.S. AI Safety Institute Consortium will enable close collaboration among government agencies, companies, and impacted communities to help ensure that AI systems are safe and trustworthy.

Working groups for the U.S. Artificial Intelligence Safety Institute are currently being created to support the development of responsible AI.

The standards, guidelines, best practices, and tools developed by USAISI are expected to align with international guidelines and influence future legal and regulatory frameworks for AI around the globe.


Margaret Rouse
Technology Specialist

Margaret is an award-winning writer and educator known for her ability to explain complex technical topics to a non-technical business audience. Over the past twenty years, her IT definitions have been published by Que in an encyclopedia of technology terms and cited in articles in the New York Times, Time Magazine, USA Today, ZDNet, PC Magazine, and Discovery Magazine. She joined Techopedia in 2011. Margaret’s idea of a fun day is helping IT and business professionals learn to speak each other’s highly specialized languages.
