Exclusive: OpenAI Defends Its Safeguards and Safety Record in 'Right to Warn' Debate

KEY TAKEAWAYS

  • OpenAI has defended its safety record amid employee criticism and public scrutiny in statements to Techopedia.
  • Current and ex-employees are calling for industry-wide safety measures and regulation in an open 'Right to Warn' letter.
  • OpenAI emphasized its commitment to safety, engagement with policymakers, and transparency and pushed back against the 'rush to market' narrative.
  • Like it or not, AI is changing our world — we look to the stewards of the technology to treat that power with great responsibility.

Following the release of an open letter by current and former OpenAI employees and hard-hitting criticism from ex-employees on X, OpenAI has defended its safety record in an exclusive conversation with Techopedia.

A spokesperson for OpenAI told Techopedia today that it is proud of its approach to artificial intelligence safety. “We’re proud of our track record of providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” the spokesperson began.

“We agree that rigorous debate is crucial given the significance of this technology, and we’ll continue to engage with governments, civil society, and other communities around the world.”

In wide-ranging comments, OpenAI addressed the issues that have dominated the news in recent months, which have seen a number of members of OpenAI’s superalignment team leave over concerns about how the company handles safety.

OpenAI Candidly Defends Safety Record

In response to the various public debates that have emerged over the last year surrounding responsible AI development, OpenAI spoke about how it handles safety issues, stressing that employees have the right to criticize openly and rebutting the “rush to market” narrative put forward by many commentators.

OpenAI told Techopedia:

“We have a track record of not releasing technology until we create the necessary safeguards. We finished training GPT-4 in 2022 and spent more than 6 months aligning GPT-4 before release in 2023.


“We first developed our Voice Engine technology in 2022 and have yet to broadly release it, instead opting to start a dialogue on what responsible deployment of this technology looks like. We have not yet broadly released our text to video generation tool Sora.”

Controversy Over Ethics at the Top

These comments come just after an open letter, hosted at righttowarn.ai and signed by seven former and four current OpenAI employees, asked OpenAI and the broader AI industry to take steps to ensure safety as AI becomes more widespread and as the march towards artificial general intelligence (AGI) continues.

These steps include not entering into or enforcing non-disparagement agreements, creating an anonymous process for employees to raise concerns to the board, supporting a culture of open criticism, and asking that employees be free to share concerns with the public without fear of retaliation, such as being fired or sued.

Most of the current and ex-OpenAI employees who signed the letter are anonymous, but the named signatories include Daniel Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright, and Daniel Ziegler.

OpenAI: We Want Regulation and Criticism

In direct response to the letter and its call for government regulation of the AI industry, OpenAI said:

“We agree and were among the first in our industry to call for government regulation. We regularly engage with policymakers around the world on this topic and are encouraged by the progress being made.”

OpenAI pointed out that it has signed on to multiple government-led voluntary commitments, including the recent AI Seoul Summit 2024 Frontier AI Safety Commitments and the White House Voluntary AI Commitments.

When it comes to the ability of current and former employees to share their views publicly, the spokesperson said:

“We have released all former employees from their non-disparagement [agreements]. We have also removed a non-disparagement clause from our standard departure paperwork.


“We have built avenues for employees to raise their thoughts through leadership office hours, an anonymous integrity hotline, Q&A sessions with the board, and ongoing communications with their managers.”

However, the spokesperson did suggest that former employees should be cautious when talking about the company’s technology in public.

“While we believe it is important for former employees to be able to express their views on the company, including critical ones, we believe this dialogue must occur in a way that does not jeopardize the security and confidentiality of the technology we are building.

“This includes protecting it from disclosure to bad actors who want to harm our country, our users, and our company.

“OpenAI and its employees have an ongoing responsibility to protect the technology we are building, including from unauthorized disclosures that could confer benefits on malicious actors, including countries who seek to overtake the United States’ leading edge in this technology.

“Our newly formed Safety and Security Committee, led by members of our board of directors, is aware of specific concerns raised by former employees.

“The committee will carefully review feedback during their 90-day recommendation process and publicly share an update on adopted recommendations in a manner that is consistent with safety and security.”

The spokesperson added:

“Having safe and reliable systems is critical to our business model. Our tools are used by 92% of the Fortune 500. They would not subscribe if our models were not safe, not capable.


“We don’t sell personal info, don’t build user profiles, and don’t use that info to target anyone or sell anything.”

The Bottom Line

Commentators often suggest that the rise of AI is as powerful and world-changing as the arrival of the printing press or the Industrial Revolution.

Those who have experienced its power, whether through the time it saves on tedious tasks or the intellect it often seems to display when working through answers, may tend to agree.

Right now, AI may ‘simply’ be a mix of generative AI and large language models (LLMs) — a ‘stochastic parrot’. But it is seeping into the mainstream, and progress, particularly towards AGI, is only likely to get faster and faster.

How we handle this extraordinary Pandora’s Box is crucial — these few years are likely to be as transformative as the decades-long switch from analog to digital or the decade-long switch to an internet world.

OpenAI gives a decent response to the claims levied against it, but the complaints of the current and ex-researchers do need to guide this chapter. We are lifting the lid of Pandora’s Box, and as the myth goes, it’s not one that humans can close by themselves.

The story also goes that while the box brings great troubles and misfortune, it brings hope, too. We need to keep the creation of AI and AGI pointed squarely towards the latter.

Like it or not, AI is changing our world — we look to the stewards of the technology to treat that power with great responsibility.
