
Remarks by Commerce Secretary Gina Raimondo at the Inaugural Convening of the International Network of AI Safety Institutes

AS PREPARED FOR DELIVERY

AI is a technology like no other in human history.

Historically, when new technologies emerge, certain tasks are automated, which leads to disruptions in the job market. But over time, workers find new jobs, which ultimately makes us more productive and more prosperous.

But AI is different. AI won't just automate one tool or task. It has the potential to replace the human mind – and with that, all of us and our work. Some experts even worry it could go off the rails and lead to our extinction.

It’s a technology with tremendous potential but also tremendous risk, and people are understandably worried. Workers are worried it will take their jobs. Parents are worried their kids will be harassed with deepfake explicit images. National security experts are worried about AI being applied to bioterrorism.

My primary point today is that we have a choice. We are the ones developing this technology, and we will ultimately decide what that looks like. Why would we choose to allow this technology to replace us, cause widespread unemployment and societal disruption, and compromise our global security?

We have an obligation to keep our eyes wide open to these risks and prevent them from happening.  We cannot let our ambition allow us to sleepwalk into our own undoing.

This isn’t about slowing innovation. But if we allow AI competition to be reckless, our world will be less safe. A single-minded focus on speed and productivity will drive us down a dangerous path. We have learned the painful lesson that unchecked market forces and the blind pursuit of profits do not always lead to good outcomes. 

For example, we learned this lesson with our supply chains, which became brittle as we optimized solely for efficiency and profit. It's true we saw cheaper goods and higher profits, but American workers watched their jobs disappear, their communities crumble, and our national security compromised.

We cannot let the same thing happen with AI.

Just because we can do something, doesn’t mean we should. Advancing AI as quickly as possible without thinking of the consequences is not the right or smart thing to do. Instead, we need to innovate, but always keep two principles in mind.

First, we cannot release models that will endanger people.

Second, we need to ensure AI is serving people, not the other way around.

Here’s how we can do it.

First, governments need to know that AI systems are safe. If we can’t certify an AI system is safe, it shouldn’t be released.

That means we need to rapidly advance the science of AI safety – testing and evaluation. And we need to agree on international best practices and rules of the road.

Advancing the science of AI safety is fundamental, but it is not easy. And while we aren’t where we need to be yet, we – you! – should be encouraged by the progress this international community has made in the last year. A little over a year ago, there were no major government initiatives to advance the science of AI safety. Today, there are 10 members of the International AI Safety Institute Network, with more underway.

In the U.S., the Commerce Department’s AI Safety Institute has been laser focused on one question: how to root out national security and public safety risks from AI. A team of scientists and engineers is developing state-of-the-art methodology for pre- and post-deployment testing, and we’re ensuring we fire on all cylinders to evaluate models.

The Institute is not a regulator, but rather a body of scientific excellence that ensures the work government is doing on AI is smart, cutting-edge, and informed by science. We’ve had some of the brightest minds from industry and civil society come to the Institute to do this important work, and I hope more people will join them.

Government work isn’t as lucrative as the private sector, but it’s just as important, if not more important, to the safety of our country and the success of humanity.

And this work is bigger than politics – it’s not in anyone’s interest for dangerous AI to get into the hands of malicious non-state actors that want to cause destruction and sow chaos. 

I also want to be clear that the Institute’s work will not stifle innovation – it’s essential to it. Safety breeds trust, which speeds adoption and leads to more innovation. It also avoids the risk of overly burdensome regulation from government, which could stifle the industry.

That’s why industry has been leading the way on safety – and why industry has pushed for and supported these safety institutes. The U.S. AI Safety Institute has partnered with OpenAI and Anthropic to do voluntary pre-deployment testing. Together with the UK Safety Institute, they just released the first-ever joint governmental pre-deployment test of an advanced AI model.

We want the work of the Institute to help industry, not hinder it. And to do that, we need your help and want to work with you.

Next, we need to ensure that AI serves people, and not the other way around. We need to take the threats of widespread unemployment seriously.

Part of this is about managing the transition that is coming and being honest about the magnitude of change. As AI becomes more integrated into society, we will need new and innovative worker training and tech access programs to ensure everybody can participate in the prosperity that AI promises.

We can't just talk about "lifelong learning" and retraining; we need to get much more serious about restructuring how we deliver education for people at every step of a long career.

Part of this is about making choices as a society that benefit humans – choosing to use AI to automate tasks in ways that supplement humans, not replace them entirely.

We don’t yet have all the answers. So, part of this is about developing AI at a pace that allows us to adjust without displacing workers or devastating communities – and to think creatively about solutions.

Part of managing this transition is showing people what AI can do for them. We need to work harder to make sure that AI isn’t just safe but will also make people’s lives better.

The flipside of the worst-case scenarios is ensuring that AI supplements us and helps us create a world of incredible abundance. Hundreds of millions of AIs deployed everywhere, helping to engineer solutions to the world’s hardest problems.

For example, right now, it takes years of testing, billions of dollars, and huge numbers of leading researchers to develop a new cancer drug – and 97% of these drugs don’t even make it out of clinical trials. AI can be used at almost every point of the drug development process to accelerate the work medical professionals are doing and make it cheaper.

I want to pause for a moment to underscore just how big this would be. If AI can help us discover cures for cancers even one year sooner than we could now, we could save millions of lives all over the world. And these medicines would be much more affordable.

And we shouldn’t use AI to replace doctors, but to help them – from interpreting ultrasounds and CT scans so that diseases are caught and treated earlier, to performing administrative tasks so doctors can spend more time with patients. It can make health care cheaper and more accessible, so that the best quality care isn’t only available in a small number of cities around the world.

The same can be said for education. Teachers have one of the most important jobs in our society. And as a former Governor, I know that teachers spend too much time on administrative paperwork. AI could reduce that burden so they can spend more time with students and tailor education to those students' needs.

There are so many ways technology can make our lives better, healthier, and more affordable. But only if we build AI in a way that enhances people’s lives.

We are still early in AI’s development, and we have decisions to make. Will this be the technology that pushes humanity to a new age, supercharging productivity, creating new jobs, and bringing benefits to families? Or will it be the technology that pushes humanity aside, leaving millions without work and radically worsening our current income inequality?

My challenge to everyone here is to think critically about what we want the impact of AI to be and how our actions contribute to the development of AI. At every step along the way in the development of AI, we need to think about whether and how what we’re doing will help working people here in America and all over the world. You must develop a technology that helps working people reach their full potential – not one that pulls the rug out from under them.

The people in this room and at companies here in this city and across the world will develop the defining technology of this century and write the rules of the road for how it is used. You must do that deliberately and at a pace that works for all humanity – ensuring this incredible technology makes all of us safer, healthier, and more productive.

Thank you.
