
Q&A with Jason Pruet

Kyle Dickman, Science Writer


A conversation about AI for science with Jason Pruet, Director of the Laboratory’s National Security AI Office.

March 31, 2025


Jason Pruet is working with teams across the Laboratory to help prepare for a future in which artificial intelligence will reshape the landscape of science and security. Five years ago, he viewed AI as just another valuable tool, but recent advances in the power of large AI models have changed his mind. He now sees the technology not as a tool but as a fundamental shift in how scientists approach problems and make discoveries, one that will be broadly disruptive. The global race now underway is over how to harness the technology's potential while mitigating its harms.

1663:  This year, the Lab invested more in AI-related work than at any point in its history. You’ve spoken about government investment in AI in terms of returning to a post–World War II paradigm of science for the public good. Can you expand on that?

JP:  Before World War II, the government wasn’t really involved in science the way we think of it today. But after WWII, Vannevar Bush, a key figure behind the Manhattan Project, laid the groundwork for permanent government support of science and engineering. I’m paraphrasing here, but he had this beautiful quote where he said, “Just as it’s been the policy of the government to keep the frontiers of exploration open for everyone, so it’s the policy of the government that the frontiers of knowledge are open for everyone.”

That uniquely American idea helped build the American Century. After the war, Los Alamos leadership realized that the future of security and science depended on the ability to study energetic particles and nuclear reactions. The problem was that no university could do it because they didn’t have the means to build these giant machines. And the Lab couldn’t do it without the support of the universities, so they made a deal where the Atomic Energy Commission would pay for these giant facilities, like the Stanford Linear Accelerator Center. Without that kind of infrastructure, the country had no credible way of being a scientific superpower anymore. 

For a variety of reasons, government support for big science has been eroding since then. Now, AI is starting to feel like the next great foundation for scientific progress. Big companies are spending billions on large machines, but the buy-in costs of working at the frontiers of AI are so high that no university has the exascale-class machines needed to run the latest AI models. We’re at a place now where we, meaning the government, can revitalize that pact by investing in the infrastructure to study AI for the public good.

1663:  That’s a fascinating parallel. You mentioned the massive infrastructure required for cutting-edge AI research. Is that something universities can collaborate on with Los Alamos?

JP:  Exactly. Part of what we’re doing with the Lab’s machines, like Venado—which has 2,500 GPUs—is giving universities access to that scale of computing. The scale is just completely different. A typical university might have 50 or 100 GPUs.

Right now, for example, we have partnerships with the University of California, the University of Michigan, and many other universities where researchers can tap into this infrastructure. That’s something we want to expand on. Having university collaboration will be critical if the Department of Energy is going to have a comprehensive AI program at scale that is focused on national security and energy dominance.

1663:  What changed in the last few years to enable this rapid progress? Is it just bigger models, or is there something else at play?

JP:  One of the biggest shifts came from a 2017 paper called “Attention Is All You Need,” written by a small group of Google researchers. This paper introduced the transformer architecture, which allowed for a huge leap in how we could scale AI models. It turns out that the bigger the model, the better it performs. Transformers also allow for mixing different types of information—text, images, equations—all within a single framework. 
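For readers curious about the mechanics behind that leap, the sketch below shows the transformer’s core operation, scaled dot-product attention, in a few lines of Python with NumPy. It is a minimal illustration added for clarity, not code from the interview or from any Laboratory system; the function name and toy dimensions are invented for the example.

```python
# Minimal sketch of scaled dot-product attention, the central operation of the
# transformer architecture introduced in "Attention Is All You Need" (2017).
# Illustrative only: names and sizes here are made up for this example.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Blend the value vectors V according to how well each query matches each key.

    Q, K, V: arrays of shape (sequence_length, d_model). Every position attends
    to every other position, which is part of what lets transformers scale up
    and mix different kinds of tokens within a single framework.
    """
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over keys
    return weights @ V                                   # weighted sum of values

# Toy usage: a "sequence" of four tokens, each an 8-dimensional embedding.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)              # self-attention: Q = K = V
print(out.shape)                                         # (4, 8)
```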

1663:  Has your perspective on AI shifted since the development of the transformer model?

JP:  Definitely. There was a time when I wouldn’t have advocated for government investment in AI at the scale we’re seeing now. But the weight of the evidence has become overwhelming. Large models, so-called frontier models, have shown extraordinary capabilities, with recent advances in areas as diverse as hypothesis generation, mathematics, biological design, and complex multiphysics simulations. The potential for transformative impact is too significant to ignore.

1663:  Can you describe where we are on the trajectory of AI development?

JP:  That’s a tricky question because we really don’t understand the full potential of this technology yet. The AI community uses different benchmarks to test its capabilities: benchmarks for math, verbal reasoning, symbolic reasoning, theory of mind, even the bar exam. Over the last two years, we’ve more or less run out of benchmarks where AI isn’t better than humans. One exception is a particular class of abstract reasoning, though I’d be surprised if that benchmark doesn’t also fall in the next year.

The basic problem with this technology is that we don’t fully understand it. We have no predictive ability to say, “Oh, if I do 10 times more compute, then this will happen.” My colleague Juston Moore has emphasized that there would be a great strategic advantage for the nation that first develops an ability to scientifically understand AI models. Absent that, there’s a deep argument within the community about whether we’re near the top or the bottom. What is clear is that the speed of progress is faster than anyone could have predicted, and there’s no indication that it will slow anytime soon.

1663:  Is it fair to think of this moment as an international arms race over AI?

JP:  I want to start with a historical analogy. In the industrial revolution, the focus shifted from humans and animals doing manual labor to building machines for mechanical labor. In the AI revolution, we’re going from cognitive labor being done by humans to cognitive labor being done by machines. If you’ve played with the most recent AI tools, you know: They’re very good coders, very good legal analysts, very good first drafters of writing, very good image generators. They’re only going to get better. Viewed through the lens of machines becoming the basis for the next generation of cognitive labor, it’s obvious what the strategic significance of these tools is.

So yes, it’s becoming more likely that AI will be a means by which nations gain strategic and potentially decisive advantages. China’s leadership sees AI as a general-purpose technology, much like electricity or the internal combustion engine: something that can drive progress across many areas of life, from the economy to defense. Within three to six months of DeepMind’s AlphaGo defeating the Chinese grandmaster Ke Jie in May 2017, China’s government launched a national AI strategy. What China took from that moment was that, for the first time in human history, there is a technology that can deliver strategic dominance through subtle means.

Let me give you another example of how AI is already changing power structures. I have a friend, Chuck Mielke, who works here at the Lab and often reviews papers written by researchers from rural Chinese universities. For years, he’d get these papers, and they were so hard to get through, so hard to understand. Then, suddenly, new AI technologies came out, and the papers he received were beautifully written and argued. For decades, the U.S. has had a structural advantage because English is the dominant language in scientific literature. AI eroded that overnight. 

All that said, I’m increasingly uncomfortable viewing this through the lens of a traditional arms race. Many thoughtful and respected people have emphasized that AI poses enormous risks for humanity. There are credible reports that China’s leadership has come to the same view, and that internally, they are trying to better balance the potential risks rather than recklessly seek advantage. It may be that the only path for managing these risks involves new kinds of international collaborations and agreements.

1663:  What’s the message you want scientists at the Lab to take away from all of this? 

JP:  My sense is that most of our researchers have a deep appreciation of the significance of these technologies. Whether it’s for scientific research, manufacturing, operations, or even legal and public affairs, AI is going to be the driving force behind how we do things moving forward. This isn’t just a tool; it’s a fundamental shift in how we approach problems and make discoveries. We’re at a point now where the potential of the technology has been demonstrated to such an extent that the question is really about the pace of adoption. The recent release by OpenAI of new models capable of pretty good step-by-step reasoning only reinforces this view.

1663:  How does that make you feel?

JP:  Like we’re behind. The ability to use machines for general-purpose reasoning represents a seminal advance with enormous consequences. This will accelerate progress in science and technology and expand the frontiers of knowledge. It could also pose disruptions to national security paradigms, educational systems, energy, and other foundational aspects of our society. As with other powerful general-purpose technologies, making this transition will depend on creating the right ecosystem. To do that, we will need new kinds of partnerships with industry and universities.

1663:  Do you think we’re heading toward a future where every country will develop its own AI capabilities?

JP:  For nations that can afford it, yes. Countries like the UK, China, and France have already been clear about their intentions to develop their own sovereign AI capabilities. The United Arab Emirates invested billions of dollars from its sovereign wealth funds because it recognized the significance of this technology. It’s like—you can’t have somebody else control your oil, you can’t have somebody else control your food, and now, you can’t have somebody else control your AI.

To get back to the Vannevar Bush thing we were discussing earlier: One could say, “Why don’t we just let private industry build these giant engines for progress and science, and we’ll all reap the benefits?” The problem is that if we’re not careful, it could lead us to a very different country than the one we’ve been in. We certainly need to partner with industry. Because they are so far ahead and are making such giant investments, that is the only possible path. But at the same time, we’ll need to figure out how to preserve Vannevar’s pact.

People also ask

  • What are some of the risks of AI? The doomsday scenario is that AI becomes more powerful than humans and poses an existential risk to humanity. Far more likely, nearer-term concerns include job replacement, privacy breaches, and threats to security. All of these risks and more factored into Los Alamos National Laboratory’s decision to invest heavily in understanding and shaping the technology to prevent harm and help realize its vast potential for improving the world.
  • How can AI help in my job? From automating processes to drafting memos or optimizing supply chains—AI can help accelerate many aspects of a wide variety of modern jobs. As one expert at Los Alamos National Laboratory puts it, “AI isn’t just a tool; it’s a fundamental shift in how we approach problems.” The task ahead for many workers will be adopting it quickly, smartly, and securely.