By General Joseph F. Dunford

In the U.S. military and intelligence communities, we can’t cut corners. We must equip our highly trained people with the most advanced weapons and the most powerful technology available—because lives, missions, and the defense of freedom depend on it.

So why would we shortchange our troops and analysts by undermining artificial intelligence (AI), the very technology that’s quickly becoming one of the most decisive capabilities of the 21st century?

Today, AI is beginning to deliver meaningful operational gains for the Defense Department (DoD) and the Intelligence Community (IC). AI is being used to simulate battlefield conditions in training, process vast amounts of information in support of decision making and intelligence, enhance cyber defense, enable real-time combat system updates, and field autonomous weapons systems.

If we underinvest in American AI or place heavy-handed restrictions on its development, we don’t just risk falling behind in innovation. We risk falling behind on the front lines.

Unfortunately, that risk is growing. Policymakers of both parties have proposed more than 1,000 AI rules at the state level, many of which, while well-intentioned, could slow the development of powerful, American-made AI models and hand our adversaries a lasting advantage.

Meanwhile, misguided efforts to rewrite longstanding fair use copyright principles or overly restrict the data AI systems can be trained on may sound like minor regulatory tweaks, but in reality, they strike at the heart of how these systems function. And their consequences for national security could be severe.

Why does training data matter so much? Because large language models (LLMs) don’t “reason” like humans—they learn patterns from the data they’ve seen. The broader and more representative the training data, the more accurately they can interpret complex scenarios, spot emerging threats, or generate mission-relevant insights.

For national security missions, that means combining high-quality, publicly available data with mission-specific, classified datasets. While models used by the DoD and IC will most often be specially trained for internal needs, they still benefit enormously from being built atop strong commercial foundations. The goal is simple: avoid creating dangerous “blind spots” in AI models that could delay time-sensitive decisions or miss signals altogether. If U.S. law makes it harder for our own models to learn from publicly available content, while our adversaries face no such limits, we handicap ourselves by design.

Let’s be clear: some guardrails are necessary, especially where AI applications intersect with life-and-death decisions, such as weapons of mass destruction or autonomous targeting systems. But general-purpose AI models, used across commercial and government settings alike, must be trained comprehensively if they’re going to help safeguard U.S. national security and support mission-critical operations.

Case in point: Two leading U.S.-built AI models—Meta’s Llama and Anthropic’s Claude—were recently approved for use in Amazon Web Services’ secure government cloud after meeting the highest federal security standards. These models are now cleared to support sensitive defense and intelligence operations in secure, compliant environments.

This transformation holds enormous promise for U.S. military and intelligence operations—but only if we enable American AI to innovate at the pace required to remain globally competitive. If we stumble, we will jeopardize the very tools and technologies that strengthen our warfighters and analysts—an outcome foreign adversaries like China, Russia, and Iran would welcome.

Beijing has made AI a cornerstone of its military doctrine and is investing more than $1.4 trillion in AI development by 2030. It has launched hundreds of university programs and is aggressively exporting both open- and closed-source models worldwide, all of which are aligned with authoritarian values of surveillance, censorship, and centralized control.

The stakes are high. The country that leads in AI will not only shape the global economy, but also shape the global order. The Administration’s recent AI Action Plan is a strong first step toward meeting that challenge. Now, Congress must codify and expand it to ensure enduring national advantage.

Continue reading at Real Clear Defense.