Hopes and Fears for President Trump’s AI Action Plan

By Nicholas Garcia
July 22, 2025

When Donald Trump returned to the White House buoyed by the support of tech billionaires, one of the first orders of business was rescinding the Biden administration’s executive orders on artificial intelligence. Tomorrow, President Trump intends to announce a new AI action plan, likely to be accompanied by new executive orders of his own. This “Winning the AI Race” action plan will be influenced by more than 10,000 responses the White House Office of Science and Technology Policy (OSTP) received to its request for comments. Public Knowledge was one of those commenters, writing in the hope that the Trump administration would adopt policies in accord with its stated goals “to promote human flourishing, economic competitiveness, and national security.”

However, a survey of policy proposals from other commenters, the actions of the Republican Congress, last week’s press reporting, and rhetoric from key Trump administration advisors give us reason to fear that President Trump’s plan will fail to meet those ambitions and instead simply cede control of AI’s future to the private sector. That would be a profound failure of leadership at a pivotal moment in the development of this critical technology. In this post, we review what we still hope President Trump’s action plan might contain, while describing what there is to fear in an action plan that sells out the American people.

Public Knowledge’s Priorities for an AI Action Plan

Ideally, the White House would adopt the priorities we have assessed as most important for delivering an innovative and competitive AI ecosystem. In our comments, Public Knowledge encouraged the administration to focus on four priorities:

  1. Protect the rights to read and learn that make AI training possible.
  2. Support open-source, collaborative research and development.
  3. Build and maintain public physical and digital AI infrastructure to prevent monopolization and private enclosure.
  4. Develop standards and sensible rules around AI explainability and transparency to ensure trust and adoption.

You can read all about why these priorities are so critical in our comments, as well as our other writing about AI training, the importance of open source, the need for Public AI infrastructure, and our approach to AI accountability. We maintain that if America is truly interested in “winning the AI race,” then we need an open, innovative, competitive, and dynamic AI ecosystem that users trust. Without a focus on these four priorities, we are looking at an AI sector dominated by Big Tech, infrastructure projects that line the pockets of crony capitalists, and opaque and unsafe AI systems that continue to attract suspicion and cause harm.

Winning the AI race is only important if the American people are the winners. Winning the AI race should mean shared prosperity, AI that reflects our diverse and pluralistic free society, and technology that we can understand and trust. Racing ahead blindly, without thought to consequence or direction, is not the path to winning anything.

Holding Out Hope

Obviously, the Trump administration adopting our four priorities would be ideal, but there are a couple of narrower policy choices roughly in line with our priorities that could make it into the action plan. These hopes are based on policy priorities that attracted support from other key commenters and fit with the administration’s innovation-focused agenda.

Hope: Prioritizing AI research and investment in scientific institutions

The most obvious policy priority for winning the AI race would be a focus on supporting AI research and development. To ensure America’s continued leadership and technical edge, the Trump administration should support, fund, and promote AI research. Unusually, AI advancements have emerged from private and nonprofit labs, whereas many other important technologies originated in government-supported research. That means there is both a need and an opportunity to activate our public universities, national labs, and other scientific institutions. At a moment when the Trump administration has been defunding important scientific research and threatening the funding of colleges and universities, this action plan should be an opportunity to reverse course and support science, innovation, and learning.

Commenters across sectors agree that the federal government must do more to strengthen public research capacity and expand access to the resources that power AI innovation. Their recommendations differ in emphasis but share a core conviction: that sustained, well-targeted public investment in R&D is essential for maintaining U.S. leadership, enabling breakthroughs in science and national security, and ensuring that AI development benefits the broader public.

Google emphasizes that “long-term, sustained investments in foundational domestic R&D and AI-driven scientific discovery” have historically given the U.S. a global advantage—and that now is the time to “significantly bolster these efforts.” Google’s comment calls for faster allocation of funding for early-stage research and broader availability of “essential compute, high-quality datasets, and advanced AI models” to scientists and institutions. Lowering these barriers, Google argues, will allow the American research community to focus on innovation instead of resource acquisition. Google also encourages the government to invest in federal prize challenges for unsolved scientific problems, expand partnerships with national labs in key areas like cybersecurity and biosecurity, and make government datasets available for commercial training and experimentation.

Encode similarly advocates for boosting public research institutions, especially through investment in AI for science. Encode points to the critical role that DARPA, Stanford, and other federally supported institutions played in the development of foundational technologies—from neural networks to the internet—and warns that the current “lack of computational resources and access to critical data is stifling innovation” at U.S. universities. Encode’s comment calls for permanently establishing and funding the National AI Research Resource (NAIRR) to provide institutions with the compute, data, software, and training needed to advance. Encode envisions a “superhighway for science” that connects universities, national labs, and industry partners into a coordinated ecosystem—dramatically accelerating the timeline from research to real-world impact.

Georgetown University’s Center for Security and Emerging Technology (CSET) likewise recommends expanding federal AI R&D across universities, national labs, federally funded R&D centers (FFRDCs), and nonprofits. Georgetown’s comment underscores the importance of investment in both technical and non-technical research, as well as the infrastructure and data required to support AI for science, especially in strategic sectors like biotechnology.

The Federation of American Scientists (FAS) emphasizes that while American AI leadership has benefited from private investment, “critical high-impact areas remain underfunded.” The FAS proposes a federal agenda focused on expanding access to data, funding overlooked areas of research, and defining national priority challenges. In particular, it supports scaling the NAIRR from pilot to full program and integrating its resources with other proven government initiatives such as the NIST AI Safety Institute, the AI Use Case Inventory, and the Department of Energy’s Office of Critical and Emerging Technologies (CET). The FAS also calls for the creation of a dedicated AI and Computing Laboratory at DOE, modeled after ARPA-E, to enable rapid procurement, hiring, and academic-industry partnerships.

Together, these proposals offer a clear and coherent roadmap. An action plan that prioritizes AI R&D, expands public access to compute and data, and invests in the public institutions that make scientific progress possible would not only serve national competitiveness but would also embody a public-interest vision of innovation.

Hope: Embracing open source, especially in government procurement

A second priority that we hope might appear is a focus on open source AI. The Trump administration has begun to loosen certain export controls related to AI, apparently driven by concerns about global competition and the adoption of American AI abroad. While we believe that building trust through leadership on accountability would go a long way toward promoting adoption both at home and abroad, another key move would be an embrace of open source AI.

Open source AI offers considerable benefits for security, transparency, evaluation, and accessibility. Promoting the development of a robust open source AI ecosystem, like the robust open source software ecosystem that is ubiquitous and foundational today, would significantly advance the pace of innovation and bolster U.S. competitiveness.

The Trump action plan could embrace open source AI as a preference in government procurement and use. This would leverage the size and significance of the federal government to encourage broader open source development. In addition, it would allow the government to harness the benefits of open source for itself, including cost savings (since the underlying models are free), better transparency, and greater customization and secure processing. There is some reason to hope this message reaches the administration: This idea was embraced by civil society organizations like the Electronic Frontier Foundation and the Open Source Initiative, academic experts like those at Georgetown’s CSET, and even by AI companies and venture capital firms like Andreessen Horowitz.

Meta, which produces Llama (one of the most widely used open weights models), wrote in detail about how the government ought to prefer open source for government use to strengthen U.S. security while saving taxpayer dollars, forgo restrictions on open source release in order to compete with China’s open source models like DeepSeek, and drive innovation and economic prosperity.

Even OpenAI, which has often failed to fulfill the expectations its name would imply with its closed models and secretive data practices, wrote that the U.S. should develop a policy of exporting open-source models to secure American leadership in AI around the world—and it has recently announced that it intends to release an open weights model of its own.

An action plan that embraces open source AI—especially through federal procurement—would align with both the administration’s competitive agenda and the public interest. By backing open source as a strategic asset, the Trump administration could lower barriers to entry, enhance national security, and empower a broader set of innovators to contribute to the future of AI. It would also send a clear signal: that American leadership in AI is not defined by corporate secrecy or closed systems, but by openness, collaboration, and the freedom to build.

Well-founded Fears

While there are reasons to hope for sensible, pro-innovation policies in President Trump’s AI action plan, there are equally strong—and perhaps better founded—reasons to worry. 

In particular, the Trump administration’s ideological agenda threatens to undermine democratic principles under the guise of neutrality and “anti-woke” rhetoric, create legal loopholes for AI companies to avoid regulation, and sell out the American people on AI infrastructure—exactly when we should be getting in on the ground floor of the future.

Fear: A continued un-American attack on diversity and equity under the guise of neutrality

Recent press reports suggest the White House is preparing an executive order targeting so-called “woke” AI models as part of its action plan. According to The Wall Street Journal, “The order would dictate that AI companies getting federal contracts be politically neutral and unbiased, an effort to combat what administration officials see as overly liberal AI models.” But the Trump administration and its allies do not have a record of neutrality or even-handedness. Despite claiming to champion free expression, their policies have repeatedly demonstrated censorship, bias, and discrimination.

We should dispense with the notion, up front, that the government can or should aim for political neutrality in its use of AI. Not only can the government have a viewpoint; in a democracy, it must. A functioning democracy is about building institutions that reflect our shared values—not pretending that neutrality is required when fundamental rights and freedoms are at stake.

Some claim, mistakenly, that the Biden administration acted inappropriately by embracing America’s strength as a diverse nation with a commitment to justice and equality. President Trump repealed former President Biden’s Executive Order on “Safe, Secure, and Trustworthy Development and Use of AI,” which simply required federal agencies to ensure that AI systems adopted by the government respected existing civil rights laws and did not promote bias or discrimination in high-risk sectors like housing and healthcare. But dismantling these policies that promote equity is not a return to neutrality—it is the imposition of a new and narrower ideology.

There is no such thing as a value-free or neutral AI system. All technology reflects choices, priorities, and trade-offs. Design encodes values—just as law and policy do. That’s precisely why leadership in AI matters: We want AI that encodes democratic values, not authoritarian ones. If America hopes to outpace its rivals, especially China, it must do so by building systems that reflect openness, pluralism, and human dignity.

The real concern is that the Trump administration will not stop at repealing Biden-era equity policies—it will replace them with new ideological mandates cloaked in the language of “neutrality” and “fairness.” But we have already seen what this language has been used to justify.

When marking up the “Future of AI Innovation” Act last Congress, Senator Ted Cruz (R-Texas) amended the bill with a so-called anti-woke amendment prohibiting certain policies. Some of the prohibitions at the top of the list seemed sensible (e.g., AI cannot promote the idea that one race or sex is inherently superior to another), but further down the list the amendment showed its hand: It explicitly prohibited policies providing that AI “should be designed in an equitable way that prevents disparate impacts based on a protected class or other societal classification.” It prohibited policies aimed at preventing disparate impacts due to bias in training data, and it even prohibited conducting impact assessments or promoting the use of technology or techniques to “ensure inclusivity and equity in the creation, design, or development of the technology.” The amendment did not pass into law, but it is shocking how blatant and explicit the agenda is here. This is not a desire for neutrality, or a sober disagreement about the values of meritocracy: It is a reactionary attack on the hard-won principles of justice and equality that have long animated America’s best aspirations.

The Trump administration itself has been aggressively dismantling diversity, equity, and inclusion programs since the day President Trump took office. But again, this is not an effort to simply restore “neutrality” or change employment rules: President Trump has repeatedly embarked on a campaign of removing recognition of women and racial minorities from government websites, including heroic veterans like the Navajo Code Talkers. He has also directly attacked the identities of transgender and nonbinary people with an Executive Order that blatantly and unconstitutionally promotes discrimination on the basis of sex. That Order led to a Federal Trade Commission workshop earlier this month that misused Commission authority to undermine and delegitimize well-established medical practices around gender-affirming care, under the guise of consumer protection. This pattern makes it clear that supposedly neutral-sounding efforts are, in fact, a smokescreen for discrimination against vulnerable minority communities. These are not the “American values” we want encoded into AI.

The biggest movie in the nation right now is “Superman,” and I think we can turn to him for some guidance:

In a 1950 anti-bigotry PSA, Superman told schoolchildren that “our country is made up of Americans of many different races, religions, and national origins,” and that when you hear someone speak against a classmate because of who they are, “that kind of talk is un-American.” That message was simple, clear, and patriotic. It still is. Rebranding discrimination as “neutrality” doesn’t make it any less discriminatory. If we want AI that reflects American values, we should build systems rooted in fairness, equality, and the belief that diversity is a strength. Anything less is not just bad technology—it’s un-American.
[Image: a panel from a Superman comic in which Superman addresses a small crowd of teenagers, saying, “...and remember, boys and girls, your school — like our country — is made up of Americans of *many* different races, religions, and national origins, so... if you hear anybody talk against a schoolmate or anyone else because of his religion, race, or national origin—don’t wait: tell him THAT KIND OF TALK IS UN-AMERICAN. Help keep your school All-American!”]

Fear: Creating legal loopholes for AI companies to avoid regulation

Another significant fear is that the Trump administration’s AI action plan will create broad legal loopholes for AI companies—failing to enforce existing laws and preempting new ones—thereby undermining the legal frameworks that protect the public. The Trump administration’s deregulatory posture, combined with recent industry lobbying, raises concern that the action plan will seek to preempt state laws, emphasize voluntary industry standards, and develop so-called “regulatory sandboxes”—carveouts that exempt AI companies from existing consumer protection, civil rights, and safety laws under the pretense of fostering innovation. These light-touch approaches to regulation may each have a place in fostering the growth of an emerging technology, but each needs to be paired with stronger, enforceable rules.

The federal government should not prevent states from stepping in where Congress has fallen short. Recently, Public Knowledge strongly opposed a proposed 10-year moratorium on state AI regulation—a ban so sweeping and extreme that it was eventually defeated 99-1 in the U.S. Senate. Yet the sentiment behind it lingers. Some industry-aligned proposals continue to push for broad preemption of state AI laws without offering any meaningful federal safeguards to replace them. That kind of preemption—where the federal government takes away states’ power to regulate but refuses to do the job itself—is a recipe for disaster. Yes, a confusing patchwork of state rules might eventually become burdensome, but it would be a mistake to jump to broad preemption without a uniform regulatory regime to replace it.

In the meantime, states have long served as laboratories of democracy, often leading the way on issues like consumer protection, environmental standards, and digital rights. If the federal government wants to preempt states with national standards, it should do so—but only if those standards are real, enforceable, and smarter than what states are already doing.

Similarly, developing voluntary industry standards for AI systems through broadly inclusive stakeholder processes may be a good and necessary step, but it cannot be the endpoint. Without meaningful enforcement, standards are suggestions, and AI is too important to let companies simply regulate themselves. In the absence of real rules, we risk replacing an outdated regulatory system with an empty one—one that looks slick and innovative on paper but does nothing to safeguard civil rights, consumer protection, or national security in practice.

Finally, we already have laws on the books, like the Fair Housing Act, state-level privacy laws, and other sector-specific regulations, that can be applied to AI. These laws should be enforced. But there is a danger that AI companies will be allowed to operate in a gray zone, shielded from liability simply because their systems are new or difficult to interpret. That would not be innovation; it would be evasion. To be clear, we do not oppose experimentation, sandboxes, or flexible regulatory tools when used appropriately. But creating loopholes that let powerful firms skirt the law is not sound policy—accountability and liability should rest with the firms that have the resources and ability to best protect the public from downstream harms.

Driving innovation forward requires trust. The American people need AI systems that are safe, fair, and subject to the rule of law.

Fear: Selling out America on AI infrastructure

Last, but not least, there is real reason to fear that the Trump administration is simply going to get suckered into making bad deals that sell out this moment of opportunity. Analysis of the comments submitted to the OSTP indicates that building AI infrastructure—from data centers to power generation capacity to electrical transmission lines—emerges as a “near universal concept amongst Big Tech firms.” Given the administration’s recent cheerleading for billions of dollars invested in Pennsylvania, it seems likely that there will be significant focus on infrastructure in the AI action plan. Yet if the Republican budget bill, with its devastation of green energy and its handouts to oil and gas companies, is any indication, there is reason to fear that President Trump will let the opportunity to affirmatively invest in smart, sustainable, critical public infrastructure slip through his fingers.

Public Knowledge has been outspoken about our support for Public AI infrastructure, including in our comments to the OSTP under both former President Biden and President Trump. And in this proceeding, plenty of other commenters joined in as well, for example: The Open Markets Institute wrote about the critical competitive advantages public utility regulation and public infrastructure would provide; Mozilla wrote in explicit support of Public AI infrastructure like the NAIRR and the Department of Energy’s Frontiers in Artificial Intelligence for Science, Security and Technology (FASST) initiative; Encode wrote in support of NAIRR; and a network of academics highlighted the national security case for publicly owned AI tech stack components. 

Despite all this support for using public dollars to invest in public infrastructure, President Trump’s AI action plan could wind up instead focusing on infrastructure strategies that only cater to the private sector, even going so far as to give away federal dollars or resources without any return for the American people. That would be a monumentally bad deal.

When it comes to energy infrastructure, there seems to be another bad deal brewing: It appears the Trump administration plans on boosting dirty energy projects to the exclusion of green ones. This is an environmental and climate change issue to be sure—AI data centers have significant energy needs, and civil society commenters have warned the OSTP about those dangers. But this is not just about sustainability: If you share the belief that AI success will demand massive amounts of power, then we should be using everything at our disposal to meet that demand. We need massive investments in wind, solar, geothermal, and nuclear power to prepare for a high-tech future. This should not be a partisan issue: Texas—a traditional bastion of the oil and gas industry—has brought online more solar power than any other state. China is seizing more and more of the solar energy market; preparing for AI by investing in American dominance across the energy sectors of the future is the best move our nation could make. Yet instead, President Trump’s action plan may sell out the possibility of building critical energy infrastructure to appease cronies in the oil and gas industries.

A People-powered Plan

At Public Knowledge, we believe that technology policy must begin and end with the public interest. Last year, in our post “Putting the Public First” in response to the Senate’s AI legislative roadmap, we laid out a vision rooted in openness, accountability, civil rights, and democratic governance. These are not just abstract principles, but concrete tools to ensure AI systems work for everyone. We don’t believe the government should simply cheer on “innovation” without asking: innovation for whom?

If the Trump administration’s “Winning the AI Race” plan turns out to be what we fear, then we will need strong alternatives rooted in these principles that speak for the people, not just the powerful. That’s why we have also joined with a broad coalition of over 90 tech, economic justice, consumer protection, labor, environmental justice, and civil society organizations in launching the People’s AI Action Plan—a proactive effort to offer a vision for AI that delivers first and foremost for the American people. We are proud to add our voice to this effort, bringing our expertise and adding our vision for a creative and connected future for everyone.

No matter what tomorrow holds, our commitment is clear: We will continue to work with civil society allies, public interest technologists, researchers, industry, and community leaders to advance smart policies that match the vast potential of innovation with the guiding values of the public good. If we stick to that plan, we will build a future where the winners of the AI race are the people.
