A Roadmap for Federal AI Legislation
Protect People, Empower Builders, Win the Future
Debates in Washington often frame AI governance as a series of false choices: they pit innovation against safety, progress against protection, federal leadership against the rights of states. But at a16z, we believe these are not binaries. In order for America to realize the full promise of artificial intelligence, we must both build great products and protect people from AI-related harms. Congress can and should design a federal AI framework that protects individuals and families, while also safeguarding innovation and competition. This approach will allow startups and entrepreneurs, who we call Little Tech, to power America’s future growth while still addressing real risks.
At a16z, we take a long view. Our funds have a 10- to 20-year life cycle, which means we care about investing in trustworthy products, strong businesses, and durable markets that will still be thriving years from now. Pursuing short-term valuations at the expense of sustainable tools and healthy markets is bad for the founders we invest in, bad for the investors who trust us with their capital, bad for our firm, and most importantly, bad for the American people and businesses. A boom-and-bust cycle that results in AI products that are insecure, unsafe, or misleading would be a failure, not a triumph.
Federal AI legislation should lead us in a different direction, where AI empowers people and delivers social and economic benefits. Smart regulation is essential to ensuring that AI can help our society thrive in the long run and that American startups can compete on the global stage. If there’s one thing central to this vision, it is competition: without competition, consumers get worse products, slower progress, and fewer choices. And Little Tech is central to competition: without startups, large, deep-pocketed incumbents will control the market.
The vision is clear. The question is how to achieve it.
A critical first step is enacting federal AI legislation that sets a clear standard for AI governance. We’ve written and discussed what good AI policy looks like for Little Tech, but with both Republicans and Democrats now calling for congressional action, it’s time to put the key elements in one place. The nine pillars below translate that work into a concrete policy agenda that can keep Americans safe while keeping the U.S. in the lead.
Punish harmful uses of AI.
Protect children from AI-related harms.
Protect against catastrophic cyber and national security risks.
Establish a national standard for AI model transparency.
Ensure federal leadership in AI development, while protecting states’ ability to police the harmful use of AI within their borders.
Invest in AI talent by supporting workers and educating students.
Invest in infrastructure: compute, data, and energy.
Invest in AI research.
Use AI to modernize government service delivery.
1. Punish harmful uses of AI
AI should not serve as a liability shield. When bad actors use AI to break the law, they should not be able to hide behind the technology.
If a person uses an AI system to commit fraud, they have still committed fraud. If a company deploys an AI tool that discriminates in hiring or housing, civil rights law should apply. If a firm uses AI in ways that are unfair or deceptive, that conduct should remain within the reach of state and federal consumer-protection law. The core principle is simple: AI is not a “get out of jail free” card.
A federal AI framework should make that principle explicit:
Ensure that criminal codes, civil rights statutes, consumer protection law, and antitrust law apply to cases involving AI. In many of these areas, states and the federal government have overlapping jurisdiction. In consumer protection law, for instance, the Federal Trade Commission enforces prohibitions on unfair and deceptive trade practices (UDAP), while many state attorneys general also enforce their own UDAP statutes.
Direct the Justice Department and other state and federal enforcement agencies to map how those tools work in AI-related cases, identify gaps, and recommend targeted fixes where necessary. If existing bodies of law do not account for certain AI use cases, Congress may need to step in to fill those gaps. Any new law that targets AI-related harms should focus on marginal risk and rely on evidence to identify the gaps that need to be filled and the best way to fill them.
Provide agencies with the resources—budget, headcount, and technical expertise—to actually bring these cases. Public-private partnerships may also be valuable in providing technical expertise, ensuring that prosecutors can enforce existing law and that judges can recognize AI-based violations when they occur.
Of course, prohibiting people and companies from using AI as a liability shield does not mean that they should be unable to defend themselves. Defendants should still be permitted to raise any defenses available by statute or at common law, and in negligence cases, judges should still take account of whether defendants adopted good-faith measures and safeguards—consistent with applicable best practices for their industry and company size—in determining legal liability.
2. Protect children from AI-related harms
AI can harm anyone, but children are uniquely vulnerable. Minors may be less equipped than adults to protect themselves, and when harms occur, the consequences may be more severe. Because of these vulnerabilities, lawmakers should consider enacting additional protections for children.
As with other online services, children under the age of 13 should be prohibited from using AI services absent parental consent. Notably, because of the challenges of obtaining consent, most technology services prohibit use by younger children entirely. Older minors—anyone 13 to 17 years of age—using AI tools should receive additional protections when providers know that users are minors.
In those cases, providers should offer parents meaningful controls: the ability to adjust privacy and content settings; to impose usage limits or blackout hours; and to access basic information about how a tool is being used. Providers should also present minors with clear disclosures about what the system is and what it is not: that it is AI, not a human; that it is not a licensed professional (for instance, not a licensed mental health care provider); that it is not intended for crisis situations such as suicidal emergencies; and that it is not a replacement for licensed mental health care.
In imposing these requirements, lawmakers should be careful to avoid blanket prohibitions on minors’ ability to use AI. As California Governor Gavin Newsom said when he vetoed a misguided proposal that would have severely constrained minors’ ability to access and use AI products, “We cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether.” Lawmakers should not mistake disempowerment for protection, and any rules should stay within constitutional bounds.
Lawmakers should also require providers to develop protocols for how they handle certain situations, such as instances where a user expresses suicidal ideation or a desire to self-harm. These protocols should describe how providers will refuse to help users harm themselves—including by refusing to provide information about methods of suicide—and how they will refer users in crisis to suicide prevention resources.
Beyond these responsibilities, lawmakers should also consider ensuring that civil and criminal penalties can be imposed in cases involving harm to minors, such as if AI is used to solicit or traffic a minor. Similarly, prohibitions on assisted suicide should not permit exemptions for cases involving AI.
3. Protect against catastrophic cyber and national security risks
Federal legislation should also improve the government’s understanding of AI’s marginal risks in high-stakes domains like national security. One option is to direct a technical, standards-focused federal office to identify, test, and benchmark capabilities that bear on national security—like the use of AI in chemical, biological, radiological, and nuclear (CBRN) attacks or the ability to evade human control. That work should involve consultation with independent experts and AI researchers to understand existing risks and to establish assessment procedures. Building this type of measurement infrastructure will help ensure that policy responses are proportionate: capabilities should be managed based on evidence, not headlines. The same evidence-based approach should guide how policymakers think about AI’s role in both offensive and defensive cyber operations.
AI is poised to enhance the ability of nation-states, transnational cybercrime organizations, and lone wolves to launch attacks at greater scale and with greater sophistication. As AI technologies become more accessible, they allow even those with minimal technical skills to carry out sophisticated attacks on critical infrastructure. As a result, while only the most sophisticated nation-state actors and cyber-criminal organizations engage in such attacks today, AI could allow a greater number of nation-state and other threat actors to do so in the future. But unlike some technologies that create asymmetric offensive and defensive capabilities, AI does not create net-new incremental risk, because it enhances the capabilities of both attackers and defenders. A federal framework must empower, not hamstring, the defensive use of AI. Limiting our defensive strategies can create artificial asymmetries that make it easier for attackers to target critical infrastructure.
Information sharing among AI companies about the potential misuse of models for cybercrime is a critical countermeasure against cyberattacks, but antitrust concerns can limit how much information is shared. Targeted exceptions that permit such sharing where necessary are therefore an essential safeguard. The financial system is particularly exposed to cyberattacks because of the central role it plays in monetizing such activity. Yet financial institutions are hamstrung by archaic model validation rules that frustrate the implementation of AI defenses. Legislative and regulatory changes should be enacted to remove these barriers. Finally, the government should procure and deploy state-of-the-art defensive AI solutions.
4. Establish a national standard for model transparency
Transparency can help people make informed choices about the AI products they use. Just as nutrition labels provide basic information that gives consumers the ability to make good choices about the food they eat, disclosing a set of “AI model facts” can help people make good choices about how they use AI models.
At the same time, government mandates that require companies to disclose information can present challenges. Government-imposed disclosure rules face constitutional constraints: they may be unconstitutional if they compel companies to disclose information that is not purely factual and uncontroversial, or if they are unduly burdensome. Overly broad or onerous mandates are especially challenging for Little Tech, which cannot absorb compliance costs the way large incumbents can; for startups, burdensome disclosure requirements threaten their very ability to compete. As Jennifer Pahlka, a former White House deputy chief technology officer, has written, “paperwork favors the powerful.”
Mandates might also be problematic if they fail to provide consumers with useful information. Transparency for transparency’s sake adds costs without adding value. Lawmakers should design any transparency obligation with people in mind: what information enables people to make decisions that are consistent with their preferences?
If the goal is transparency that is useful for consumers, lawful, and not unduly burdensome for startups, then lawmakers should consider requiring developers of base models to disclose the following information:
Who built this model?
When was it released and what timeframe does its training data cover?
What are its intended uses and what are the modalities of input and output it supports?
What languages does it support?
What are the model’s terms of service or license?
Less powerful models should be exempted from this requirement, and disclosures shouldn’t require a company to reveal trade secrets or model weights.
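To make this concrete, one way such a disclosure could work in practice is as a short, machine-readable “model facts” file published alongside a model release. The sketch below is purely illustrative: the field names and example values are hypothetical, not a proposed statutory schema, and a real disclosure would follow whatever format lawmakers, regulators, and developers ultimately settle on.

```python
import json

# Hypothetical "AI model facts" disclosure covering the questions listed above.
# Field names and values are illustrative only, not a proposed statutory schema.
model_facts = {
    "developer": "Example AI Labs",                    # who built this model
    "release_date": "2025-06-01",                      # when it was released
    "training_data_cutoff": "2025-01-31",              # timeframe the training data covers
    "intended_uses": ["general-purpose chat", "code assistance"],
    "input_modalities": ["text", "image"],              # modalities of input it supports
    "output_modalities": ["text"],                      # modalities of output it supports
    "supported_languages": ["en", "es", "fr"],
    "terms_of_service_url": "https://example.com/terms",
    "license": "Example Community License v1.0",
}

# Publishing the disclosure as JSON lets consumers, researchers, and regulators
# read it programmatically as well as on a product page.
print(json.dumps(model_facts, indent=2))
```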
5. Ensure federal leadership in AI development, while protecting states’ ability to police harmful use of AI
Recent debates about AI governance often present federal and state roles as mutually exclusive: either the federal government has sole authority to regulate AI because it involves interstate commerce, or states have unbounded authority to regulate AI because states are laboratories of democracy and Congress has not yet enacted comprehensive AI legislation.
Neither extreme captures how the Constitution allocates power between state and federal governments: both states and the federal government have important roles in regulating AI. Congress should craft rules that govern the national AI market, while states should regulate harmful uses of AI within their borders. That means that Congress should take the lead in regulating model development, since open source and proprietary tools will necessarily travel across state lines. It also means that states should have the ability to enforce their own criminal and civil laws to prohibit harmful uses of AI in areas like consumer protection, civil rights, children’s safety, and mental health. And in some areas that traditionally fall within the domain of state lawmakers, like insurance and education, states may take the lead.
A federal framework can clarify these respective roles by expressly establishing congressional leadership in regulating AI development, while including safe harbors that preserve states’ ability to regulate AI use and to adjudicate tort claims.
Clear rules help in both directions. Developers get predictable rules for building and deploying models, and states maintain the tools they need to protect residents from concrete harms.
6. Invest in AI talent by supporting workers and educating students
Realizing AI’s economic and social potential requires an AI-ready workforce. That means supporting our workers and students in making the transition to an economy where success depends on AI skills, just as it depends on internet skills today.
Supporting the transition to an AI-ready workforce includes several components:
Supporting workforce development initiatives that train workers to use AI technologies, including by funding reskilling and upskilling programs and by forming partnerships with the private sector to offer industry-recognized certifications and clear on-ramps to jobs.
Establishing public-private partnerships to create opportunities for AI-ready workers and to support curriculum development modeled on relevant real-world skills.
Creating programs that provide certifications, apprenticeships, and internships to close the gap between classroom learning and practical, employable skills. Lawmakers should modernize the National Apprenticeship Act of 1937, as the current system wasn’t designed for new technologies like AI.
Implementing AI literacy in K-12 curricula to empower future generations of Americans to succeed in an AI-driven economy, including by strengthening STEM education, introducing age-appropriate machine learning concepts, and promoting responsible use of AI tools.
7. Invest in infrastructure: compute, data, and energy
A federal framework can play an important role in making AI markets more competitive. One option is to establish a National AI Competitiveness Institute (NAICI) that can help lower barriers to entry for entrepreneurs, small businesses, researchers, and government agencies. NAICI could offer access to compute, curated datasets, benchmarking and evaluation tools, and open-source software environments. Shared infrastructure of this kind reduces redundancy and gives smaller projects a credible way to experiment, iterate, and grow.
Open data sets might be particularly valuable. NAICI users could have access to open repositories of non-personal data, and the government could ensure that these data sets include access to government-funded research. As part of this initiative, the government could prioritize making its own data sets available for AI training and research, where lawful and appropriate, and could create an “Open Data Commons” of data pools that are managed in the public’s interest.
Energy is another structural constraint. Large-scale AI models are compute- and energy-intensive, so a federal framework should help to increase energy abundance, while also ensuring that startups are not priced out or crowded out. Energy policies should be structured so that neither consumers nor Little Tech is saddled with the costs of hyperscalers’ energy needs without seeing commensurate benefits.
8. Invest in AI research
The relationship between academic research and AI product development has always been tight. Breakthroughs in universities and public labs often seed the companies and tools that define each new generation of technology. Supporting that research is therefore critical to long-term innovation in both the public and private sectors.
Government support should prioritize foundational and disruptive AI research. That could include dedicated funding streams for moonshot projects—high-risk, high-reward efforts that challenge current paradigms—and a balanced portfolio that spans near-term, medium-term, and long-term horizons.
Promising topics range widely: how to design effective worker-retraining programs for an AI-intensive economy; the role of open-source tools in promoting competition and security; the use of AI to defend against cyber threats; and the potential for AI to improve the delivery of government services. Structured, public research on these questions can inform policy and shape more effective products.
To maximize impact, federal grants should, where possible, require that non-sensitive research data be shared in machine-readable formats under licenses that permit AI training and evaluation. Making this research available will turn public funding into public infrastructure.
9. Use AI to modernize government service delivery
AI has the potential to improve how the government operates and how it delivers services to the public. Each federal agency should develop a clear, time-bound plan for how it will use AI to improve operations—both by enhancing impact and lowering costs—while maintaining public trust.
As part of this plan, agencies should conduct regular assessments of their workflows to identify where AI can automate routine tasks, improve analysis of large datasets, and support better decision-making. In some cases, agencies may need to procure AI tools to assist with these modernization efforts. Any procurement process should be designed so that it is accessible to Little Tech, and should not prohibit the acquisition of open-source tools where appropriate.
Agencies should also implement pilot projects that allow them to test and evaluate AI tools in specific functions before deploying them at scale. These pilots should include clear metrics for evaluating impact. Where appropriate, agencies should consult with external experts on design, implementation, and evaluation of these pilot projects.
Any internal government use of AI should adhere to usage policies promulgated by the Office of Management and Budget. These policies should be updated regularly to reflect lessons learned from pilots, agency implementations, and evolving technical and legal standards.
A call to action: Congress should enact federal AI legislation
The time for congressional action is now. Millions of Americans use AI regularly, and there is an increasingly broad consensus that this technology has the power to benefit our economy and society. We know the U.S. must win the global AI race. Americans want their representatives to act to create a safe, thriving market, one that positions America to lead the world in AI.
Inaction poses its own risks. Staying on our current path will produce AI markets that are less competitive and more concentrated, leaving people with AI products that are worse and less innovative.
Congress doesn’t have to decide between protecting people and protecting competition. With the right priorities and policies in place, it can do both: create a comprehensive framework to protect kids and adults from the harms of AI, while keeping the door open for new entrants to build, innovate, and succeed.