What AI Policy Can Learn From Cyber: Design for Threats, Not in Spite of Them

By Camille Stewart Gloster and Afua Bruce

If you want to understand why regulatory guardrails can supercharge, not stifle, technological innovation, don’t look to theory. Look to cybersecurity. The field is, by definition, mission-critical: cybersecurity keeps our technical infrastructure resilient, protects financial institutions, and allows both individuals and businesses to leverage the internet safely. Cybersecurity methods must evolve quickly, or our critical infrastructure could be at risk.

For decades, cybersecurity has been a proving ground for innovation despite many constraints. It has faced decentralized architectures, hostile threat actors, a fragmented policy landscape, and sprawling systems beyond any one entity’s control. As with many technical fields before it, these challenges didn’t paralyze progress — rather, they drove people to invent new technologies and methods. And the innovations that emerged, like zero trust architecture, weren’t built in spite of policy pressure and hard constraints. They were built because of them.

While many are hailing the recent decision to strike the 10-year moratorium on state AI laws from the Senate’s budget bill as a step in the right direction, it’s far from the end of the debate. The instinct to preempt state action remains strong in Republican-controlled Washington, often cloaked as a desire to avoid a “patchwork” of regulation. But that patchwork, messy as it may be, is often where the real progress begins. Good policy doesn’t just keep bad tech in check. It makes better tech possible. And the antidote to a counterproductive patchwork is a federal baseline that sets a clear and consistent standard.

As we navigate a perception in Washington that guardrails stifle innovation, we should ask: What did we learn from cybersecurity? The answer should be obvious. Innovation didn’t die because of oversight. It flourished under it.

Cybersecurity is a case study in how innovation thrives not in regulatory vacuums but in thoughtfully constrained, collaborative ecosystems. Privacy and security policies, such as the California Consumer Privacy Act, were written because policymakers, practitioners, community advocates, consumers, and businesses all acknowledged what was at stake. They recognized that as we as a society became increasingly reliant on technology, we needed guardrails to direct the development of tools.

Consider the golden child of modern cybersecurity: zero trust architecture. In the old model of network security, anyone who got inside a computer system’s digital boundary was assumed to be trustworthy. That model crumbled under the weight of cloud computing, remote work, and global supply chains, which gave attackers new ways in: stolen passwords, misconfigured cloud environments, and malware hidden in software updates. Engineers could no longer control the perimeter, because the perimeter didn’t exist — the line between “inside” and “outside” the organization was gone. They could no longer control or fully see the systems they were building.

Read the rest at Tech Policy Press.
