Learning from other Domains to Advance AI Evaluation and Testing: Cybersecurity Standards and Testing — Lessons for AI Safety and Security
- Stewart Baker
There was little need to worry about the security of computer systems until the 1960s. Before that, computers were hulking machines locked in a room that only a few trusted boffins could enter. That all changed when time-sharing debuted, allowing multiple users to work on the same machine at more or less the same time. That posed the risk that they'd start looking over each other's shoulders. And that led defense and intelligence customers to wonder how they could protect their classified data from ordinary users.
Sixty years on, we are still trying to answer that question.
Many experts were sure that the answer was to set security standards and enforce them by testing systems for compliance. That is still the closest thing we have to an answer, but it hasn't been a very good one; it's at best a partial success. The story of its failures is in some ways the story of politics and policy writ large at the turn of the twenty-first century; as such, it may also tell us a lot about how AI safety and security standards will succeed, and fail.