Generative AI models are far from perfect, but that hasn’t stopped businesses and even governments from handing them important tasks. So what happens when AI goes bad? Researchers at Google DeepMind spend a lot of time thinking about how generative AI systems can become threats, detailing it all in the company’s Frontier Safety Framework. DeepMind recently released version 3.0 of the framework to explore more ways AI could go off the rails, including the possibility that models could ignore user attempts to shut them down.
DeepMind’s safety framework is based on so-called “critical capability levels” (CCLs). These are essentially risk assessment rubrics that aim to measure an AI model’s capabilities and define the point at which its behavior becomes dangerous in areas like cybersecurity or biosciences. The document also details how developers can address the CCLs DeepMind identifies in their own models.
Google and other firms that have delved deeply into generative AI employ a number of techniques to prevent AI from acting maliciously, although calling an AI “malicious” lends it an intentionality that these fancy estimation architectures don’t have. What we’re really talking about is the possibility of misuse or malfunction that is baked into the nature of generative AI systems.