We need to treat it as a tool with various applications, not a thing in itself.
From Brussels to Beijing, the Élysée to the Oval Office, leaders around the world are calling for bold regulations to govern artificial intelligence (AI). To hear some of them tell it, it may seem as if the machine learning tools that make up modern AI are the latter-day invention of fire, not upgrades to existing math and code. But regarding AI as a truly singular technology is a mistake, one that puts us at risk of missing out on its potential while also inviting algorithmic dystopia. If we’re going to govern AI, we need to recognize it for what it is: a tool, with innumerable uses. And that means we need to govern it for the ways people actually use it, and not as a phenomenon in and of itself.
The new federal regulatory guidance, out Tuesday from the White House’s management office, helps illustrate what’s at stake here. Its goal is to build and sustain public trust in AI. The document is notable for what it lacks: its “principles for stewardship” are not a set of aspirations for ethical AI but instead a series of government procedures aimed at ensuring that AI applications get proper public and private vetting. That’s the document’s most important contribution to the AI conversation: It requires agencies to catalogue and defend against the risks created by these inherently imperfect systems.
This is good news, because when it comes to misbehaving AI, there’s plenty to be concerned about. It’s increasingly clear that AI systems can be biased — or just not up to the tasks we’re assigning to them — and that’s a problem when they’re put into roles of public trust, such as issuing criminal sentences or distributing welfare. In the past few years, researchers have fooled a widely used computer vision system into believing a turtle is in fact a rifle and found that another struggles to classify dark-skinned or female faces — and that’s just work at MIT.
Consumer and civil rights advocates have long voiced concerns about what happens when such flawed systems become commonplace. For example, while trustworthy AI might get more loans into the hands of those who need them, untrustworthy AI can stack the system against borrowers with endless and unfair denials. Similarly, very untrustworthy AI might lead to wrongful arrests based on bad facial recognition, or traffic crashes due to faulty calculations.
This is all to say that embracing AI cannot mean dispensing with existing consumer protections. “Many AI applications do not necessarily raise novel issues,” the new guidance notes, cautioning agencies against jettisoning “long-standing Federal regulatory principles.” That approach might have helped prevent earlier missteps, such as the Department of Housing and Urban Development’s eyebrow-raising draft regulations, which would have effectively given real estate developers a means to sidestep fair housing laws if they employed even primitive AI systems. But by excluding the government’s own use of AI, the White House guidance misses an opportunity — and risks losing public trust in the technology just as cities like San Francisco and Portland, Ore., rush to ban government use of AI-driven facial recognition.
As we move ahead, the federal government urgently needs to work on crafting substantive, tailored AI policies that look at the ways these technologies are used in public contexts as well as private ones. Some departments and agencies are already taking steps in the right direction: focusing narrowly on encouraging innovative, positive uses of AI while applying existing safeguards to prevent harm from its abuse. Such a narrow approach allows the Consumer Financial Protection Bureau to green-light a new alternative lending scheme giving borrowers a “second look” — while preventing such a system from being used to deny a typical mortgage. Rather than generically proclaim that AI should “aid in the national defense,” a narrow approach allows the Department of Defense to put AI in the maintenance bay without handing it a pistol. These are the kinds of sophisticated, context-specific analyses that every agency needs to engage in, and the only way to weigh the technology’s benefits against the harms inherent in its limitations.
In collaboration with our international partners, we must move from principles to practice — taking the more than 30 documents outlining “AI Principles” and “AI Ethical Guidelines” about the technology, and turning them into concrete guidance about AI’s use. While high-level approaches are valuable, they too often treat today’s AI as something independent and more novel than it really is. That can lead to distracting attempts at omnibus AI legislation, such as the kind that the new European Commission president, Ursula von der Leyen, has called for in her first 100 days. A savvy European Parliament might instead present her with 20 or 30 new bills, grounded in existing law and addressing the benefits and risks of AI specific to every subsector of the economy.
The risks of an omnibus approach become clearer when we try to apply well-intentioned principles to specific, difficult policy trade-offs. Many regard European data privacy law as creating a comprehensive “right to explanation,” requiring a human-readable justification for decisions rendered by AI systems. In some situations, that’s an entirely appropriate demand, but in others it may unduly limit breakthrough innovations. A system that can accurately diagnose cancer by identifying enormously complex patterns in a patient’s medical records, for example, might not be able to explain how it came to that conclusion. If it were effective and safe, however, shouldn’t it be lawful, even if it isn’t interpretable? Such a system may not be far off. And while opaque medical AI might sound ominous, remember that scientists are still not entirely sure how Tylenol works to control pain, either. We probably need to understand every decision an autopilot system makes to ensure aviation safety, but in the case of medical diagnosis, explanation might not be as important as lifesaving accuracy.
These are the concrete trade-offs between AI’s benefits and harms that the whole federal government will need to start evaluating. Done right, they furnish the United States with a unique opportunity to cultivate world-leading AI. There’s precedent — and even bipartisan appeal — in this approach. In 2014, before AI was a daily headline, White House colleagues and I tried to size up how this new technology would change dozens of policy areas — from criminal justice to consumer pricing, housing to health care. The lesson: There is no one-size-fits-all approach. Seizing opportunities while preserving our values, as we put it at the time, will look different in different contexts.
No one decries the United States’ lack of a “C++ policy,” because we don’t need a national policy for a programming language. We should be just as skeptical of calls to uniformly regulate — or export-control, or outright ban — the similarly multipurpose tool that is AI. Developing these policies will be hard, technical work. But it’s the only way we can weigh values in conflict and ensure that AI systems are used for us — and not against us.
R. David Edelman, a former special assistant to the president for economic & technology policy, leads research on AI policy at MIT’s Internet Policy Research Initiative.