AI’s Promise and Peril for the U.S. Government

David Freeman Engstrom, Daniel E. Ho,
Catherine M. Sharkey, and
Mariano-Florentino Cuéllar
While the use of artificial intelligence (AI) spans the breadth of the U.S. federal
government, government AI remains uneven at best, and problematic and
perhaps dangerous at worst. Our research team of lawyers and computer
scientists examined AI uses across federal administrative agencies, from facial
recognition to the detection of insider trading and health care fraud. Our report,
commissioned by the Administrative Conference of the United States and
generously supported by Stanford Law School, NYU Law School, and Stanford’s
Institute for Human-Centered AI, is the most comprehensive study of the subject
ever conducted in the United States. Its findings reveal deep concerns about the
government’s growing use of these tools, but we also suggest how AI could be
harnessed to make the federal government work better, more fairly, and at
lower cost.
In March 2019, the Stanford Institute for Human-Centered Artificial Intelligence funded
research exploring the topic of AI’s growing role in federal agencies.
The project
culminated in the 122-page report, “Government by Algorithm: Artificial Intelligence
in Federal Administrative Agencies,” which was commissioned by the Administrative
Conference of the United States, an agency that provides advice to agencies across the federal government.
In the big picture, AI promises to transform how government agencies do their work by
reducing the cost of core governance functions, improving decision-making, and using the
power of big data for greater efficiency. The potential benefits are many. In the enforcement context,