GAO Technology Assessment: Artificial Intelligence - Emerging Opportunities, Challenges, and Implications

Artificial intelligence graphic (nist.gov, N. Hanacek)

The Government Accountability Office (GAO) released technology assessment report GAO-18-142SP in March 2018 to the House of Representatives Committee on Science, Space, and Technology, based on a recent forum convened by the Comptroller General to discuss artificial intelligence (AI).

Artificial intelligence (AI) holds substantial promise for improving human life and economic competitiveness in a variety of ways and for helping solve some of society’s most pressing challenges. At the same time, according to experts, AI poses new risks and could displace workers and widen socioeconomic inequality. To gain a better understanding of the emerging opportunities, challenges, and implications resulting from developments in AI, the Comptroller General of the United States convened the Forum on Artificial Intelligence, which was held on July 6 and 7, 2017, in Washington, D.C.

At the forum, participants from industry, government, academia, and nonprofit organizations considered the potential implications of AI developments in four sectors—cybersecurity, automated vehicles, criminal justice, and financial services. Participants considered policy implications of broadening AI use in the economy and society, as well as associated opportunities, challenges, and areas in need of more research. Following the forum, participants were given the opportunity to review a summary of forum discussions and a draft of this report. Additionally, a draft of this report was reviewed independently by two experts who did not attend the forum. The viewpoints expressed by individuals in the report do not necessarily represent the views of all participants, their organizations, or GAO.

Findings - Opportunities, Challenges, and Issues for Further Consideration

Forum participants noted a range of opportunities and challenges related to artificial intelligence (AI), as well as areas in need of further research and consideration by policymakers. Regarding opportunities, investment in automation through AI technologies could lead to improvements in productivity and economic outcomes, similar to those experienced during previous periods of automation, according to a forum participant. In cybersecurity, AI-based automated systems and algorithms can help identify and patch vulnerabilities and defend against attacks. Automotive and technology firms use AI tools in the pursuit of automated cars, trucks, and aerial drones. In criminal justice, algorithms are automating portions of analytical work to provide input to human decision makers in the areas of predictive policing, face recognition, and risk assessments. Many financial services firms use AI tools in areas such as customer service operations, wealth management, consumer risk profiling, and internal controls.

Forum participants also highlighted a number of challenges related to AI. For example, if the data used by AI are biased or become corrupted by hackers, the results could be biased or cause harm. The development of AI also faces challenges in collecting and sharing the data needed to train AI systems, gaining access to adequate computing resources, and securing sufficient human capital. Furthermore, the widespread adoption of AI raises questions about the adequacy of current laws and regulations. Finally, participants noted the need to develop and adopt an appropriate ethical framework to govern the use of AI in research, as well as to explore the factors that govern how quickly society will accept AI systems in daily life.

After considering the benefits and challenges of AI, forum participants highlighted several policy issues they believe require further attention. In particular, forum participants emphasized the need for policymakers to explore ways to (1) incentivize data sharing, such as providing mechanisms for sharing sensitive information while protecting the public and manufacturers; (2) improve safety and security (e.g., by creating a framework that ensures that the costs and liabilities of providing safety and security are appropriately shared between manufacturers and users); (3) update the regulatory approach that will affect AI (e.g., by leveraging technology to improve and reduce the burden of regulation, while assessing whether desired outcomes are being achieved); and (4) assess acceptable levels of risk and ethical considerations (e.g., by providing mechanisms for assessing tradeoffs and benchmarking the performance of AI systems).

As policymakers explore these and other implications, they will be confronted with fundamental tradeoffs, according to forum participants. As such, participants highlighted several areas related to AI they believe warrant further research, including (1) establishing regulatory sandboxes (i.e., experimental safe havens where AI products can be tested); (2) developing high-quality labeled data (i.e., data organized, or labeled, in a manner to facilitate their use with AI to produce more accurate outcomes); (3) understanding the implications of AI on training and education for jobs of the future; and (4) exploring computational ethics and explainable AI, whereby systems can reason without being told explicitly what to do, inspect why they did something, and make adjustments for the future.

For more information on the three waves of AI, see A DARPA Perspective on Artificial Intelligence.

For more information on explainable AI, see DARPA's Explainable Artificial Intelligence (XAI) Program.