Amanda Muller, PhD

Artificial Intelligence Systems Engineer and Technical Fellow
Northrop Grumman Mission Systems

Dr. Amanda Muller is a Senior Staff Artificial Intelligence (AI) Systems Engineer and Technical Fellow based in Northern Virginia. Dr. Muller currently serves as the Secure and Ethical AI Lead for Northrop Grumman. In this role, she is responsible for coordinating the strategy, policy, and governance efforts related to AI across the Northrop Grumman enterprise. As a Mission Systems Technical Fellow specializing in User Experience and Human-Systems Integration, she also serves as a subject matter expert on proposals, program reviews, and research efforts.

Prior to her current role, Dr. Muller worked for Northrop Grumman Space Systems in Redondo Beach, California, as a Systems Engineer. She led the User Experience teams for several restricted space programs, conducting user research in operational environments around the world. Previously, Dr. Muller served as a Systems Engineer on State Health and Human Services programs, as a Human Factors Engineer in Aurora, Colorado, and as the Human-Systems Integration lead for airborne platforms in Melbourne, Florida. In addition to her program roles, Dr. Muller has been a mentor in the Mentoring the Technical Professional program for over seven years.

Dr. Muller’s publications include a book chapter in Emerging Trends in Systems Engineering Leadership: Practical Research from Women Leaders (in press), and peer-reviewed articles in Information Fusion, the Journal of Defense Modeling and Simulation, WSEAS Transactions on Advances in Engineering Education, and the Annals of Biomedical Engineering.

Dr. Muller holds a Ph.D. in Engineering from Wright State University in Dayton, Ohio, and B.S. and M.S. degrees in Biomedical Engineering from Worcester Polytechnic Institute in Worcester, Massachusetts. She also holds a graduate certificate in Design Thinking for Strategic Innovation from Stanford University. Dr. Muller is a Certified Systems Engineering Professional (INCOSE) and a Professional Scrum Master (Scrum.org), and is certified in Professional Scrum with User Experience (Scrum.org).

Abstract

Secure and Ethical AI: Framing the Challenge for National Security

Bias in AI is well studied in commercial applications. In addition to numerous academic papers on the subject, the issue has entered popular awareness as well. Netflix’s documentary Coded Bias examined the impact of bias in facial recognition technology, shining an even brighter light on the work of academics like Rama Chellappa of Johns Hopkins University. Popular news media highlighted the racial and gender bias exhibited by the ill-fated Microsoft Tay Twitter bot, which began posting hateful speech within hours of being set loose on the social media platform. The problem of identifying and counteracting data and algorithmic bias in commercial applications is not fully solved, but much work is (rightfully) being done to address it.

In national security applications, however, the nature of AI bias is far less well understood. The national security realm not only has the same potential for bias as the commercial world, but also has the potential for biases unique to the domain. Insufficient access to data in adversarial environments creates a strong potential for representation bias, which is further exacerbated by institutional stovepipes. Some of these stovepipes are necessary for security reasons (e.g., the need to separate classified from unclassified data), others are mandated by law (e.g., Title 10 for the Department of Defense vs. Title 50 for the Intelligence Community), and still others are artifacts of long-standing bureaucracy. In addition, the deliberate introduction of bias by adversarial actors is a constant threat within the national security realm.
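
To make the representation-bias concern concrete, here is a minimal, hypothetical sketch (not drawn from the presentation itself): it compares subgroup frequencies in a pooled training set against an assumed operational distribution, flagging gaps before any model is trained. The group names, counts, and expected proportions below are all invented for illustration.

    from collections import Counter

    def representation_gaps(train_groups, expected_dist):
        # Compare subgroup frequencies observed in the training data against
        # an expected operational distribution; large gaps flag potential
        # representation bias before a model is ever trained.
        counts = Counter(train_groups)
        total = sum(counts.values())
        return {group: counts.get(group, 0) / total - expected
                for group, expected in expected_dist.items()}

    # Hypothetical example: data pooled from stovepiped collection sources.
    train_groups = ["region_a"] * 800 + ["region_b"] * 150 + ["region_c"] * 50
    expected_dist = {"region_a": 0.4, "region_b": 0.3, "region_c": 0.3}
    print(representation_gaps(train_groups, expected_dist))
    # region_a: +0.40 (over-represented); region_c: -0.25 (nearly absent)

In practice, the "expected" operational distribution is itself difficult to estimate in adversarial environments, which is precisely why stovepiped data makes even this simple check hard to run.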

Because of these unique challenges, addressing bias in national security AI applications requires strategy and governance processes different from those used in the commercial world. For example, audit rules and bias testing from financial systems could be tailored for defense use, but must be streamlined for rapid application in that environment. Deep knowledge of the mission space is essential to ensuring that anti-bias testing and bias mitigation are executable within the complexities of the national security domain. Collaboration between government, academia, and industry is needed to leverage best practices and adapt them to the unique needs of national security. This collaboration should lead to the development of national security-specific laws and requirements that will allow contractors to develop AI systems that effectively identify and mitigate both intentional and unintentional bias. This presentation will highlight the ways in which commercial developments in anti-bias may be leveraged for national security, the bias challenges that are unique to the national security domain, and the need for strategy and governance specifically for national security applications.
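
As one example of a commercial audit practice that could be tailored for defense, consider the disparate impact ratio used in financial and hiring audits (the so-called four-fifths rule). The following is an illustrative sketch with invented data, not a method endorsed by the presentation:

    def disparate_impact_ratio(outcomes, groups, protected, reference):
        # Ratio of favorable-outcome rates between a protected group and a
        # reference group. Values below ~0.8 trip the "four-fifths rule"
        # commonly applied in commercial fairness audits.
        def favorable_rate(g):
            selected = [o for o, grp in zip(outcomes, groups) if grp == g]
            return sum(selected) / len(selected)
        return favorable_rate(protected) / favorable_rate(reference)

    # Hypothetical example: 1 = favorable model output, 0 = unfavorable.
    outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
    groups   = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
    print(disparate_impact_ratio(outcomes, groups, protected="b", reference="a"))
    # ~0.67 -- below 0.8, so this result would be flagged for review

Streamlining such a check for rapid use in defense would require defining mission-relevant groups and outcomes, which is where the deep mission knowledge described above becomes essential.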