Approved for public release; distribution is unlimited
Beavercreek, OH – Parallax Advanced Research, a 501(c)(3) nonprofit research institute, has won a Defense Advanced Research Projects Agency (DARPA) In the Moment (ITM) award totaling $4.067 million. ITM seeks to understand how humans can develop trustworthy artificial intelligence (AI) for making difficult decisions in domains where there is no agreed-upon right answer. The Parallax research team is working on ITM Technology Area 2 (TA2), developing human-aligned algorithmic decision-makers that can adapt to different types of decision-makers and exhibit key decision-making attributes that support trust. Parallax is partnering with Drexel University and Knexus Research Corporation on its ITM research.
The Parallax team’s research under ITM will focus on human-aligned decision-making for small-unit triage in austere environments and mass-casualty care. This task is challenging because decisions must be made under time pressure with limited resources, and conflicting values mean there is rarely a single “correct” answer; even experts frequently disagree on many of these decisions. Parallax’s research combines multiple AI- and machine-learning-based decision-making technologies for medical triage to support decisions in circumstances where trained medical personnel are unavailable.
Trustworthy Algorithmic Delegate (TAD)
The Parallax research team, led by Dr. Matt Molineaux, director of AI and Autonomy, is conducting fundamental research to develop the Trustworthy Algorithmic Delegate (TAD), an innovative Explainable Case-Based Reasoning (ECBR) approach to difficult decision-making. TAD’s ECBR system emulates a human decision-making process to make aligned decisions that can be trusted, as measured by an expert’s willingness to delegate decision-making authority. The more closely the AI decision-maker aligns with its human counterparts, the more trust the human decision-maker will place in the algorithm.
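For readers unfamiliar with case-based reasoning, the general idea is to match a new problem against a library of previously decided cases, reuse the decision from the most similar precedent, and cite that precedent as an explanation. The sketch below illustrates that retrieve-and-reuse step; the scenario attributes, similarity weighting, and decision labels are illustrative assumptions for this example and are not drawn from TAD or the ITM program.

```python
"""Minimal case-based reasoning (CBR) sketch for triage decision support.
All names and values here are hypothetical, not part of TAD or ITM."""
from dataclasses import dataclass


@dataclass
class TriageCase:
    """A hypothetical past triage scenario and the decision an expert made."""
    severity: float       # 0.0 (minor) .. 1.0 (critical)
    resources: float      # 0.0 (none available) .. 1.0 (fully equipped)
    time_pressure: float  # 0.0 (relaxed) .. 1.0 (immediate)
    decision: str         # e.g. "treat_immediately", "evacuate", "delay"


def similarity(a: TriageCase, b: TriageCase) -> float:
    """Simple distance-based similarity between two scenarios."""
    d = (abs(a.severity - b.severity)
         + abs(a.resources - b.resources)
         + abs(a.time_pressure - b.time_pressure)) / 3.0
    return 1.0 - d


def recommend(case_base: list[TriageCase], new_case: TriageCase) -> tuple[str, TriageCase]:
    """Retrieve the most similar past case and reuse its decision,
    returning the retrieved precedent so the recommendation can be explained."""
    best = max(case_base, key=lambda c: similarity(c, new_case))
    return best.decision, best


if __name__ == "__main__":
    past_cases = [
        TriageCase(0.9, 0.2, 0.9, "treat_immediately"),
        TriageCase(0.4, 0.8, 0.3, "delay"),
        TriageCase(0.7, 0.5, 0.8, "evacuate"),
    ]
    query = TriageCase(severity=0.85, resources=0.3, time_pressure=0.9, decision="")
    decision, precedent = recommend(past_cases, query)
    print(f"Recommended: {decision} (based on precedent: {precedent})")
```

An explainable system built in this style can point to the precedent it retrieved when justifying a recommendation, which is one way alignment with expert judgment can be made visible to a human delegator.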
“The idea is that, where necessary, the human operator will trust that the AI decision-maker will make decisions that align with what varied experts think should happen,” said Dr. Viktoria Greanya, chief scientist at Parallax.