Decision advantage is the new battlespace. As information multiplies across domains, decision-makers—from commanders in the field to analysts—are tasked with translating overwhelming complexity into timely, accountable action. Yet while AI has become synonymous with generative chatbots and content creation, Parallax Advanced Research and the Ohio Aerospace Institute (OAI) are pursuing a different frontier: AI that informs, not replaces, human judgment.
Dr. Matthew Molineaux, director of AI/Autonomy at Parallax/OAI, defines AI-informed decision making as the use of transparent, goal-driven reasoning systems that help humans understand the implications of their choices in real time.
“We look at decision making in two ways,” Molineaux said. “One is helping a person make better decisions—decision aids. The other is enabling an AI to make better decisions autonomously when humans aren’t available. Our work focuses on both: we’re developing smarter systems and tools that help people think better and act faster.”
Reducing Cognitive Load
AI-informed decision systems are not about automation for its own sake—they are about amplifying human cognition.
“When an air traffic controller or mission operator is overloaded, much of their attention is spent on things a computer could easily monitor,” Molineaux said. “Our tools help with attention management by directing human focus toward the operational center of gravity, where attention and action matter most.”
By filtering, fusing, and prioritizing information in real time, these systems allow decision-makers to stay centered on what truly affects their mission. The result is a measurable reduction in cognitive load and faster, more confident responses under pressure.
From Decision Support to Decision Superiority
AI-informed systems are designed to reason, not merely generate content. They model goals, outcomes, and world dynamics to predict how choices will unfold over time. Decision superiority arises when this understanding enables commanders to decide and act faster and more effectively than adversaries—compressing decision cycles, exploiting fleeting opportunities, and shaping conditions before opponents can respond. The result is a clearer grasp of what is happening now, what may happen next, and how one’s actions can impose advantageous futures on the operational environment.
At the core of Parallax/OAI’s approach is goal reasoning: the ability for AI systems to understand, prioritize, and adapt goals as situations evolve.
“When you have a decision to make,” Molineaux said, “you first need to understand what your goals are and how they can be accomplished through planning, learned behaviors, and breaking them down into subgoals. Our brains strategize like this intuitively, but to make better decisions, we need to teach AI to do this, too.”
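The goal-decomposition idea Molineaux describes can be sketched in a few lines. The following is an illustrative sketch only, not Parallax/OAI's implementation; the `Goal` class, the "method library," and the resupply-mission names are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """A goal that is either primitive or accomplished via subgoals."""
    name: str
    subgoals: list["Goal"] = field(default_factory=list)

def decompose(goal: Goal, methods: dict[str, list[str]]) -> Goal:
    """Recursively expand a goal into subgoals using a method library."""
    for sub_name in methods.get(goal.name, []):
        goal.subgoals.append(decompose(Goal(sub_name), methods))
    return goal

def leaves(goal: Goal) -> list[str]:
    """Primitive tasks: goals with no further decomposition."""
    if not goal.subgoals:
        return [goal.name]
    return [name for sub in goal.subgoals for name in leaves(sub)]

# Hypothetical method library for a resupply mission.
methods = {
    "resupply-unit": ["plan-route", "deliver-supplies"],
    "plan-route": ["assess-threats", "select-corridor"],
}

plan = decompose(Goal("resupply-unit"), methods)
print(leaves(plan))  # primitive steps, left to right
```

The point of the sketch is the shape of the reasoning: a high-level goal is broken into subgoals until only primitive, executable tasks remain, mirroring the intuitive strategizing Molineaux describes.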
A goal-based framework also allows for adaptive replanning—a critical capability when missions face unexpected events. Rather than discarding entire plans, Parallax/OAI’s algorithms identify the least disruptive change needed to recover from a surprise, so systems can respond quickly and efficiently without destabilizing ongoing operations.
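The contrast between full replanning and local repair can be made concrete with a minimal sketch. This is an assumption-laden illustration, not the actual algorithm: the plan steps, the alternatives table, and the repair policy (swap in the first viable substitute) are all invented here.

```python
def repair_plan(plan, failed_step, alternatives):
    """Swap in an alternative for a failed step, keeping the rest of the plan.

    Returns None only when no local fix exists, signaling a full replan.
    """
    if failed_step not in plan:
        return plan  # nothing to repair
    options = alternatives.get(failed_step)
    if not options:
        return None  # no local repair; caller must replan from scratch
    i = plan.index(failed_step)
    return plan[:i] + [options[0]] + plan[i + 1:]

# Hypothetical mission plan: one route segment is blocked,
# so only that step changes; the rest of the plan is untouched.
plan = ["launch", "transit-corridor-a", "deliver", "return"]
repaired = repair_plan(
    plan, "transit-corridor-a",
    {"transit-corridor-a": ["transit-corridor-b"]},
)
print(repaired)  # ['launch', 'transit-corridor-b', 'deliver', 'return']
```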
That same philosophy underpins Parallax/OAI’s work on modeling “unknown unknowns.” These are the unpredictable disruptions—black swan events—that can upend even the best-laid strategies.
“Humans are adaptive enough to handle unknown unknowns,” Molineaux said. “Our AIs can be too, and together we’re better than either working alone. We can make decisions more rapidly and with more options. When something new appears, the system asks: Does this threaten my goals? Can I still accomplish them? What should I change in my world model to predict better next time?”
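The three questions in Molineaux’s quote suggest a simple triage loop. The sketch below is hypothetical throughout: the dictionary-based world model and the goal names are invented for illustration, and a real system would use far richer representations.

```python
def handle_surprise(event, goals, model):
    """Triage a novel event against active goals (illustrative).

    Answers: which goals does this threaten? which remain achievable?
    Then logs the event so the world model predicts better next time.
    """
    threatened = [g for g in goals if event in model["threats"].get(g, [])]
    achievable = [g for g in goals if g not in model["blocked"]]
    model["history"].append(event)  # remember the surprise for future prediction
    return threatened, achievable

# Hypothetical world model: a storm threatens the delivery goal.
model = {"threats": {"deliver": ["storm"]}, "blocked": [], "history": []}
threatened, achievable = handle_surprise("storm", ["deliver", "return"], model)
print(threatened)  # ['deliver']
```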
Engineering Trust for Accountable Decisions
Trust remains one of the greatest barriers to AI adoption in national security. Parallax/OAI addresses this through non-hallucinatory, transparent models that explain why a decision or recommendation was made.
“Our tools can describe their reasoning pathways,” said Molineaux. “Users can trace exactly how a conclusion was reached and review it afterward. That’s fundamentally different from the black-box nature of generative AI.”
Such transparency allows for after-action review, ethical evaluation, and accountability—without sacrificing speed.
“You don’t have to give up accountability to move fast,” he said. “You just need to surface uncertainty, so users know when to trust the AI and when to question it.”
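One way to picture a recommendation that surfaces both its reasoning pathway and its uncertainty is the sketch below. It is a toy illustration under stated assumptions: the `Recommendation` type, the 0.8 confidence threshold, and the intercept/monitor actions are all hypothetical, not Parallax/OAI's interface.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A recommendation bundled with its reasoning trace and uncertainty."""
    action: str
    confidence: float   # 0.0-1.0; low values tell the user to question the AI
    trace: list[str]    # human-readable steps, reviewable after the fact

def recommend(sensor_confidence: float) -> Recommendation:
    trace = [f"sensor fusion confidence: {sensor_confidence:.2f}"]
    if sensor_confidence >= 0.8:
        trace.append("confidence at or above 0.8 threshold: recommend intercept")
        return Recommendation("intercept", sensor_confidence, trace)
    trace.append("confidence below threshold: recommend continued monitoring")
    return Recommendation("monitor", sensor_confidence, trace)

rec = recommend(0.62)
print(rec.action)       # 'monitor'
for step in rec.trace:  # the full reasoning pathway is open to review
    print(" -", step)
```

Because every recommendation carries its trace, an after-action review can replay exactly why the system advised what it did, and the explicit confidence value tells the user when to trust and when to question.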
Why It Matters
To be trustworthy, these systems need to be transparent, controllable, self-critical, and communicative. As defense agencies pursue Joint All-Domain Command and Control (JADC2) and other data-driven modernization efforts, AI-informed decision making offers a path to operational agility. Parallax/OAI’s approach—goal-driven, adaptive, and transparent—bridges human intuition and machine precision.
About Dr. Matt Molineaux
Dr. Matt Molineaux is the Director of AI and Autonomy at Parallax/OAI, leading the development of the Trustworthy Algorithmic Delegate (TAD). Dr. Molineaux's work focuses on integrating AI with human decision-making processes to improve emergency response and medical care.
###
About Parallax Advanced Research & the Ohio Aerospace Institute
Parallax Advanced Research is a research institute that tackles global challenges through strategic partnerships with government, industry, and academia. It accelerates innovation, addresses critical global issues, and develops groundbreaking ideas with its partners. In 2023, Parallax and the Ohio Aerospace Institute formed a collaborative affiliation to drive innovation and technological advancements across Ohio and the nation. The Ohio Aerospace Institute plays a pivotal role in advancing aerospace through collaboration, education, and workforce development. More information can be found at parallaxresearch.org and oai.org.