Human-machine teaming (HMT) is a multi-disciplinary area of study that draws on cognitive science, computer science, and human factors psychology to understand how humans interact with technology in pursuit of common goals. Intelligent AI agents and machines are increasingly commonplace in workflows, requiring symbiotic collaboration between machines and human operators. The goal of developing these human-machine collaborations is to produce results that exceed what either agent could accomplish in isolation.

As intelligent technology is increasingly leveraged, it is critical to understand and model both human and machine cognition: the two share some commonalities but also differ in key cognitive processes, and those differences can lead to misunderstandings and errors. The study of HMT also focuses on ensuring effective communication between humans and machines so that humans can appropriately calibrate their trust in the machine. This requires sufficient transparency into how the machine operates cognitively (i.e., how it categorizes information, makes decisions, etc.). HMT has been applied in a variety of fields, including aviation, robotics, logistics, chemical processing, and the power industry. Parallax researches HMT in the following ways:

Human and Machine Cognitive Modeling

Cognitive modeling formalizes the processes that underlie how humans and machines think, reason, and make decisions. It provides a shared framework for understanding how multiple agents represent and approach a common problem. This technical area is highly complex due to the variability in human cognition and the impact of individual differences on cognitive processes.
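As one concrete illustration of what such formalization looks like, human decision processes are often modeled with sequential-sampling frameworks such as the drift-diffusion model, in which noisy evidence accumulates toward a decision threshold. The sketch below is purely illustrative; the parameter values and function names are assumptions, not a description of Parallax's actual models.

```python
import random

def ddm_trial(drift=0.3, threshold=1.0, dt=0.01, noise=1.0,
              max_time=5.0, rng=None):
    """Simulate one drift-diffusion trial: evidence accumulates with a
    constant drift plus Gaussian noise until it crosses +threshold
    (choice "A") or -threshold (choice "B"), or time runs out.
    Returns (choice, reaction_time)."""
    rng = rng or random.Random()
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold and t < max_time:
        # Euler-Maruyama step: deterministic drift + scaled noise.
        evidence += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
    if evidence >= threshold:
        choice = "A"
    elif evidence <= -threshold:
        choice = "B"
    else:
        choice = "timeout"
    return choice, t

# With a positive drift rate, choice "A" should dominate over many trials.
trials = [ddm_trial(rng=random.Random(i)) for i in range(500)]
frac_a = sum(c == "A" for c, _ in trials) / len(trials)
```

Fitting a model like this to an individual operator's response times and choices is one way the "individual difference factors" noted above can be captured quantitatively.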

Error Detection and Error Mitigation

Humans and machines differ in the methods they use to solve similar problems, and perturbations or interruptions at different stages of these processes can have deleterious effects on performance and efficiency. Further, the increased complexity of communication between human and machine agents, combined with a lack of full mutual understanding, can produce novel classes of error. HMT research therefore encompasses error detection and the development of mitigation strategies to ensure that teams of humans and machines can produce and sustain reliable results.
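One common mitigation pattern is to cross-check the two agents' independent judgments and escalate when they diverge or when either is unsure. The following is a minimal sketch of that pattern; the class, thresholds, and labels are hypothetical and chosen only for illustration.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    agent: str        # e.g., "human" or "machine"
    label: str        # the judgment each agent produced
    confidence: float # self-reported confidence in [0.0, 1.0]

def flag_for_review(human: Decision, machine: Decision,
                    conf_floor: float = 0.6):
    """Escalate to a supervisor when the agents disagree, or when
    either agent's confidence falls below a floor. Returns a
    (needs_review, reason) pair."""
    if human.label != machine.label:
        return True, "disagreement"
    if min(human.confidence, machine.confidence) < conf_floor:
        return True, "low confidence"
    return False, "agreement"

result = flag_for_review(Decision("human", "threat", 0.9),
                         Decision("machine", "benign", 0.8))
# -> (True, "disagreement")
```

The design choice here is deliberately conservative: disagreement is treated as a signal regardless of confidence, because mismatched judgments are exactly the novel error class that neither agent can detect alone.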

Establishing and Maintaining Trust

Expert human operators may be reluctant to incorporate novel intelligent technologies into their workflows, particularly if they lack a clear understanding of how the AI agent thinks and makes decisions. It is therefore critical to provide transparency into AI algorithms in a manner that is not only comprehensive but also comprehensible to a human. Natural Language Processing (NLP) supports better communication between humans and technology by allowing the machine to better interpret what the human is trying to communicate verbally, and by incorporating a model of the surrounding context to improve understanding.
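Calibrated trust is often conceptualized as a running estimate of the machine's observed reliability, updated as the operator sees the machine succeed or fail. The sketch below shows one simple way to formalize that idea with an exponential moving average; the class name, learning rate, and deferral rule are illustrative assumptions, not an established standard.

```python
class TrustModel:
    """Minimal sketch: track an operator's calibrated trust as a running
    estimate of the machine's observed reliability. alpha controls how
    quickly recent outcomes outweigh older ones."""

    def __init__(self, prior: float = 0.5, alpha: float = 0.2):
        self.trust = prior
        self.alpha = alpha

    def update(self, machine_was_correct: bool) -> float:
        # Exponential moving average toward 1.0 on success, 0.0 on failure.
        outcome = 1.0 if machine_was_correct else 0.0
        self.trust += self.alpha * (outcome - self.trust)
        return self.trust

    def should_defer(self, human_confidence: float) -> bool:
        # Defer to the machine only when calibrated trust exceeds the
        # operator's own confidence in their judgment.
        return self.trust > human_confidence

t = TrustModel()
for correct in [True, True, False, True, True]:
    t.update(correct)
```

A model like this makes "dynamically calibrating trust" measurable: a single failure visibly lowers trust, and sustained good performance rebuilds it, rather than trust being treated as fixed.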

Applied HMT research spans a broad spectrum of problem areas, including:

  • Modeling human and AI cognitive processing
  • Human-machine communication and interaction
  • Human and machine error detection and error mitigation
  • Increasing task complexity and impact on cognitive load
  • Dynamically calibrating trust in automation
  • Distributed teaming and asymmetric information
  • Improving teamwork competencies for humans and machines

Technical Lead

Dr. Mary Frame
Senior Research Psychologist, Human Cognitive Performance