Human mental states in military personnel to detect bias in human-autonomy teaming

Funding opportunity

Human mental states in military personnel to detect bias in human-autonomy teaming

Anticipated timeline and budget

  • Application Deadline: 3 December 2024
  • Estimated Project End Date: 31 March 2025
  • Grant funding available: $75,000.00

Background

Platform systems on ships, aircraft, and tanks in the Canadian Armed Forces (CAF) are becoming increasingly automated. Managing these automated platform systems requires operators to develop the skills, knowledge, and abilities to know when to trust automated systems, for example in the face of cyber-attacks. Trust in automation is influenced by biases. Bias can be, for example, encoded into artificial intelligence (AI) algorithms or learned from the datasets used to train them, without the awareness of designers, developers, or testers. Bias can also emerge from small datasets and sparse data. These limitations mean that AI systems must be carefully designed by humans to achieve their desired utility for military operations. It is crucial to determine whether multiple psychophysiological and behavioral signals can be used to infer mental or psychological states in human decision-making, to detect bias in machine learning (ML) and human models, and to understand how adversaries can exploit these biases.
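As a purely illustrative sketch of the kind of inference at issue (not part of this call's requirements), the following Python example trains a simple classifier to label high versus low mental workload from two hypothetical wearable-sensor features, heart-rate variability and pupil diameter. All data are synthetic and the signal and feature choices are assumptions made for the example, not measures prescribed by this call.

```python
# Illustrative sketch only: inferring a mental state (high vs. low workload)
# from multiple psychophysiological signals. All data are synthetic and all
# feature choices are hypothetical, not prescribed by this funding call.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200  # synthetic "trials"

# Hypothetical per-trial features from wearable sensors:
#   hrv   - heart-rate variability (tends to drop under high workload)
#   pupil - mean pupil diameter in mm (tends to rise under high workload)
workload = rng.integers(0, 2, n)            # 0 = low, 1 = high (ground truth)
hrv = 60 - 10 * workload + rng.normal(0, 5, n)
pupil = 3.0 + 0.6 * workload + rng.normal(0, 0.3, n)
X = np.column_stack([hrv, pupil])

X_tr, X_te, y_tr, y_te = train_test_split(X, workload, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

A real study would replace the synthetic data with validated measures and would need to handle temporal structure, individual differences, and sensor noise.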

Researchers are invited to apply to undertake an environmental scan and to provide recommendations for a follow-up experiment investigating emerging research, best practices, and the key attributes of mental states involved in bias detection, with the aim of helping military personnel improve human decision-making when interacting with autonomous agents in military-relevant environments.

Research objectives

This funding opportunity seeks submissions to conduct an environmental scan of academic literature and public documents covering emerging trends, best practices, and challenges in using human mental states to detect bias in human decision-making, including recommendations for an experiment examining these features in the CAF.

The final report is to include the following:

  • Outline new and emerging research, best practices, and approaches for using psychophysiological and behavioral signals to infer mental or psychological states (e.g., stress, fatigue, mental workload, trust, situation awareness, and shared mental models) in human decision-making;
  • Provide examples of how bias can be encoded in ML and human models to gain advantage in human decision-making;
  • Outline new and emerging research, best practices, and approaches to optimize performance, ensure resilient and stable team-member states, and support the capability to appropriately build, calibrate, and maintain trust in ML and human models;
  • Compare and contrast the psychological states that bias decision-making, and summarize their known relationships with physiological and behavioral measures that can be captured in real time with noninvasive or wearable technologies;
  • Outline new and emerging research, best practices, and approaches for addressing bias that emerges from small datasets and sparse data (a toy illustration follows this list);
  • Provide recommendations for Gender-Based Analysis Plus (GBA Plus) on understanding how trust varies across diverse groups of people, taking into account sex (i.e., biological assignment at birth) and gender (i.e., how a person identifies), as well as intersecting identity factors including race, LGBT+ identity (lesbian, gay, bisexual, transgender, and other sexual or gender minorities), and Black, Indigenous, and people of color;
  • Provide recommendations for experimental design of a follow-up study from the end-user and leadership perspective based on the identified attributes and features from the environmental scan; and
  • Provide a bibliography of the documents used to produce the final report.
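As a toy illustration of the small-data concern noted above (again, not a requirement of this call), the following Python sketch fits the same model to a tiny sample and to a large one drawn from the same population. The tiny sample can easily over- or under-represent one class, so the fitted model's predictions drift away from the true base rate. The task and all numbers are invented for the example.

```python
# Illustrative sketch only: how bias can emerge from a small, skewed dataset.
# The task and numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def sample(n):
    """Draw n labeled points from a population whose true positive rate is 50%."""
    y = rng.integers(0, 2, n)
    x = y + rng.normal(0, 1.0, n)  # one weakly informative feature
    return x.reshape(-1, 1), y

# A large evaluation set approximates the true population.
X_eval, y_eval = sample(5000)

for n_train in (10, 10000):
    X_tr, y_tr = sample(n_train)
    clf = LogisticRegression().fit(X_tr, y_tr)
    rate = clf.predict(X_eval).mean()  # fraction predicted positive
    print(f"train n={n_train:>5}: predicted-positive rate = {rate:.2f}")
# With n=10 the predicted-positive rate can drift far from the true 0.50,
# because the tiny sample happens to over- or under-represent one class.
```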

Milestones/phases of progress

See the milestones listed under Desired outputs below.

Desired outputs

Milestone 1
  • Description: Submit a research report outlining the key attributes and features that can be used to fool ML and human models to gain advantage via human-AI biases involving mental states, as identified in the environmental scan, together with recommendations on how to systematically evaluate AI systems to achieve their desired utility and to determine how defenders can exploit such biases (a toy illustration of such exploitation follows the milestones).
  • Quantity and Format: Presented in English via teleconference with DRDC and in electronic format (MS Word document).
  • Delivery Date: December 31, 2024

Milestone 2
  • Description: Submit the design for a follow-up experiment on the key features of bias detection involving mental states in human decision-making when interacting with autonomous agents, suitable for publication in the open literature.
  • Quantity and Format: Presented in English via teleconference with DRDC and in electronic format (MS Word document).
  • Delivery Date: March 1, 2025

Milestone 3
  • Description: Attend an initial project meeting, monthly update meetings to report progress on the draft documentation for Milestones 1 and 2, and a final close-out meeting.
  • Quantity and Format: Teleconference with DRDC.
  • Delivery Date: March 28, 2025
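By way of illustration only (nothing below is a requirement of this call), the following Python sketch shows one way a learned bias can be exploited to fool an ML model: a classifier trained on data containing a spurious shortcut feature is defeated by an adversary who simply flips that feature. All features, data, and numbers are invented for this example.

```python
# Illustrative sketch only: exploiting a spurious correlation (a learned bias)
# to fool a model. Task, features, and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000

# True signal: feature x0. Spurious shortcut: x1 matches the label 95% of the
# time in the training data, so the model leans on it.
y = rng.integers(0, 2, n)
x0 = y + rng.normal(0, 1.0, n)                 # genuine but noisy signal
x1 = np.where(rng.random(n) < 0.95, y, 1 - y)  # spurious shortcut
clf = LogisticRegression().fit(np.column_stack([x0, x1]), y)

# An adversary who knows the shortcut flips x1 on class-1 inputs.
y_atk = np.ones(500, dtype=int)
x0_atk = y_atk + rng.normal(0, 1.0, 500)
honest = np.column_stack([x0_atk, y_atk])      # shortcut intact
attack = np.column_stack([x0_atk, 1 - y_atk])  # shortcut flipped
print("accuracy, shortcut intact :", clf.score(honest, y_atk))
print("accuracy, shortcut flipped:", clf.score(attack, y_atk))
# Flipping the spurious feature typically drives accuracy far below chance,
# even though the genuine signal x0 is unchanged.
```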

Applicant qualifications and requirements for selection

  • Proposals must be led by a senior investigator with a PhD in Computer Engineering or a relevant field such as health psychology/behavioral medicine, human-computer interaction, or human factors;
  • Applicant has previous experience with and extensive knowledge of trustworthiness in human and AI/ML modeling in the CAF;
  • Applicant has extensive open-literature publications of original research on how AI biases influence human biases and vice versa, ensuring that the interdependencies between humans and AI are understood so as to achieve ethical and safe outcomes in the face of, for example, cyber-attacks;
  • Applicant has detailed knowledge of potential biases in military operations and the weaknesses these biases represent for defence;
  • Applicant has at least one publication utilizing experimental methodology;
  • Applicant has at least one of the following (publications, research grants, awards, or projects) as a proven record of carrying out AI/ML modeling research in the CAF that utilizes multiple physiological and behavioral signals to infer mental or psychological states (e.g., stress, fatigue, mental workload, trust, situation awareness, and shared mental models) in human decision-making;
  • Applicant has access to a variety of online databases; and
  • Applicant has strong expertise in experimental design, as demonstrated by peer-reviewed publications.

Application deadline

Please download and submit the Research Funding Application form by the application deadline of 3 December 2024.

Enquiries

Questions about this funding opportunity can be sent to the VAC Research office at research-recherche@veterans.gc.ca.
