Understanding Pilots' Perceptions of AI-Mediated Mental Health Support
Mixed-methods
Quantitative analysis
Qualitative research
Thematic analysis
Examined how regulatory constraints and career risks shape pilots' willingness to engage with mental health support, revealing conditional acceptance of AI tools contingent on privacy and control.
  • Context: Qualitative research project in the UW Directed Research Group for aviation mental health. Research design and execution for a CHI 2026 Poster submission.
  • Role: UX Researcher (team of 6)
  • Timeline: Oct 2025 – Jan 2026
  • Methods: Qualitative interviews, thematic analysis, grounded theory
  • Contribution: Conducted quantitative analysis of 245 survey responses, spearheaded the qualitative research design, iterated interview questions through collaborative feedback, interviewed 4 pilots, conducted thematic analysis on 13 interviews, and synthesized insights collaboratively.
92%
Familiar with AI
Pilots are technologically ready for adoption.
62%
See Value in AI Support
Over half of participants recognize AI’s value for self-reflection and sensemaking.
77%
Distrust Institutional Tools
Significant hesitation exists regarding employer-provided or company-linked resources.
100%
Demand Control
Absolute requirement for data governance and a non-authoritative, pilot-controlled AI role.
Problem: A Silent Crisis in Aviation
Commercial pilots face elevated rates of depression, anxiety, and psychological distress compared to the general population. Irregular schedules, circadian disruption, cumulative fatigue, and high-responsibility decision-making create sustained mental health burdens.
Despite this, mental health challenges remain widely underreported. Stigma, fear of career consequences, and distrust in aeromedical reporting systems discourage pilots from seeking formal care. In the U.S., disclosure can trigger grounding or extended medical review, leading many to delay care or rely on informal coping strategies.
AI-mediated mental health tools show promise, but no empirical research examines how pilots perceive these technologies within their professional context.
Quantitative Research
From December 2024 to July 2025, a large-scale global survey was conducted by the UW Directed Research Group in Aviation Mental Health, led by Kimberly Perkins. The study surveyed 2,268 aviation professionals aged 18–60 across Africa, Asia, North and South America, Europe, and Australia/New Zealand. To understand the challenges pilots currently face, I analyzed a subset of 245 verified airline pilots (186 based in North America) to identify key mental health challenges and barriers within the profession.
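The subset analysis above boils down to filtering verified airline pilots out of the full respondent pool and computing proportions. A minimal sketch of that step in Python, with hypothetical field names and illustrative records (not the study's actual dataset):

```python
# Minimal sketch of the subset filtering and percentage computation.
# Field names ("role", "verified", "avoided_care_for_career") and the
# sample records are hypothetical, for illustration only.

def pct(count, total):
    """Percentage rounded to one decimal place."""
    return round(100 * count / total, 1)

def analyze(responses):
    # Keep only verified airline pilots (the n = 245 subset in the study).
    pilots = [r for r in responses
              if r["role"] == "airline_pilot" and r["verified"]]
    n = len(pilots)
    avoided = sum(1 for r in pilots if r["avoided_care_for_career"])
    return {"n": n, "avoidance_pct": pct(avoided, n)}

# Illustrative data, not the study's dataset.
sample = [
    {"role": "airline_pilot", "verified": True,  "avoided_care_for_career": True},
    {"role": "airline_pilot", "verified": True,  "avoided_care_for_career": False},
    {"role": "dispatcher",    "verified": True,  "avoided_care_for_career": True},
    {"role": "airline_pilot", "verified": False, "avoided_care_for_career": True},
]
print(analyze(sample))  # {'n': 2, 'avoidance_pct': 50.0}
```

Each headline percentage below (80%, 58.6%, 77.6%, and so on) follows this same filter-then-count pattern over the relevant subgroup.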
Preliminary Quantitative Findings
Preliminary survey data (n = 245) reveals that perceived career threats and institutional distrust function as formidable barriers to mental-health-related help-seeking among pilots.
Healthcare Avoidance for Career Preservation
80%
of all respondents intentionally avoid mental health tools or assistance due to concerns that engagement could negatively impact their jobs or long-term careers (n = 196).
58.6%
of all respondents gave up consulting a mental health professional due to fear of losing their operational status and/or Medical Certificate (n = 143).
Healthcare Avoidance due to Systemic Distrust
77.6%
of pilots disagree with current regulatory mental health policies, directly impacting help-seeking behavior and the utilization of formal support systems (n = 190).
65.3%
of distrusting pilots refrained from seeking professional counseling or treatment due to the fear of losing their Medical Certificate or operational status (n = 124).
84.7%
of distrusting pilots "Rarely" or "Never" utilize organized resources like unions or EAPs (n = 161).
Decision to conduct Qualitative Research
The quantitative data clearly illustrates what is happening: a widespread avoidance of formal support due to systemic distrust.
However, to design effective interventions, we needed to understand why these perceptions exist and how pilots navigate these constraints in their high-stakes daily operations. At the same time, AI-based mental health technologies have gained traction in the general population, while little is known about how such technologies are perceived in aviation-specific contexts.
This led us to initiate a qualitative study to uncover the lived experiences and nuanced mental models of pilots regarding both mental health and AI-mediated support.
Qualitative Research Goals + Intention
To bridge the gap between the quantitative findings and an actionable AI design framework, I led the design of the qualitative research protocol around the three research goals below:
1. What beliefs, concerns, and barriers shape how pilots approach and engage with mental health support?
2. How do pilots perceive the role of conversational AI in supporting mental health?
3. What factors shape their trust, openness, and willingness to engage with such tools?
The intention behind each goal is:
RQ1 - Uncovering Lived Barriers & Contextual Stress: Beyond merely identifying "distrust," we explored how pilots define mental health within the aviation community and how the pervasive fear of license forfeiture manifests as specific, tangible stress throughout their day-to-day work cycles.
RQ2 - Evaluating Informal Coping vs. AI Potential: To understand the widespread non-utilization of official resources, we investigated the informal coping strategies pilots currently employ. Furthermore, we gathered candid initial perceptions and experiences regarding Generative AI (LLMs) as a potential alternative to fill this support void.
RQ3 - Discovering potential safe AI utility through Scenario-Based Mapping: To define what constitutes "safe" support that does not threaten a pilot's career, we presented two distinct scenarios:
  • Mild Scenario (Fatigue/Stress): Assessing the potential for AI assistance during routine high-workload situations.
  • Severe Scenario (CRM/Emergency): Exploring the conditions under which AI could be trusted as a tool for emotional regulation or decision support during intense, high-pressure crisis management.
The Approach:
Semi-Structured Interviews + Thematic Analysis
We needed to understand not just what pilots think about AI mental health tools, but why they think that way.
Quantitative data alone couldn't capture the nuanced relationship between regulatory constraints, professional identity, and technology trust.
01
Interview Protocol
60-minute semi-structured Zoom interviews examining lived experiences, attitudes, and reasoning around mental health disclosure and AI-enabled support. Presented two hypothetical scenarios (low and high intensity).
02
Iterative Thematic Analysis
Used a grounded theory approach with open coding to identify salient experiences and concerns. Refined codes through iterative comparison, consolidating them into a structured codebook capturing patterns and divergences.
03
Framework Synthesis
Examined how professional risk, information traceability, coping practices, and AI perceptions interacted across accounts, resulting in a socio-technical framework.

Triangulation in Action: By combining interview data with participants' descriptions of existing practices and regulatory contexts, we could validate findings across multiple evidence sources and understand the why behind the patterns.
Screener and Participant Recruitment
Recruitment Strategy
Participants were recruited via opportunity sampling through the lead researcher’s professional networks. Prospective participants completed an initial online screener to assess eligibility, availability, and key background characteristics.
Eligibility Criteria
To ensure a high level of professional expertise and relevance to the study, the following inclusion criteria were applied:
  • Certification: Possession of (or prior history of holding) an Airline Transport Pilot (ATP) certificate.
  • Language: Proficiency in English.
  • Commitment: Willingness to engage in a 60-minute recorded virtual interview.
Screening & Selection Process
A total of 56 pilots completed the screener, from which 13 participants were purposively selected to ensure diverse representation of backgrounds. The screening focused on five key dimensions:
  1. Aviation Background: Total flight hours, aircraft type, and typical flight routes.
  2. Demographics: Age, gender, and geographical region.
  3. AI Experience: Prior exposure to and frequency of using Generative AI tools (e.g., ChatGPT, Gemini, Claude).
  4. AI Literacy: Self-reported understanding of AI technologies.
  5. Perception of AI: Comfort level regarding AI-assisted emotional wellbeing support.
Participant Profile
The final cohort consisted of 13 pilots representing commercial airlines across seven countries in Asia, Europe, and North America.
  • Age: Ranged from 25 to 64 years.
  • Gender: 11 males, 2 females.
  • Status: 12 participants hold active ATP certificates, while one is currently inactive due to medical clearance issues.
  • AI Familiarity: 10 participants reported prior experience with Gen-AI tools, while 3 reported no prior usage, allowing for a comparative perspective on AI adoption.
Interview Protocol
Semi-structured interviews were conducted over a three-week period according to the protocol below. Interviews were conducted via Zoom, audio-recorded and transcribed for analysis. One researcher led each interview, with one to two additional researchers taking notes.
The 60-minute virtual interviews were structured into four key pillars:
  • Mental Health in Aviation: Understanding the regulatory and professional pressures unique to pilots from lived experiences.
  • Current Coping & Barriers: Identifying pilots' lived experiences and attitudes towards existing support systems.
  • AI Literacy & Perceptions: Assessing prior experience with Gen-AI and general comfort levels.
  • Scenario-Based Intervention: Evaluating the potential of AI through two distinct stress contexts.
Participants were presented with two pilot-specific hypothetical scenarios (one low-intensity and one high-intensity) and asked to reflect on existing support resources and how AI-mediated mental health tools might assist in each context.
Mild Scenario (Fatigue/Stress): You're working multiple legs per day. These are hard flights: bad weather, ATC re-routes, CRM feels extra taxing. You are feeling mentally drained and tired after just two legs, but you know you have two more to go today.
Severe Scenario (CRM/Emergency): It's near the end of a long duty period. You and the Captain/First Officer aren't working well together. Everything they say is getting on your nerves, and it seems like they're dismissing your input. Now you're airborne and there's a technical problem with the plane. You're running through checklists and following SOPs, but the poor team dynamics feel like they're adding to the fatigue. You feel your heart rate rise, and the situation feels intensely pressured. Even while feeling mentally and physically drained, you are required to stay composed, think clearly, and communicate effectively.
Analysis
Interview transcripts were analyzed using an iterative thematic analysis approach informed by grounded theory. Initial open coding identified salient experiences, concerns, and interpretive frames related to mental health, professional identity, trust, and technology use.
Through iterative comparison across participants, codes were refined and consolidated into a structured codebook capturing recurring patterns as well as points of divergence. The final analysis focused on relational synthesis, examining how professional risk, information traceability, coping practices, and perceptions of AI interacted across accounts.
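The consolidation step above can be thought of as mapping open codes onto themes and tallying how many participants voice each theme. A minimal sketch, with a hypothetical codebook and made-up participant codes (the study's actual codebook was developed collaboratively, not algorithmically):

```python
from collections import Counter, defaultdict

# Hypothetical open-code -> theme mapping, for illustration only.
CODEBOOK = {
    "fear_of_grounding": "regulated_professional_risk",
    "license_as_livelihood": "regulated_professional_risk",
    "cash_payment_for_care": "calculated_avoidance",
    "ai_for_reflection": "conditional_ai_acceptance",
    "distrust_employer_tools": "information_traceability",
}

def consolidate(coded_transcripts):
    """Map open codes to themes and count how many participants
    mention each theme at least once (pattern prevalence)."""
    theme_participants = defaultdict(set)
    for pid, codes in coded_transcripts.items():
        for code in codes:
            theme = CODEBOOK.get(code)
            if theme:
                theme_participants[theme].add(pid)
    return Counter({t: len(p) for t, p in theme_participants.items()})

# Illustrative coded transcripts, not the study's data.
interviews = {
    "P1": ["fear_of_grounding", "ai_for_reflection"],
    "P2": ["license_as_livelihood", "distrust_employer_tools"],
    "P3": ["fear_of_grounding", "cash_payment_for_care"],
}
print(consolidate(interviews).most_common())
```

Counting participants (a set per theme) rather than raw code occurrences matches how prevalence is reported later in this study ("10 out of 13 participants described...").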
3 Key Insights: What We Discovered
Our analysis revealed that pilots' engagement with mental health support and AI tools is fundamentally shaped by regulatory and informational consequences, not stigma alone.
Insight 1: Mental Health as Regulated Professional Risk
Pilots frame mental health through the lens of licensure and career stability. Medical certification is inseparable from livelihood. 77% of participants expressed concerns that disclosure could lead to grounding or loss of certification.
"Everything is related to one license. Your medical, it's part of your license. Your license is your way of living, your status, your future, your everything."
Insight 2:
Conditional Acceptance of AI
62% of pilots identified value in AI-supported self-assessment and reflection. However, 62% also expressed distrust due to fears of data access by employers or regulators. AI is acceptable only when bounded, private, and non-authoritative.
"I would have to understand it well first of all, to see how they can use it against me. Because I don't want what I say to be used against me."
Insight 3: Information Traceability Shapes Trust
Pilots evaluate resources based on who can access shared information and how it's recorded. Employer-provided tools viewed with caution; union programs seen as safer intermediaries. Trust is conditional on maintaining control over information flow.
"If the company issues something for free, I wouldn't trust it. If it's something I can use, I can go and buy it myself."
Quantitative Patterns
10 out of 13 participants described concerns about disclosure leading to professional consequences. 8 expressed distrust of AI tools due to data governance fears. Yet 12 reported familiarity with AI tools, and 7 identified specific value in AI-supported self-assessment.
This reveals a critical tension: pilots are technologically comfortable and see potential benefits, but institutional trust barriers prevent adoption.
The primary concerns relate to relationships with the institutional system and its documentation rather than to any unwillingness to engage with AI-mediated mental health tools.
RQ 1. What beliefs, concerns, and barriers shape how pilots approach and engage with mental health support?
  • Mental Health as a Regulated Professional Risk: Pilots perceive mental health not merely as a personal wellbeing issue, but as a "regulated risk" inextricably linked to their medical certification and livelihood.
  • Fear of Career Consequences: The primary barrier to engagement is the fear that disclosure will trigger "grounding" or an extended medical review process, which is seen as potentially career-ending.
  • Distrust of Information Traceability: Engagement decisions are heavily influenced by who has access to the data. There is a profound distrust of employer-provided resources, fueled by the concern that information will be shared with regulators or company management.
  • Calculated Avoidance and Informal Coping: To manage professional risk, pilots often engage in "calculated responses" to regulatory uncertainty, such as paying for care in cash to avoid insurance documentation or relying on informal strategies like exercise, routines, or even normalized alcohol use.
  • Culture of Endurance: A professional culture emphasizing high-performance and endurance leads many pilots to continue flying despite psychological strain, effectively delaying engagement with formal support.
RQ 2. How do pilots perceive the role of conversational AI in supporting mental health?
  • Conditional Acceptance and "Bounded Roles": Pilots do not view AI as a replacement for human professionals. Instead, they show positive perceptions only when AI serves a "bounded role," acting as a supplemental, private resource within a trust-bounded framework.
  • Self-Reflection and Instrumental Use: Participants find value in using AI for self-assessment to objectively monitor their state (e.g., fatigue or stress) and for sensemaking to organize their thoughts and reflect on their mental well-being.
  • Non-Authoritative Assistant: Pilots prefer AI that provides data-driven feedback or statistics rather than making final decisions for them. They seek to maintain full decision-making authority while using AI as a supportive tool.
62%
Reflective Use
8 out of 13 participants described already using AI for reflective or sensemaking purposes.
54%
Self-Assessment Value
7 out of 13 pilots specifically identified value in AI-supported self-assessment tools.
92%
Task-Oriented Familiarity
12 out of 13 participants reported familiarity with AI for task-oriented purposes.
RQ 3. What factors shape their trust, openness, and willingness to engage with such tools?
  • Data Privacy and Governance: The most critical factor for engagement is information traceability. Trust is only established when there is a guarantee that data cannot be accessed by employers or regulatory bodies.
  • Non-Attribution and Lack of Diagnosis: Openness depends on the assurance that AI interactions will not result in a formal diagnosis or a persistent medical record that could be "used against them" in a professional context.
  • Contextual Relevance: Pilots perceive higher value when AI understands the operational demands of aviation, such as cumulative fatigue from multiple legs or poor team dynamics, and provides tailored, context-specific feedback.
77%
Distrust of Company Tools
10 out of 13 participants expressed caution or distrust toward company-linked or employer-provided mental health resources.
62%
Fear of Data Misuse
8 out of 13 participants explicitly cited fears of data access by employers or regulators
as a primary reason for distrusting
AI-mediated mental health support.
77%
Career Risk Concerns
10 out of 13 participants emphasized that mental health disclosure is inherently tied to career-ending risks.
The Socio-Technical Framework
Our analysis synthesized findings into a layered socio-technical framework illustrating how pilots' coping practices are nested within operational demands, interpretive assessments of professional risk, and structural constraints.
Within these conditions, AI-mediated support operates in a trust-bounded and non-authoritative role, shaping only those coping practices that pilots perceive as low-risk, private, and professionally safe within regulated aviation contexts.
How this framework works
Structural & Organizational Constraints
Regulatory frameworks, aeromedical certification requirements, and institutional oversight create the outer boundary of what's professionally safe.
Interpretive Risk & Consequences
Pilots evaluate information traceability, documentation permanence, and potential career impacts when considering any mental health engagement.
Operational Demands
Day-to-day stressors, fatigue, irregular schedules, and performance expectations shape immediate coping needs.
Bounded Coping Practices
Within these constraints, pilots currently rely on informal strategies: exercise, meditation, peer support, and, increasingly, AI tools.
AI-mediated support operates in a trust-bounded, non-authoritative role, shaping only those coping practices pilots perceive as low-risk, private, and professionally safe within regulated aviation contexts.
Impact and Future Directions
Academic Contribution
First empirical study examining pilots' perceptions of AI-mediated mental health support, contributing to human-centered AI and aviation psychology literature.
(Submitted for CHI 2026 Poster)
Product Design Framework
Socio-technical framework provides actionable guidance for designing AI mental health tools in safety-critical, regulated industries beyond aviation.
Industry Application
Findings inform ongoing development of pilot wellbeing programs and AI tool design at major airlines and aviation organizations.
Key Takeaways: What This Means for Product Design
Trust is Structural, Not Just Attitudinal
Pilots' hesitation around mental health tools stems from regulatory and informational consequences, not stigma alone. Product design must address governance and privacy architectures, not just user attitudes.
Bounded AI Roles Enable Adoption
Pilots expressed conditional openness to AI when it operates as a private, self-directed resource with clear limits. Success depends on positioning AI as supportive, not authoritative.
Context Matters in Safety-Critical Work
Design principles from consumer mental health apps don't translate directly to regulated professions. Understanding socio-technical context is essential for adoption in high-stakes environments.
This research demonstrates that successful AI-mediated mental health support in aviation depends not just on individual attitudes, but on governance, privacy, and regulatory structures that shape trust and use. By understanding how pilots negotiate mental health engagement within professional constraints, we can design AI tools that genuinely serve their needs while respecting the complex realities of safety-critical work.