Why Safety Culture Should Inspire Every Technology Leader

In aviation, safety isn’t a checklist—it’s a mindset. Recently, I completed cabin crew training through an “Open Skies” program, and one insight stood out: 95% of the training is safety; only 5% is service.

That ratio tells a powerful story. Safety is embedded in every action, every decision, every interaction. It’s not just about compliance—it’s about creating an environment where people feel secure, supported, and empowered to act.

But here’s what surprised me: this isn’t just about physical safety—emergency procedures, evacuations, and firefighting. It’s about psychological safety, systemic thinking, and a culture where every person, regardless of rank, can stop an operation if they see a problem. And as I sat through day after day of drills, simulations, and scenarios, I couldn’t stop thinking: This is exactly what the AI industry needs.

The Wake-Up Call: When 5% Isn’t Enough

Let me put that 95/5 ratio in context. In a typical four-week cabin crew training program, approximately 150 hours are dedicated to safety competencies:

  • Emergency equipment operation
  • Evacuation procedures (land and water)
  • Firefighting and smoke management
  • First aid and medical emergencies
  • Security and threat response
  • Crew Resource Management (CRM)
  • Human factors and fatigue management

Meanwhile, about 8 hours cover service standards, meal procedures, and customer interaction techniques.

This isn’t an accident. This ratio emerged from decades of painful lessons. Every safety protocol in that manual exists because somewhere, at some point, something went catastrophically wrong. The industry looked at the wreckage—literal and metaphorical—and said: “Never again.”

The Foundation of Psychological Safety

What struck me most was the emphasis on psychological safety: learning from mistakes without blame, fostering trust, and encouraging open communication. These principles aren’t just for the flight deck—they’re essential for any team, in any industry.

During one training session, we practiced a simulated emergency evacuation. I made a mistake—I hesitated at a critical moment. In many work environments, that mistake would be minimized, ignored, or, worse, used against you in a performance review.

Instead, the instructor paused the simulation. “Let’s talk about what just happened,” she said. Not “what you did wrong,” but “what happened.” We analyzed the moment: What information did I have? What was I thinking? What could we learn?

This wasn’t just kindness—it was strategic. Psychological safety isn’t soft—it’s the foundation of operational safety. When people fear blame, they hide mistakes. Hidden mistakes become catastrophic failures.

The AI Industry’s Safety Paradox

In the world of AI and technology, this becomes even more critical. When we deploy AI systems, we’re not just shipping code—we’re affecting lives, businesses, and society. The same mindset that keeps passengers safe at 35,000 feet should guide how we build and deploy technology.

Yet consider our current reality:

Aviation: 95% safety, 5% service
AI Development: What’s our ratio?

If I’m honest, for many AI products, it looks more like:

  • 60% feature development
  • 20% performance optimization
  • 10% user acquisition and metrics
  • 5% documentation
  • 5% safety, ethics, and bias testing

We’ve inverted the ratio. And we’re paying for it.

Recent AI Failures: The Cost of Inadequate Safety Culture

Consider these real-world examples:

Amazon’s Hiring Algorithm (2018): An AI recruiting tool showed bias against women because it was trained on historical hiring data—data that reflected existing gender imbalances. The system penalized resumes that included the word “women’s” (as in “women’s chess club captain”). Amazon scrapped the tool, but only after the damage was done.

Healthcare AI Bias (2019): A study published in Science revealed that a widely used healthcare algorithm exhibited significant racial bias, affecting millions of patients. The algorithm was less likely to refer Black patients for additional care than equally sick white patients. Why? It used healthcare costs as a proxy for health needs, and historical spending patterns reflected systemic inequalities in access to care.

Uber’s Fatal Autonomous Vehicle Crash (2018): An Uber self-driving car struck and killed a pedestrian in Tempe, Arizona. The investigation revealed multiple safety culture failures: a disabled automatic emergency braking system, inadequate driver monitoring, and insufficient safety testing protocols.

Each of these failures stemmed not from technical impossibility, but from inadequate safety culture. The tools to detect these issues existed. What was missing was a culture that prioritized finding and fixing problems over shipping fast.

Safety Enables Innovation

As leaders, we often talk about innovation and transformation. But here’s the paradox: real progress starts with safety—physical and psychological. When people feel safe, they take risks, share ideas, and drive change. Without that foundation, innovation stalls.

This seems counterintuitive. “Move fast and break things” has been a Silicon Valley mantra. Safety sounds like bureaucracy, like slowness, like the enemy of innovation.

But aviation proves this wrong. Commercial aviation is one of the most innovative industries in the world—from fly-by-wire systems to advanced materials to AI-assisted flight planning. And, measured per passenger-mile, it is also the safest form of transportation. These facts aren’t in spite of each other—they’re because of each other.

Safety enables innovation by:

  1. Building Trust: When stakeholders trust your safety culture, they grant permission to innovate in high-stakes domains
  2. Reducing Rework: Catching issues early prevents costly failures and rebuilds
  3. Attracting Talent: The best people want to work where their concerns are taken seriously
  4. Enabling Bold Experiments: Teams that feel psychologically safe attempt more ambitious projects
  5. Creating Competitive Moats: Safety-first companies build lasting advantages as regulations inevitably tighten

Consider how this applies to AI development:

  • Physical Safety: Ensuring AI systems don’t cause harm, whether in autonomous vehicles or medical diagnosis tools
  • Data Safety: Protecting privacy and security in an era of unprecedented data collection
  • Economic Safety: Preventing AI-driven market manipulation or automated decision-making that destroys livelihoods
  • Social Safety: Mitigating algorithmic bias, discrimination, and amplification of harmful content
  • Psychological Safety: Creating teams where engineers can raise concerns about algorithmic bias, ethical issues, or potential failures without fear of repercussion

The AI Connection: Learning from Incidents

Aviation’s safety culture emerged from decades of learning from incidents and near-misses. Every accident investigation contributes to a shared knowledge base that makes flying safer. The Aviation Safety Reporting System (ASRS) receives over 60,000 reports annually—most describing incidents that didn’t result in accidents, but could have.

This system works because:

  • Reporting is confidential and protected from punishment
  • Data is shared across the entire industry
  • Analysis focuses on systemic issues, not individual blame
  • Recommendations are implemented industry-wide

The AI industry needs a similar approach. When an AI system fails—whether it’s a biased hiring algorithm, a misdiagnosis, or an autonomous vehicle accident—the question shouldn’t be “who’s to blame?” but “what can we learn?”

Imagine an AI Safety Reporting System where:

  • Data scientists could anonymously report discovered biases
  • Failed deployments could be analyzed for systemic lessons
  • Near-misses (“we almost shipped this biased model”) could be shared
  • Industry-wide patterns could inform better practices

This isn’t fantasy. Organizations like the Partnership on AI, the AI Incident Database, and various research institutions are building exactly this infrastructure. The question is: will AI companies participate, or will we wait for regulation to force it?

What Technology Leaders Can Learn

This experience reminded me that technology leadership isn’t only about systems and platforms—it’s about people. Creating environments where teams feel safe to experiment, speak up, and learn is the cornerstone of success.

Here are specific practices I’m bringing from aviation into AI leadership:

1. Pre-Deployment Safety Briefs

Before every flight, the crew conducts a safety briefing covering emergency procedures, special circumstances, and each person’s role. It’s not bureaucratic—it’s essential. Everyone knows their responsibilities, the potential risks, and how to communicate if something goes wrong.

Before every major AI deployment, teams should conduct similar briefings:

  • What could go wrong with this system?
  • What populations might be affected differently?
  • Who’s responsible for monitoring what metrics?
  • What triggers an investigation or rollback?
  • How do we shut it down if needed?
  • What’s our escalation path if we discover issues?

Make these structured conversations, not just quick Slack messages. Document the discussion. Review it during post-deployment analysis.
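To make “structured and documented” concrete, here is a minimal sketch of a brief captured as data rather than a chat thread. Everything in it is illustrative: the DeploymentSafetyBrief class, its field names, and the example system are my own invention, not an established standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DeploymentSafetyBrief:
    """One record per deployment, answering the briefing questions above in writing."""
    system: str
    brief_date: date
    failure_modes: list[str]          # What could go wrong with this system?
    affected_populations: list[str]   # Who might be affected differently?
    monitors: dict[str, str]          # metric -> person responsible for watching it
    rollback_triggers: list[str]      # What triggers an investigation or rollback?
    shutdown_procedure: str           # How do we shut it down if needed?
    escalation_path: list[str]        # Who gets called, in what order?

brief = DeploymentSafetyBrief(
    system="resume-ranker-v3",
    brief_date=date(2024, 6, 1),
    failure_modes=["ranking bias against protected groups", "latency spikes under load"],
    affected_populations=["applicants with non-traditional career paths"],
    monitors={"selection_rate_by_group": "data-science on-call"},
    rollback_triggers=["selection-rate ratio for any group drops below 0.8"],
    shutdown_procedure="disable the model flag; fall back to manual review",
    escalation_path=["on-call engineer", "ethics reviewer", "VP of engineering"],
)
```

Because the brief is a plain record, it can be attached to the deployment ticket and compared line by line with what actually happened during post-deployment analysis.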

2. The “Sterile Cockpit” Principle

During critical phases of flight—takeoff and landing—aviation enforces the “sterile cockpit” rule: non-essential communication is prohibited. The crew focuses entirely on the critical task at hand. No chitchat, no meal service discussions, no distractions.

This seems extreme until you realize: these are the moments when accidents happen. Distraction during critical operations is dangerous.

Apply this to AI development by minimizing distractions during critical testing and deployment phases:

  • Bias testing week: No new feature requests, no context switching
  • Ethics review: Deep focus time, not squeezed between meetings
  • Deployment day: Dedicated monitoring, not multitasking
  • Incident response: Full attention, other work can wait

Create protected time for safety-critical work. This isn’t about productivity theater—it’s about ensuring people can think deeply about complex safety issues.

3. Empowered to Say “Stop”

In aviation, anyone—from the newest flight attendant to the captain—can stop an operation if they see a safety issue. This isn’t just policy; it’s deeply embedded in culture and protected by regulation.

Example: A ground crew member notices something unusual during pre-flight inspection. They report it to the captain. The captain is under pressure—the flight is already delayed, passengers are boarding, the airline will face penalties. But the ground crew member insists. The captain has two choices:

  • Override the concern and risk catastrophe
  • Delay the flight for investigation

In a strong safety culture, there’s only one choice. The flight is delayed. Investigation happens. Sometimes it’s nothing. Sometimes it prevents disaster. Always, the ground crew member is thanked, not punished.

AI teams need the same empowerment:

  • Any team member should be able to halt a deployment without manager approval (see the stop-switch sketch at the end of this section)
  • Ethics reviewers should have genuine authority, not just advisory roles
  • Test engineers should be able to say “not ready” without career consequences
  • Users should have direct channels to report AI harms to safety teams

This requires more than policy—it requires:

  • Legal protection (like whistleblower statutes)
  • Cultural reinforcement (celebrating people who raise concerns)
  • Leadership modeling (executives publicly supporting safety over speed)
  • Structural authority (ethics review can actually stop projects)
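Structural authority can be as literal as a stop switch wired into the release pipeline. The sketch below is hypothetical: the halts/ directory, the raise_halt and deploy_gate functions, and the file format are all assumptions of mine. The point is the shape of the mechanism: anyone can create a halt, the pipeline refuses to run while one exists, and clearing it is a recorded decision rather than a quiet override.

```python
import json
import sys
from pathlib import Path

HALT_DIR = Path("halts")  # hypothetical: a directory every team member can write to

def raise_halt(system: str, reporter: str, concern: str) -> None:
    """Any team member can halt a deployment; there is no approval step."""
    HALT_DIR.mkdir(exist_ok=True)
    halt_file = HALT_DIR / f"{system}.json"
    halt_file.write_text(json.dumps({"reporter": reporter, "concern": concern}))

def deploy_gate(system: str) -> None:
    """Run at the start of the deploy pipeline; refuses to proceed while a halt exists."""
    halt_file = HALT_DIR / f"{system}.json"
    if halt_file.exists():
        halt = json.loads(halt_file.read_text())
        # Surface the concern and stop; the pipeline never silently overrides it.
        sys.exit(f"Deployment halted by {halt['reporter']}: {halt['concern']}")

# A test engineer stops the release without asking anyone first.
raise_halt("resume-ranker-v3", "test-engineer", "selection rates by group not yet validated")
deploy_gate("resume-ranker-v3")  # exits, printing the reporter's concern
```

Clearing the halt should itself leave a trace—a review ticket, a sign-off—mirroring how aviation investigates before releasing a delayed flight.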

4. Recurrent Training and Culture Reinforcement

Pilots and cabin crew undergo recurrent safety training every year. Not because they’ve forgotten the basics, but because safety culture requires constant reinforcement. Skills degrade without practice. Complacency creeps in. New scenarios emerge.

Recurrent training serves multiple purposes:

  • Skills maintenance (practicing emergency procedures)
  • Knowledge updates (new procedures, new aircraft)
  • Culture reinforcement (safety first, every time)
  • Team building (shared experience of training together)

AI teams should have regular ethics and safety training, not just at onboarding:

  • Annual ethics training: New case studies, emerging issues, refreshed frameworks
  • Quarterly safety reviews: What incidents occurred? What did we learn? What will we change?
  • Regular bias testing workshops: New tools, new techniques, new understanding
  • Ongoing CRM training: Team dynamics, communication, decision-making under pressure

Make these interactive, scenario-based, and relevant to current work—not compliance checkboxes.

5. Learning from Near-Misses, Not Just Disasters

Aviation has a crucial insight: learn from near-misses before they become disasters. As noted earlier, ASRS collects thousands of reports annually describing situations that almost went wrong but didn’t.

This works because:

  • Reporting is voluntary and confidential
  • Reporters receive immunity from punishment
  • Data is analyzed for systemic patterns
  • Recommendations are shared industry-wide

AI needs equivalent systems:

  • Near-miss reporting: “We almost shipped a biased model but caught it in final testing”
  • Bias discoveries: “We found this unexpected pattern in our data”
  • Close calls: “User feedback almost didn’t reach us in time”
  • Workaround documentation: “People are working around our system in concerning ways”

Create protected channels for this reporting. Analyze for patterns. Share learnings. Celebrate the catches, not just the launches.
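A near-miss channel does not need heavy tooling to start; it needs a consistent, anonymous record that can be aggregated. A minimal sketch, assuming a NearMissReport shape and example categories of my own invention:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class NearMissReport:
    """A near-miss record. There is deliberately no reporter field: reports stay anonymous."""
    category: str               # e.g. "bias caught in testing", "late user feedback"
    system: str
    what_almost_happened: str
    how_it_was_caught: str
    suggested_fix: str

reports = [
    NearMissReport("bias caught in testing", "resume-ranker-v3",
                   "a biased model nearly shipped",
                   "final fairness audit flagged it",
                   "run the fairness audit earlier in the pipeline"),
    NearMissReport("late user feedback", "support-bot",
                   "harmful responses circulated for days",
                   "a support agent escalated manually",
                   "add an in-product report button routed to the safety team"),
]

# Pattern analysis: which failure categories keep recurring, and where?
for category, count in Counter(r.category for r in reports).most_common():
    print(f"{category}: {count} report(s)")
```

The pattern analysis at the end is the whole point: individual reports are interesting, but recurring categories across systems are what tell you where the process, not the person, needs fixing.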

A Challenge to Leaders

My challenge to fellow leaders: Look beyond your domain. Step into frontline roles. Experience the culture that keeps businesses running. It will change how you lead, and how you think about risk, resilience, and trust.

For those of us in AI and technology, this might mean:

  • Spending a day with customer support to understand how users interact with your AI systems and where they struggle
  • Joining QA teams to see how they test for edge cases and what concerns they raise that get dismissed
  • Sitting with data labelers to understand the human element in “machine” learning and the subjective judgments embedded in “objective” data
  • Participating in incident response to see how teams handle failures and what information they wish they had earlier
  • Shadowing users in their real environments to see how your AI actually gets used (versus how you think it gets used)

When I donned the cabin crew uniform and went through every drill, every scenario, every emergency procedure, I gained something no amount of reading or presentations could provide: embodied understanding. I felt the weight of responsibility. I experienced the culture. I understood viscerally why safety comes first.

Every AI leader should have a similar experience in their domain.

Question for reflection: What’s one thing you’ve done to truly understand the frontline of your business? How did it change your perspective on technology and AI deployment? And if you haven’t done it yet—what’s stopping you?


This is part 1 of a series on Safety Culture and Technology Leadership. Next post: “Walking the Walk: What Cabin Crew Training Taught Me About AI Ethics”
