This guide explains how the Safety AI Assistant (ALI) generates its recommendations, how to interpret confidence levels, when to follow or override AI suggestions, and how your feedback trains the system to produce better results over time.
Who should read this: Safety managers, investigators, accountable executives, and anyone who uses AI-generated insights for safety decision-making.
Prerequisites: Access to the Safety module. Familiarity with the basic AI Assistant workflow covered in How to Use the Safety AI Assistant.
AI suggestions are advisory only. Per 14 CFR Part 5, all safety assessments, investigation findings, and risk evaluations require independent human evaluation by qualified personnel. AI recommendations should be used as one factor among many in decision-making, not as the sole basis for any safety, operational, or maintenance action. See Legal Notices.

How ALI Generates Recommendations

ALI analyzes your workspace’s safety data using three layers:
  1. Pattern recognition — ALI examines your historical reports, investigations, CPAs, hazard registry entries, and FRAT data to identify recurring themes, correlations, and trends.
  2. Regulatory knowledge — ALI cross-references findings against 14 CFR Part 5 requirements, AC 120-92D guidance, ICAO Annex 19 standards, and your organization’s documented policies.
  3. Contextual analysis — when opened from a specific record (report, investigation, CPA), ALI has the full context of that record including related records, contributing factors, and historical precedents.
The result is a recommendation grounded in your actual data and regulatory standards, not generic advice.
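To make the three-layer flow concrete, here is a minimal conceptual sketch of how pattern evidence, regulatory references, and record context could combine into a single recommendation. Every name in it (Recommendation, generate_recommendation, the record fields) is hypothetical and illustrative, not part of PlaneConnection’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    summary: str
    source_record_ids: list        # layer 1: pattern-recognition evidence
    regulatory_references: list    # layer 2: e.g. "14 CFR 5.55"
    context_record_id: str | None  # layer 3: set when opened from a record

def generate_recommendation(reports, regulations, current_record=None):
    """Illustrative only: combine the three analysis layers into one result."""
    # Layer 1: pattern recognition -- find recurring themes in historical data.
    recurring = [r for r in reports if r["category"] == "runway incursion"]

    # Layer 2: regulatory knowledge -- cross-reference against requirements.
    refs = [reg["section"] for reg in regulations
            if reg["topic"] in {r["category"] for r in recurring}]

    # Layer 3: contextual analysis -- only when launched from a specific record.
    context_id = current_record["id"] if current_record else None

    return Recommendation(
        summary=f"{len(recurring)} reports show a recurring runway-incursion theme",
        source_record_ids=[r["id"] for r in recurring],
        regulatory_references=refs,
        context_record_id=context_id,
    )
```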

Understanding Confidence Levels

Every AI recommendation includes a confidence indicator. This tells you how certain the AI is about its analysis.
| Confidence Level | Indicator | What It Means | How to Act |
| --- | --- | --- | --- |
| High | Green badge | Strong data support. Multiple records corroborate the finding. Clear regulatory alignment. | Consider implementing directly after brief review. |
| Medium | Yellow badge | Moderate data support. Some records support the finding, but the pattern is not definitive. | Investigate further before acting. Validate against additional data. |
| Low | Red badge | Limited data. The AI identified a possible pattern, but the evidence is thin or the analysis is novel. | Treat as a hypothesis. Gather more data before making decisions. |

What influences confidence

  • Data volume — more reports and records in your workspace produce higher-confidence analysis.
  • Data recency — recent data is weighted more heavily. Patterns from the last 90 days receive higher confidence than those from 2+ years ago.
  • Pattern consistency — if multiple independent data sources point to the same conclusion, confidence increases.
  • Regulatory clarity — recommendations tied to specific regulatory requirements (e.g., “this investigation is required by 14 CFR 5.55”) have inherently high confidence.
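As a rough illustration of how these four factors might interact, the toy heuristic below maps them onto the three confidence badges. It is not ALI’s actual scoring model; the weights, cutoffs, and function name are all invented for this example.

```python
from datetime import date, timedelta

def confidence_level(records, cites_specific_regulation=False):
    """Toy heuristic: score four factors, then map the total to a badge."""
    if not records:
        return "Low"
    score = 0.0
    score += min(len(records) / 20, 1.0)            # data volume
    cutoff = date.today() - timedelta(days=90)
    recent = [r for r in records if r["date"] >= cutoff]
    score += len(recent) / len(records)             # data recency
    sources = {r["source"] for r in records}
    score += min(len(sources) / 3, 1.0)             # pattern consistency
    if cites_specific_regulation:                   # regulatory clarity
        score += 1.0
    if score >= 3.0:
        return "High"    # green badge
    if score >= 1.5:
        return "Medium"  # yellow badge
    return "Low"         # red badge

# Example: two recent reports from independent sources, tied to a regulation.
reports = [{"date": date.today() - timedelta(days=10), "source": "FRAT"},
           {"date": date.today() - timedelta(days=30), "source": "hazard registry"}]
print(confidence_level(reports, cites_specific_regulation=True))  # "Medium"
```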

When to Trust AI Recommendations

AI recommendations are most reliable in these scenarios:

Pattern identification

ALI excels at finding patterns across hundreds of reports that would take hours to identify manually. Trust pattern analysis when it cites specific source records.

Regulatory cross-referencing

ALI accurately maps findings to specific CFR sections and advisory circulars. The regulatory knowledge base is comprehensive and current.

Trend analysis

Multi-month trend analysis across report types, categories, and severity levels is a strength. ALI can identify gradual shifts that are invisible in day-to-day operations.

Drafting assistance

AI-drafted summaries, investigation narratives, and CPA language provide a solid starting point that saves significant time.

When to Override AI Recommendations

Exercise independent judgment and override AI suggestions in these situations:

Context the AI cannot access

ALI does not have access to:
  • Verbal conversations — crew debriefs, phone calls, or hallway conversations that provide context.
  • Physical environment — weather conditions at the time of an event beyond what is recorded in the system.
  • Organizational politics — interpersonal dynamics, union considerations, or management context.
  • External intelligence — information from other operators, OEM service bulletins not yet entered in the system, or FAA guidance issued after the AI’s last update.

Regulatory edge cases

While ALI’s regulatory knowledge is comprehensive, edge cases exist where the correct interpretation depends on your organization’s specific operations specifications, exemptions, or LODAs (Letters of Deviation Authority). Always verify regulatory recommendations against your ops specs.

Novel situations

If the AI has not seen a similar event in your workspace’s history, its confidence will be low and its recommendations may not account for unique aspects of the situation. Novel events require experienced human judgment.
A good rule of thumb: use AI recommendations as your starting point, then apply the “so what?” test. Ask yourself whether the recommendation makes sense given everything you know about the situation, including context the AI cannot access. If something feels off, investigate further before acting.

The Feedback Loop

ALI improves based on how you interact with its recommendations. This feedback loop is how the AI learns your organization’s patterns and preferences.

How to provide feedback

When ALI presents a recommendation or analysis:
1. Review the recommendation
Read the full analysis, including cited source records and the confidence level.
2. Accept or modify
If you act on the recommendation, click Accept or implement it through the suggested action cards. If you modify the recommendation before implementing, the AI notes the modification.
3. Dismiss with reason
If you choose not to follow a recommendation, click Dismiss and select a reason:

| Reason | What It Teaches the AI |
| --- | --- |
| Not applicable | The pattern exists but is not relevant to this context. |
| Already addressed | The issue was resolved through other means the AI was not aware of. |
| Insufficient evidence | The AI needs more data before making this type of recommendation. |
| Incorrect analysis | The AI misinterpreted the data. This is the most valuable feedback. |

4. Add a comment (optional)
Free-text comments provide the most detailed feedback. Explain why you disagree or what context was missing.
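To make the feedback structure concrete, here is a hypothetical sketch of what a single piece of feedback might capture. The class, enum, and field names mirror the steps above but are illustrative, not the product’s real data model.

```python
from dataclasses import dataclass
from enum import Enum

class DismissReason(Enum):
    NOT_APPLICABLE = "not_applicable"
    ALREADY_ADDRESSED = "already_addressed"
    INSUFFICIENT_EVIDENCE = "insufficient_evidence"
    INCORRECT_ANALYSIS = "incorrect_analysis"  # most valuable feedback

@dataclass
class RecommendationFeedback:
    recommendation_id: str
    action: str                                # "accept", "modify", or "dismiss"
    dismiss_reason: DismissReason | None = None
    comment: str | None = None                 # free text adds the richest context

# Example: dismissing a recommendation with a reason and a comment.
feedback = RecommendationFeedback(
    recommendation_id="REC-2026-0042",
    action="dismiss",
    dismiss_reason=DismissReason.ALREADY_ADDRESSED,
    comment="Resolved via a fleet-wide bulletin issued outside the system.",
)
```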

How feedback improves the AI

  • Accepted recommendations reinforce the patterns and reasoning the AI used.
  • Dismissed recommendations teach the AI to adjust its thresholds and avoid similar false positives.
  • Modified recommendations help the AI understand the gap between its initial analysis and the correct conclusion.
  • Comments provide rich context that helps the AI understand organizational nuances.
Feedback effects are workspace-scoped. Your feedback improves the AI for your organization only — it does not affect other workspaces. This ensures the AI adapts to your specific operational context, fleet composition, and safety culture.
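As a deliberately simplified picture of workspace-scoped learning, the sketch below nudges a per-workspace alert threshold in response to each piece of feedback: dismissals raise it to suppress similar false positives, while acceptances lower it to reinforce the pattern. ALI’s actual training is far more sophisticated; the function and dictionary here are purely illustrative.

```python
def update_workspace_threshold(threshold, action, step=0.02):
    """Toy model: return an adjusted alert threshold for ONE workspace only."""
    if action == "dismiss":
        return min(threshold + step, 0.99)  # fewer similar false positives
    if action in ("accept", "modify"):
        return max(threshold - step, 0.01)  # reinforce the pattern
    return threshold

thresholds = {"workspace_a": 0.50, "workspace_b": 0.50}
thresholds["workspace_a"] = update_workspace_threshold(thresholds["workspace_a"], "dismiss")
# workspace_b is unaffected: feedback never crosses workspace boundaries.
```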

AI-Powered Features Across the Safety Module

ALI powers several features beyond the chat interface:
| Feature | Where | What It Does |
| --- | --- | --- |
| Natural Language Reporting | Safety > Reports > New Report | Converts plain-text event descriptions into structured report fields. |
| Report Categorization | Automatic on report submission | Suggests report type, category, and severity based on the description. |
| Investigation Insights | Investigation detail page > AI Assist | Identifies contributing factors and suggests lines of inquiry based on similar past investigations. |
| CPA Suggestions | After investigation findings are entered | Recommends corrective/preventive actions based on the root cause analysis. |
| Hazard Identification | Hazard Registry > AI Analysis | Identifies potential new hazards from patterns across recent reports. |
| SPI Anomaly Detection | Safety > SPIs > Anomaly Alerts | Flags Safety Performance Indicators that deviate from expected trends. |
| SmartScore Factor Analysis | SmartScore pilot detail page | Explains which factors most affect a pilot’s score and suggests improvement actions. |
| Compliance Gap Identification | Safety > Compliance | Identifies areas where your SMS documentation or activities may not fully meet 14 CFR Part 5 elements. |

Best Practices for Working with AI

When AI analysis informs a safety decision, note it in the record. For example: “Contributing factor analysis assisted by ALI; verified against source records RPT-2026-00123 and RPT-2026-00145.” This creates an audit trail and demonstrates due diligence under 14 CFR 5.71.
Always click through to the source records cited by ALI. Verify that the AI’s interpretation of the data matches the original record. Citation verification takes seconds and prevents acting on misinterpreted data.
Opening ALI from within a specific record (report, investigation, CPA) provides much richer analysis than asking about a record in the general chat. The contextual mode gives the AI access to the full record and its relationships.
If the first response is too broad, narrow your question. Instead of “What should we do about maintenance issues?”, try “What patterns exist in maintenance-related safety reports for our Citation X fleet in the last 6 months?” Specificity produces better results.
The more feedback you provide, the faster the AI adapts to your organization. Make it a habit to accept or dismiss recommendations with a reason, especially early in your adoption of the AI features.

Data Privacy and AI

  • ALI only accesses data within your workspace. It cannot see data from other organizations.
  • AI analysis runs on PlaneConnection’s infrastructure — your safety data is not sent to third-party AI services.
  • De-identified data patterns may be used to improve the overall model, but identifiable records are never shared.
  • See Manage Privacy for your organization’s data governance controls.

Related Articles

Use the Safety AI Assistant

Basic guide to opening ALI and asking safety questions.

AI Safety Features Reference

Technical reference for all AI-powered safety features.

Submit a Natural Language Report

Use AI to convert plain-text descriptions into structured reports.

Manage Investigations

Using AI Assist during the investigation workflow.