Who should read this: Safety managers, investigators, accountable executives, and anyone
who uses AI-generated insights for safety decision-making.

Prerequisites: Access to the Safety module. Familiarity with the basic AI Assistant
workflow covered in How to Use the Safety AI Assistant.
How ALI Generates Recommendations
ALI analyzes your workspace’s safety data using three layers:

- Pattern recognition — ALI examines your historical reports, investigations, CPAs, hazard registry entries, and FRAT data to identify recurring themes, correlations, and trends.
- Regulatory knowledge — ALI cross-references findings against 14 CFR Part 5 requirements, AC 120-92D guidance, ICAO Annex 19 standards, and your organization’s documented policies.
- Contextual analysis — when opened from a specific record (report, investigation, CPA), ALI has the full context of that record including related records, contributing factors, and historical precedents.
Understanding Confidence Levels
Every AI recommendation includes a confidence indicator. This tells you how certain the AI is about its analysis.

| Confidence Level | Indicator | What It Means | How to Act |
|---|---|---|---|
| High | Green badge | Strong data support. Multiple records corroborate the finding. Clear regulatory alignment. | Consider implementing directly after brief review. |
| Medium | Yellow badge | Moderate data support. Some records support the finding but the pattern is not definitive. | Investigate further before acting. Validate against additional data. |
| Low | Red badge | Limited data. The AI identified a possible pattern but the evidence is thin or the analysis is novel. | Treat as a hypothesis. Gather more data before making decisions. |
What influences confidence
- Data volume — more reports and records in your workspace produce higher-confidence analysis.
- Data recency — recent data is weighted more heavily. Patterns from the last 90 days receive higher confidence than those from 2+ years ago.
- Pattern consistency — if multiple independent data sources point to the same conclusion, confidence increases.
- Regulatory clarity — recommendations tied to specific regulatory requirements (e.g., “this investigation is required by 14 CFR 5.55”) have inherently high confidence.
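To make the four factors above concrete, here is a minimal sketch of how they could combine into a High/Medium/Low level. This is illustrative only — ALI’s actual scoring is internal, and every function name, weight, and threshold below is a hypothetical assumption, not PlaneConnection’s algorithm.

```python
from datetime import date, timedelta

def recency_weight(record_date: date, today: date) -> float:
    """Data recency: records from the last 90 days count fully; older ones decay."""
    age_days = (today - record_date).days
    if age_days <= 90:
        return 1.0
    if age_days >= 730:  # 2+ years old: minimal weight
        return 0.2
    # Linear decay between 90 days and 2 years (illustrative choice)
    return 1.0 - 0.8 * (age_days - 90) / (730 - 90)

def confidence_level(record_dates, consistent_sources, regulatory_citation, today):
    """Combine data volume, recency, pattern consistency, and regulatory clarity."""
    volume = sum(recency_weight(d, today) for d in record_dates)
    score = min(volume / 10.0, 1.0)                     # data volume + recency
    score += 0.3 if consistent_sources >= 2 else 0.0    # pattern consistency
    score += 0.5 if regulatory_citation else 0.0        # e.g. tied to 14 CFR 5.55
    if score >= 1.0:
        return "High"
    if score >= 0.5:
        return "Medium"
    return "Low"
```

Under this sketch, eight recent corroborating records plus a regulatory citation would score High, while a single two-year-old record would score Low — matching the badge descriptions in the table above.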
When to Trust AI Recommendations
AI recommendations are most reliable in these scenarios:

Pattern identification
ALI excels at finding patterns across hundreds of reports that would take hours to identify
manually. Trust pattern analysis when it cites specific source records.
Regulatory cross-referencing
ALI accurately maps findings to specific CFR sections and advisory circulars. The regulatory
knowledge base is comprehensive and current.
Trend analysis
Multi-month trend analysis across report types, categories, and severity levels is a strength.
ALI can identify gradual shifts that are invisible in day-to-day operations.
Drafting assistance
AI-drafted summaries, investigation narratives, and CPA language provide a solid starting point
that saves significant time.
When to Override AI Recommendations
Exercise independent judgment and override AI suggestions in these situations:

Context the AI cannot access
ALI does not have access to:

- Verbal conversations — crew debriefs, phone calls, or hallway conversations that provide context.
- Physical environment — weather conditions at the time of an event beyond what is recorded in the system.
- Organizational politics — interpersonal dynamics, union considerations, or management context.
- External intelligence — information from other operators, OEM service bulletins not yet entered in the system, or FAA guidance issued after the AI’s last update.
Regulatory edge cases
While ALI’s regulatory knowledge is comprehensive, edge cases exist where the correct interpretation depends on your organization’s specific operations specifications, exemptions, or LODAs (Letters of Deviation Authority). Always verify regulatory recommendations against your ops specs.

Novel situations
If the AI has not seen a similar event in your workspace’s history, its confidence will be low and its recommendations may not account for unique aspects of the situation. Novel events require experienced human judgment.

The Feedback Loop
ALI improves based on how you interact with its recommendations. This feedback loop is how the AI learns your organization’s patterns and preferences.

How to provide feedback
When ALI presents a recommendation or analysis:

- If you act on the recommendation, click Accept or implement it through the suggested action cards.
- If you modify the recommendation before implementing it, the AI notes the modification.
- If you dismiss the recommendation, include a reason where possible — dismissals teach the AI what counts as a false positive in your operation.
- If you add a comment, that context informs future analysis.
How feedback improves the AI
- Accepted recommendations reinforce the patterns and reasoning the AI used.
- Dismissed recommendations teach the AI to adjust its thresholds and avoid similar false positives.
- Modified recommendations help the AI understand the gap between its initial analysis and the correct conclusion.
- Comments provide rich context that helps the AI understand organizational nuances.
Feedback effects are workspace-scoped. Your feedback improves the AI for your organization only —
it does not affect other workspaces. This ensures the AI adapts to your specific operational
context, fleet composition, and safety culture.
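The workspace scoping described above can be pictured with a small sketch: each workspace keeps its own alert threshold, so feedback in one workspace never shifts another’s behavior. The class, field names, and step size here are hypothetical illustrations, not PlaneConnection’s actual implementation or API.

```python
class WorkspaceFeedback:
    """Hypothetical per-workspace feedback state (illustrative only)."""

    def __init__(self, threshold: float = 0.5):
        # Minimum pattern strength before the AI raises a recommendation
        self.threshold = threshold

    def record(self, action: str, step: float = 0.05):
        if action == "accept":
            # Reinforcement: surface similar patterns sooner
            self.threshold = max(0.1, self.threshold - step)
        elif action == "dismiss":
            # False positive: require stronger evidence next time
            self.threshold = min(0.9, self.threshold + step)

ops_a = WorkspaceFeedback()
ops_b = WorkspaceFeedback()
ops_a.record("dismiss")
ops_a.record("dismiss")
# Only workspace A's threshold moves; workspace B is unaffected.
```

The design point is the isolation: two operators with different fleets and safety cultures end up with different thresholds, which is why consistent feedback early in adoption pays off.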
AI-Powered Features Across the Safety Module
ALI powers several features beyond the chat interface:

| Feature | Where | What It Does |
|---|---|---|
| Natural Language Reporting | Safety > Reports > New Report | Converts plain-text event descriptions into structured report fields. |
| Report Categorization | Automatic on report submission | Suggests report type, category, and severity based on the description. |
| Investigation Insights | Investigation detail page > AI Assist | Identifies contributing factors and suggests lines of inquiry based on similar past investigations. |
| CPA Suggestions | After investigation findings are entered | Recommends corrective/preventive actions based on the root cause analysis. |
| Hazard Identification | Hazard Registry > AI Analysis | Identifies potential new hazards from patterns across recent reports. |
| SPI Anomaly Detection | Safety > SPIs > Anomaly Alerts | Flags Safety Performance Indicators that deviate from expected trends. |
| SmartScore Factor Analysis | SmartScore pilot detail page | Explains which factors most affect a pilot’s score and suggests improvement actions. |
| Compliance Gap Identification | Safety > Compliance | Identifies areas where your SMS documentation or activities may not fully meet 14 CFR Part 5 elements. |
Best Practices for Working with AI
Document your AI-assisted decisions
When AI analysis informs a safety decision, note it in the record. For example: “Contributing
factor analysis assisted by ALI; verified against source records RPT-2026-00123 and
RPT-2026-00145.” This creates an audit trail and demonstrates due diligence under 14 CFR 5.71.
Cross-reference AI citations
Always click through to the source records cited by ALI. Verify that the AI’s interpretation of
the data matches the original record. Citation verification takes seconds and prevents acting on
misinterpreted data.
Use contextual mode for deeper analysis
Opening ALI from within a specific record (report, investigation, CPA) provides much richer
analysis than asking about a record in the general chat. The contextual mode gives the AI access
to the full record and its relationships.
Iterate on your prompts
If the first response is too broad, narrow your question. Instead of “What should we do about
maintenance issues?”, try “What patterns exist in maintenance-related safety reports for our
Citation X fleet in the last 6 months?” Specificity produces better results.
Provide feedback consistently
The more feedback you provide, the faster the AI adapts to your organization. Make it a habit to
accept or dismiss recommendations with a reason, especially early in your adoption of the AI
features.
Data Privacy and AI
- ALI only accesses data within your workspace. It cannot see data from other organizations.
- AI analysis runs on PlaneConnection’s infrastructure — your safety data is not sent to third-party AI services.
- De-identified data patterns may be used to improve the overall model, but identifiable records are never shared.
- See Manage Privacy for your organization’s data governance controls.
Related
Use the Safety AI Assistant
Basic guide to opening ALI and asking safety questions.
AI Safety Features Reference
Technical reference for all AI-powered safety features.
Submit a Natural Language Report
Use AI to convert plain-text descriptions into structured reports.
Manage Investigations
Using AI Assist during the investigation workflow.