KSI-INR-RPI—Reviewing Past Incidents
Formerly KSI-INR-02
Control Description
NIST 800-53 Controls
Trust Center Components
Ways to express your implementation of this indicator — approaches vary by organization size, complexity, and data sensitivity.
From the field: Mature implementations express continuous improvement through tracked post-incident reviews — PIR recommendations tracked as backlog items with implementation dates, improvement velocity measured over time, and playbooks updated automatically based on lessons learned. This demonstrates a learning organization, not just an incident-handling one.
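One hedged way to evidence that kind of follow-through is to compute implementation metrics directly from an export of PIR action items. The sketch below assumes a file named pir_actions.json with illustrative fields (status, due_date); neither the filename nor the schema is prescribed by FedRAMP or any particular tool.

```bash
# Minimal sketch: summarize PIR recommendation follow-through from an exported
# backlog of action items. Assumes pir_actions.json is a JSON array with
# hypothetical fields: id, status ("open" | "implemented"), due_date (YYYY-MM-DD).
jq --arg today "$(date -u +%Y-%m-%d)" '
  {
    total_recommendations: length,
    implemented: (map(select(.status == "implemented")) | length),
    implementation_rate: ((map(select(.status == "implemented")) | length) / length),
    open_past_due: (map(select(.status == "open" and .due_date < $today)) | length)
  }' pir_actions.json
```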
Post-Incident Review Process
How lessons are captured and improvements tracked — PIR process with recommendation tracking and implementation verification (a record completeness sketch follows this component list)
Incident Improvement Tracking
Tracking of post-incident improvements and their implementation status — evidence that recommendations lead to action
Incident Response Playbooks
Playbook summaries for common incident types — structured response procedures updated based on PIR findings
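As a sketch of how the components above can be made machine-checkable, the following query flags PIR records that are missing fields an assessor would expect to see. It assumes PIR records are exported to a file named pir_reviews.json; both the filename and the required field names are illustrative, not a mandated schema.

```bash
# Minimal sketch: list PIR records missing expected fields. Assumes
# pir_reviews.json is a JSON array of PIR records; the fields checked here
# (root_cause, recommendations, owner, target_date, verified_on) are examples.
jq 'map({incident_id,
         missing: (["root_cause", "recommendations", "owner", "target_date", "verified_on"] - keys)})
    | map(select((.missing | length) > 0))' pir_reviews.json
```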
Programmatic Queries
CLI Commands
```bash
pd incident list --status resolved --since "180 days ago" --output json | jq '.[] | {title,urgency,created_at,resolved_at}'
pd analytics incident list --since "90 days ago" --output json | jq '.[] | {description,seconds_to_resolve,urgency}'
```
20x Assessment Focus Areas
Aligned with FedRAMP 20x Phase Two assessment methodology
Completeness & Coverage:
- Does your incident review scope include all incident types — security events, availability incidents, near-misses, and third-party incidents that affected your environment?
- How do you ensure pattern analysis spans a sufficient time window to identify recurring issues, not just recent incidents? (A recurrence-analysis sketch follows this list.)
- Are incidents from all sources analyzed — including those detected by automated tools, user reports, third-party notifications, and threat intelligence feeds?
- How do you correlate internal incidents with industry-wide vulnerability disclosures and threat intelligence to identify broader patterns?
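As one illustrative way to approach the time-window and recurrence questions above, the sketch below counts how often the same incident title recurs across a 180-day export. It assumes the raw output of the `pd incident list` command shown under CLI Commands has been saved to a file named incidents.json (before the jq projection); grouping by title is only a rough proxy for incident type.

```bash
# Minimal sketch: surface recurring incident types over the review window.
# Assumes incidents.json holds the raw JSON array from
# `pd incident list --status resolved --since "180 days ago" --output json`.
jq 'group_by(.title)
    | map({type: .[0].title, occurrences: length})
    | map(select(.occurrences > 1))
    | sort_by(-.occurrences)' incidents.json
```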
Automation & Validation:
- What automated analytics or machine learning identify patterns across your incident history that human reviewers might miss?
- How do you validate that identified patterns actually represent systemic vulnerabilities rather than coincidental similarities?
- What automated correlation links incidents to common root causes — shared vulnerabilities, misconfigured controls, or specific attack techniques?
- When a pattern is identified, what automated workflow ensures it becomes a tracked remediation item with accountability and deadline? (See the sketch after this list for one way to generate tracked items from recurring patterns.)
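The sketch below is one hedged illustration of turning recurring patterns into remediation-item stubs that can be imported into whatever tracker you use. It reuses the hypothetical incidents.json export from the previous sketch; the output shape (summary, owner, due_date) is illustrative and is not any tracker's import format.

```bash
# Minimal sketch: emit one remediation-item stub per recurring incident pattern.
# Owner and due date are placeholders to be filled by your ticketing workflow.
jq '
  group_by(.title)
  | map(select(length > 1))
  | map({
      summary: ("Recurring incident pattern: " + .[0].title),
      occurrences: length,
      owner: "UNASSIGNED",
      due_date: (((now | floor) + 30 * 86400) | todate | .[:10])   # 30-day placeholder deadline
    })' incidents.json
```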
Inventory & Integration:
- What incident data repository or SIEM supports historical incident analysis and pattern detection?
- How do incident review findings integrate with your vulnerability management, risk register, and detection engineering processes?
- Are incident records structured with consistent taxonomy (MITRE ATT&CK, kill chain phase, affected component) to support automated pattern analysis? (A taxonomy-based grouping sketch follows this list.)
- How do findings from incident pattern analysis feed into your threat modeling and security architecture decisions?
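To illustrate why a consistent taxonomy pays off, the sketch below groups taxonomy-tagged incident records by ATT&CK technique and lists the components they touched. It assumes records have been enriched and exported to a file named tagged_incidents.json with hypothetical fields attack_technique and component; the filename and field names are illustrative.

```bash
# Minimal sketch: pattern analysis over taxonomy-tagged incident records.
# Assumes tagged_incidents.json is a JSON array with hypothetical fields
# attack_technique (e.g. "T1078") and component (affected system).
jq 'group_by(.attack_technique)
    | map({technique: .[0].attack_technique,
           incidents: length,
           components: (map(.component) | unique)})
    | sort_by(-.incidents)' tagged_incidents.json
```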
Continuous Evidence & Schedules:
- How frequently are past incidents reviewed for patterns, and what evidence proves each review was completed?
- Is incident trend data (frequency by type, severity trends, recurrence rates) available via API or dashboard? (A trend-summary sketch follows this list.)
- What evidence demonstrates that pattern analysis findings led to implemented preventive measures?
- How do you measure whether preventive measures from past pattern analysis actually reduced incident recurrence?
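As a hedged example of producing trend evidence on a schedule, the sketch below summarizes incident volume and mean time to resolve by urgency from the 90-day analytics export shown under CLI Commands. It assumes the raw `pd analytics incident list` output has been saved to a file named analytics.json (the filename is illustrative).

```bash
# Minimal sketch: trend summary by urgency from the analytics export.
# Assumes analytics.json holds the raw JSON array from
# `pd analytics incident list --since "90 days ago" --output json`.
jq 'group_by(.urgency)
    | map({urgency: .[0].urgency,
           incidents: length,
           mean_hours_to_resolve:
             ((map(.seconds_to_resolve | select(. != null)) | add / length) / 3600 | round)})' analytics.json
```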
Update History