
KSI-IAM-SUS: Responding to Suspicious Activity

LOW · MODERATE

Formerly KSI-IAM-06

>Control Description

Automatically disable or otherwise secure accounts with privileged access in response to suspicious activity.
Defined terms:
Vulnerability Response
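As a minimal sketch of the control's intent, the decision logic below maps a suspicious-activity alert on a privileged account to an automated containment action. The alert schema, indicator names, and action labels are illustrative assumptions, not part of the control; a real deployment would wire the "disable" branch to the identity provider's suspension API.

```python
from dataclasses import dataclass

# Hypothetical alert shape; field and indicator names are illustrative.
@dataclass
class Alert:
    account: str
    is_privileged: bool
    indicator: str  # e.g. "impossible_travel", "failed_mfa_burst"

# Indicators that warrant automatic containment (assumed policy, not FedRAMP's).
AUTO_DISABLE_INDICATORS = {"impossible_travel", "failed_mfa_burst", "privilege_escalation"}

def response_action(alert: Alert) -> str:
    """Decide the automated action for a suspicious-activity alert."""
    if alert.is_privileged and alert.indicator in AUTO_DISABLE_INDICATORS:
        return "disable_account"   # disable the account, e.g. via the IdP's API
    if alert.is_privileged:
        return "step_up_auth"      # "otherwise secure": force re-authentication
    return "log_only"              # non-privileged accounts fall outside this KSI
```

The split between "disable" and "otherwise secure" mirrors the control wording: not every indicator justifies locking an administrator out, but every privileged alert gets some automated action.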

>NIST 800-53 Controls

>Trust Center Components (3)

Ways to express your implementation of this indicator — approaches vary by organization size, complexity, and data sensitivity.

From the field: Mature implementations express separation of duties through automated enforcement — IAM platforms detecting SoD conflicts during role assignment, policy engines preventing incompatible role combinations, and conflict detection metrics tracked as dashboard indicators. Role conflicts are prevented by design through technical controls, not just policy.
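The "prevented by design" pattern above can be sketched as a conflict check that runs at role-assignment time. The conflict pairs and role names here are illustrative placeholders, not a real SoD matrix; the point is that an incompatible combination is rejected before it exists, rather than detected afterwards.

```python
# Illustrative SoD matrix: each pair of roles must not be held by one identity.
SOD_CONFLICTS = {
    frozenset({"payment-approver", "payment-initiator"}),
    frozenset({"deploy-approver", "deploy-author"}),
}

def conflicting_pairs(existing_roles: set[str], new_role: str) -> list[frozenset]:
    """Return the SoD conflicts that granting `new_role` would create."""
    return [pair for pair in SOD_CONFLICTS
            if new_role in pair and (pair - {new_role}) & existing_roles]

def assign_role(existing_roles: set[str], new_role: str) -> set[str]:
    """Grant the role only if no incompatible combination results."""
    if conflicting_pairs(existing_roles, new_role):
        raise PermissionError(f"SoD conflict: cannot grant {new_role}")
    return existing_roles | {new_role}
```

Because the check runs inside the assignment path, a denied grant is also a loggable event, which is what feeds the conflict-detection dashboard metrics mentioned above.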

Role Conflict Detection

Product Security Features

Automated SoD conflict detection and enforcement — IAM platform prevents incompatible role assignments in real time

Automated: IAM platform detects SoD violations in real-time during role assignment

Separation of Duties Matrix

Documents & Reports

SoD matrix expressing incompatible roles and how conflicts are prevented — reference for automated enforcement rules

SoD Compliance Reports

Evidence Artifacts

SoD compliance reports showing violation status and remediation — generated from IAM platform enforcement data
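A compliance report of the kind described above is essentially a roll-up of the platform's enforcement events. The event schema below (action and status fields) is an assumption for illustration; any IAM platform export with equivalent fields would work.

```python
from collections import Counter

def sod_report(events: list[dict]) -> dict:
    """Roll IAM enforcement events up into a violation-status summary.

    `events` is a list of dicts with a "status" key such as
    "prevented", "remediated", or "open" (illustrative values).
    """
    by_status = Counter(e["status"] for e in events)
    return {
        "total": len(events),
        "open": by_status.get("open", 0),      # violations still needing remediation
        "by_status": dict(by_status),
    }
```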

>Programmatic Queries


CLI Commands

Search for failed login attempts in the last 24 hours
splunk search 'index=main sourcetype=access_combined action=failure earliest=-24h | stats count by src_ip user | sort -count | head 20'
Detect brute-force patterns in the last hour
splunk search 'index=main action=failure earliest=-1h | stats count by user src_ip | where count > 10 | sort -count'
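For environments without Splunk access, the brute-force query above can be replicated over exported authentication logs. This is a sketch assuming events with `user`, `src_ip`, and `action` fields; it mirrors the SPL `stats count by user src_ip | where count > 10` logic.

```python
from collections import Counter

def brute_force_candidates(log_events: list[dict], threshold: int = 10) -> list[tuple]:
    """Count failures per (user, src_ip) pair and keep those above the threshold,
    highest count first — the same shape as the Splunk query's output."""
    counts = Counter(
        (e["user"], e["src_ip"]) for e in log_events if e.get("action") == "failure"
    )
    return sorted(
        ((pair, n) for pair, n in counts.items() if n > threshold),
        key=lambda item: item[1],
        reverse=True,
    )
```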

>20x Assessment Focus Areas

Aligned with FedRAMP 20x Phase Two assessment methodology

Completeness & Coverage:

  • Does automated suspicious activity detection and response cover all privileged account types — cloud admin accounts, database admins, CI/CD pipeline accounts, and root/break-glass accounts?
  • What suspicious activity indicators trigger automated account actions — impossible travel, unusual API calls, off-hours access, failed MFA attempts, privilege escalation patterns?
  • How do you ensure detection covers privileged activity across all systems, not just the primary identity provider?
  • Are there privileged accounts excluded from automated suspension (e.g., break-glass accounts), and what compensating controls apply to those?
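Of the indicators listed above, impossible travel is the most mechanical to compute. The sketch below flags two logins whose implied speed exceeds a plausible airliner speed; the login schema and the 900 km/h threshold are illustrative assumptions.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

def impossible_travel(login_a: dict, login_b: dict, max_speed_kmh: float = 900) -> bool:
    """Flag a pair of logins whose implied travel speed is physically implausible.

    Each login is a dict with "lat", "lon", and "ts" (seconds) — an assumed schema.
    """
    km = haversine_km(login_a["lat"], login_a["lon"], login_b["lat"], login_b["lon"])
    hours = abs(login_b["ts"] - login_a["ts"]) / 3600
    return hours > 0 and km / hours > max_speed_kmh
```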

Automation & Validation:

  • What is the maximum time between detection of suspicious privileged activity and automatic account disablement or restriction?
  • How do you prevent false positives from disrupting legitimate administrative work — what tuning and safeguards are in place?
  • What happens if the automated response system itself is compromised or disabled by an attacker — what secondary detection exists?
  • How do you test automated suspicious activity response — do you run simulated attacks or adversary emulation against privileged accounts?
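The first question above — maximum time from detection to disablement — is answerable directly from incident records. A minimal sketch, assuming each incident carries detection and disablement timestamps in seconds and a 5-minute target (an assumed SLA, not a FedRAMP requirement):

```python
def latency_report(incidents: list[dict], target_s: float = 300) -> dict:
    """Summarize detection-to-disablement latency against a target.

    Each incident is a dict with "detected_at" and "disabled_at" epoch seconds
    (assumed schema). Returns the worst-case latency and the breach count.
    """
    latencies = [i["disabled_at"] - i["detected_at"] for i in incidents]
    return {
        "max_s": max(latencies),
        "breaches": sum(1 for s in latencies if s > target_s),
    }
```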

Inventory & Integration:

  • What behavioral analytics or UEBA platform detects suspicious privileged activity, and how does it integrate with your IdP to disable accounts?
  • How do automated account actions integrate with your incident response workflow to ensure human investigation follows automated containment?
  • What tools monitor privileged session activity (session recording, command logging) to provide context for suspicious activity alerts?
  • How does the account restoration process integrate with your ticketing system to ensure investigation is completed before access is restored?

Continuous Evidence & Schedules:

  • What evidence shows the automated detection and response system is operational and has been effective over the past 90 days?
  • Are detection rules and response actions auditable — can assessors review the criteria, thresholds, and recent trigger events via API?
  • How do you demonstrate that false positive and false negative rates are tracked and that detection rules are tuned over time?
  • What evidence shows that every automated account action resulted in proper investigation and documented resolution?
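The false-positive tracking asked about above reduces to classifying resolved trigger events. This sketch assumes each trigger event records a "resolution" field with values like "true_positive" and "false_positive" (an illustrative schema):

```python
def tuning_metrics(triggers: list[dict]) -> dict:
    """Compute the false-positive rate from resolved detection triggers."""
    resolved = [t for t in triggers if t["resolution"] in {"true_positive", "false_positive"}]
    fp = sum(1 for t in resolved if t["resolution"] == "false_positive")
    return {
        "resolved": len(resolved),
        "false_positive_rate": fp / len(resolved) if resolved else 0.0,
    }
```

Tracking this metric over rolling 90-day windows is one way to evidence that detection rules are tuned over time rather than left static.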

Update History

2026-02-04: Removed italics and changed the ID as part of new standardization in v0.9.0-beta; no material changes.
