KSI-SVC-EIS: Evaluating and Improving Security
Formerly KSI-SVC-01
Control Description
NIST 800-53 Controls
Trust Center Components
Ways to express your implementation of this indicator — approaches vary by organization size, complexity, and data sensitivity.
From the field: Mature implementations express network isolation through policy-enforced segmentation — zero trust architecture with identity-aware access, micro-segmentation rules verified by firewall APIs, and network security monitoring dashboards showing east-west traffic patterns. Defense-in-depth is demonstrated through multiple automated isolation layers.
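For example, micro-segmentation rules can be verified programmatically against the firewall API rather than by manual review. The following is a minimal sketch, assuming AWS security groups as the enforcement point and boto3 with credentials already configured; the function name and the 0.0.0.0/0 policy check are illustrative, and you would substitute your own firewall or micro-segmentation API and rule set.

```python
# Minimal sketch: verify segmentation rules against a firewall API.
# Assumes AWS security groups and configured boto3 credentials; adapt the
# query and the policy check to your own enforcement layer.
import boto3

ec2 = boto3.client("ec2")

def find_overly_permissive_ingress():
    """Flag ingress rules open to 0.0.0.0/0 as candidates for tighter segmentation."""
    findings = []
    paginator = ec2.get_paginator("describe_security_groups")
    for page in paginator.paginate():
        for sg in page["SecurityGroups"]:
            for perm in sg.get("IpPermissions", []):
                for ip_range in perm.get("IpRanges", []):
                    if ip_range.get("CidrIp") == "0.0.0.0/0":
                        findings.append({
                            "group": sg["GroupId"],
                            "protocol": perm.get("IpProtocol"),
                            "port": perm.get("FromPort", "all"),
                        })
    return findings

if __name__ == "__main__":
    for f in find_overly_permissive_ingress():
        print(f"Open ingress: {f['group']} {f['protocol']} port {f['port']}")
```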
Network Security Architecture
Architecture expressing network segmentation, firewall rules, and security zones — shows defense-in-depth through isolation layers
Network Security Monitoring
Dashboard expressing network security posture — IDS/IPS alerts, traffic anomalies, and segmentation enforcement status
Network Segmentation Enforcement
Automated enforcement of network segmentation policies — micro-segmentation rules preventing unauthorized lateral movement
Zero Trust Architecture Documentation
Zero trust implementation including micro-segmentation and identity-aware access
Programmatic Queries
CLI Commands
snyk test --all-projects --severity-threshold=medium
snyk container test <image>:<tag> --severity-threshold=high
20x Assessment Focus Areas
Aligned with FedRAMP 20x Phase Two assessment methodology
Completeness & Coverage:
- Does your security evaluation and improvement process cover all information resource categories: infrastructure, applications, data stores, identity systems, and network components?
- How do you ensure improvement evaluations consider all security dimensions: hardening, patching, architecture, access controls, encryption, and monitoring?
- Are improvements prioritized using risk-based criteria that weigh both likelihood and impact, not just raw severity scores? (A scoring sketch follows this list.)
- How do you identify improvement opportunities proactively (benchmarking, threat intelligence, industry best practices) rather than only in response to findings?
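A minimal sketch of risk-based prioritization, assuming a simple 1-to-5 scale for likelihood and impact; the item names, scores, and field names are illustrative and not tied to any particular tool, and the raw scanner severity is retained only for comparison.

```python
# Minimal sketch of risk-based prioritization: rank improvement items by
# likelihood x impact rather than by scanner severity alone.
from dataclasses import dataclass

@dataclass
class Improvement:
    title: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)
    severity: str    # raw scanner severity, kept for reference only

    @property
    def risk_score(self) -> int:
        return self.likelihood * self.impact

# Illustrative backlog entries, not real findings.
backlog = [
    Improvement("Enable IMDSv2 on legacy instances", likelihood=4, impact=3, severity="medium"),
    Improvement("Rotate long-lived service credentials", likelihood=3, impact=5, severity="high"),
    Improvement("Patch internal wiki XSS", likelihood=2, impact=2, severity="high"),
]

for item in sorted(backlog, key=lambda i: i.risk_score, reverse=True):
    print(f"{item.risk_score:>2}  {item.title}  (scanner severity: {item.severity})")
```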
Automation & Validation:
- What automated evaluation tools (CSPM, CWPP, benchmark scanners) continuously identify security improvement opportunities?
- How do you validate that implemented improvements actually strengthened security posture, through before/after measurement, re-scanning, or testing?
- What happens when an improvement opportunity is identified but deprioritized: how do you track the accepted risk and re-evaluate it periodically?
- How do you detect regression, where previously implemented improvements are undone by subsequent changes? (A snapshot comparison sketch follows this list.)
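One way to detect regression is to diff consecutive benchmark scan snapshots and flag checks that previously passed but now fail. A minimal sketch, assuming scan results can be reduced to a mapping of check ID to status; the check IDs and statuses shown are illustrative, and you would map your scanner's actual output into this shape.

```python
# Minimal sketch of regression detection: compare two benchmark scan snapshots
# and flag checks that moved from PASS to FAIL.
def find_regressions(previous: dict[str, str], current: dict[str, str]) -> list[str]:
    """Return check IDs that passed in the previous snapshot but fail in the current one."""
    return [
        check_id
        for check_id, status in current.items()
        if status == "FAIL" and previous.get(check_id) == "PASS"
    ]

# Illustrative snapshots keyed by benchmark check ID.
previous_scan = {"CIS-1.12": "PASS", "CIS-2.1.1": "PASS", "CIS-5.2": "FAIL"}
current_scan = {"CIS-1.12": "PASS", "CIS-2.1.1": "FAIL", "CIS-5.2": "FAIL"}

for check in find_regressions(previous_scan, current_scan):
    print(f"Regression: {check} passed in the previous scan but fails now")
```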
Inventory & Integration:
- What tools and processes make up your continuous security evaluation pipeline?
- How do security improvement findings integrate with your backlog management and sprint planning to ensure they are resourced?
- Are security improvement opportunities tracked in the same system as vulnerability findings and compliance gaps, or in a separate process?
- How do evaluation results from different tools (CSPM, vulnerability scanners, penetration tests, audits) aggregate into a unified improvement roadmap? (An aggregation sketch follows this list.)
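A minimal sketch of aggregation, assuming each tool's findings can be normalized to a resource identifier, an issue description, and a source label; the inputs shown are illustrative placeholders, and a real integration would pull them from each tool's API or export format.

```python
# Minimal sketch: normalize findings from different sources (CSPM, vulnerability
# scanner, pentest) into one improvement roadmap keyed by affected resource.
from collections import defaultdict

# Illustrative normalized findings; real data would come from each tool's API.
cspm_findings = [{"resource": "s3://logs-bucket", "issue": "Bucket not encrypted at rest", "source": "cspm"}]
vuln_findings = [{"resource": "web-frontend", "issue": "Outdated OpenSSL in base image", "source": "scanner"}]
pentest_findings = [{"resource": "web-frontend", "issue": "Session fixation", "source": "pentest"}]

roadmap = defaultdict(list)
for finding in cspm_findings + vuln_findings + pentest_findings:
    roadmap[finding["resource"]].append((finding["source"], finding["issue"]))

for resource, items in roadmap.items():
    print(resource)
    for source, issue in items:
        print(f"  [{source}] {issue}")
```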
Continuous Evidence & Schedules:
- How do you demonstrate that security evaluation and improvement is persistent rather than episodic?
- Is security posture trending data (improvement counts, risk score changes, benchmark conformance) available via API or dashboard? (A trend export sketch follows this list.)
- What evidence shows that security improvements implemented over the past year have measurably improved your posture?
- How do you prove that the evaluation cadence is maintained and that identified improvements are implemented within defined timelines?
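A minimal sketch of trend evidence exported as JSON so that a dashboard or assessor tooling can retrieve it programmatically; the metric names and monthly values are illustrative, and in practice they would be computed from your tracking system rather than hard-coded.

```python
# Minimal sketch: expose month-over-month posture metrics as machine-readable
# evidence (improvements closed, open findings, benchmark conformance).
import json

# Illustrative values; derive these from your tracking and scanning systems.
posture_trend = [
    {"month": "2025-01", "improvements_closed": 14, "open_findings": 42, "benchmark_conformance": 0.87},
    {"month": "2025-02", "improvements_closed": 9, "open_findings": 37, "benchmark_conformance": 0.89},
    {"month": "2025-03", "improvements_closed": 11, "open_findings": 31, "benchmark_conformance": 0.91},
]

print(json.dumps({"ksi": "KSI-SVC-EIS", "trend": posture_trend}, indent=2))
```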
Update History