KSI-PIY-RSD — Reviewing Security in the SDLC
Formerly KSI-PIY-04
Control Description
NIST 800-53 Controls
Trust Center Components
Ways to express your implementation of this indicator — approaches vary by organization size, complexity, and data sensitivity.
From the field: Mature implementations express data minimization through automated enforcement — DLP policies preventing excessive data collection, retention rules auto-expiring data beyond defined periods, and minimization metrics showing data collection is intentional and bounded rather than aspirational.
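One way to make the "auto-expiring" part concrete is a storage lifecycle rule. A minimal sketch with the AWS CLI, assuming an S3 bucket named analytics-data and a 90-day retention window (both are illustrative, not recommendations):

# Expire objects older than 90 days so retention is enforced by the platform
# rather than by manual cleanup (bucket name and window are placeholders)
aws s3api put-bucket-lifecycle-configuration \
  --bucket analytics-data \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "retention-90d",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Expiration": {"Days": 90}
    }]
  }'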
De-identification and Anonymization
De-identification capabilities as a product feature — techniques used for analytics and non-production environments
Purpose Limitation Documentation
Data processing purposes with legal basis for each — expressing intentional, bounded data collection
Data Minimization Policy
Human-readable data minimization principles — documents intent behind automated retention and collection controls
Programmatic Queries
CLI Commands
# Confirm main-branch protection enforces required reviews, status checks, and admin inclusion
gh api repos/{owner}/{repo}/branches/main/protection --jq '{reviews: .required_pull_request_reviews.required_approving_review_count, checks: .required_status_checks.contexts, admins: .enforce_admins.enabled}'

# List the ten most recent workflow runs with their outcomes
gh run list --limit 10 --json name,status,conclusion,event

20x Assessment Focus Areas
Aligned with FedRAMP 20x Phase Two assessment methodology
Completeness & Coverage:
- Does your secure SDLC cover all phases — design (threat modeling), development (SAST, SCA), testing (DAST, penetration testing), deployment (IaC scanning), and maintenance (dependency updates)?
- Are all software components subject to the secure SDLC — including internal services, customer-facing applications, infrastructure-as-code, and configuration code? (A per-repository spot check is sketched after this list.)
- How do you ensure CISA Secure By Design principles are applied — including memory-safe languages, secure defaults, eliminating default passwords, and reducing attack surface?
- Are third-party or open-source components incorporated into the CSO subject to the same SDLC security requirements as internally developed code?
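One way to spot-check phase coverage for a single repository is to list its CI workflows and flag any expected scan that has no match. A minimal sketch with the gh CLI; the workflow names sast, sca, secret-scan, and iac-scan are placeholders for your own conventions:

# List workflow names in the repo, then report any expected scan that is missing
workflows=$(gh api repos/{owner}/{repo}/actions/workflows --jq '.workflows[].name')
for scan in sast sca secret-scan iac-scan; do
  echo "$workflows" | grep -qi "$scan" || echo "missing: $scan"
done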
Automation & Validation:
- What automated security gates exist in the CI/CD pipeline (SAST, DAST, SCA, secret scanning, container scanning), and do they block merges or deployments on failure?
- How do you measure whether secure SDLC practices are effective — do you track finding rates by category (injection, XSS, auth issues) trending over time? (A query sketch follows this list.)
- What happens when a security gate fails — is the developer provided with actionable guidance, and how quickly must the finding be resolved before the pipeline unblocks?
- How do you validate that threat modeling is conducted for new features and that identified risks are addressed before release?
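To put numbers behind the finding-rate question above, open code scanning alerts can be grouped by rule. A minimal sketch with the gh CLI, assuming GitHub code scanning is where SAST findings aggregate (swap in your own tooling's API otherwise):

# Count open code scanning alerts per rule to see which categories dominate
# (covers the first 100 open alerts; paginate for complete counts)
gh api 'repos/{owner}/{repo}/code-scanning/alerts?state=open&per_page=100' \
  --jq 'group_by(.rule.id) | map({rule: .[0].rule.id, open: length}) | sort_by(-.open)'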
Inventory & Integration:
- What SAST, DAST, SCA, and secret scanning tools are integrated into your CI/CD pipeline, and how do findings aggregate into a single view?
- How does your secure SDLC integrate with your vulnerability management system to track findings from code to remediation?
- Are SDLC security policies (required scans, approval gates, finding severity thresholds) defined as pipeline-as-code and version-controlled?
- How do you track which repositories and services have full SDLC security coverage versus partial or no coverage? (An inventory sweep is sketched below.)
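Coverage tracking can start from the repository inventory itself. A minimal sketch with the gh CLI, assuming your-org is the organization and the presence of code scanning analyses is the coverage signal (both are assumptions to adapt):

# For each repo in the org, check whether any code scanning analyses exist;
# an error (typically 404) marks the repo as lacking coverage
gh repo list your-org --limit 200 --json name --jq '.[].name' | while read -r repo; do
  if gh api "repos/your-org/$repo/code-scanning/analyses?per_page=1" --silent 2>/dev/null; then
    echo "covered: $repo"
  else
    echo "no coverage: $repo"
  fi
done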
Continuous Evidence & Schedules:
- How do you demonstrate that every production release in the past 90 days passed through the secure SDLC pipeline with all security gates? (A date-ranged query is sketched after this list.)
- Is SDLC security data (scan results, gate pass/fail, finding counts, remediation timelines) available via API or dashboard?
- How frequently is the secure SDLC reviewed for effectiveness, and what evidence shows improvements based on those reviews?
- What evidence demonstrates alignment with CISA Secure By Design principles in recent releases?
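The 90-day question above maps directly onto the workflow-run API, which accepts a created date range. A minimal sketch with the gh CLI, assuming the deployment workflow file is named deploy.yml (a placeholder):

# Pull the conclusion of every completed deploy run created in the last 90 days;
# any non-success conclusion is evidence that needs an explanation
# (date -d is GNU date; use date -v-90d on macOS)
gh api -X GET "repos/{owner}/{repo}/actions/workflows/deploy.yml/runs" \
  -f created=">=$(date -d '90 days ago' +%F)" -f status=completed -f per_page=100 \
  --jq '.workflow_runs[] | {run: .id, created: .created_at, conclusion: .conclusion}'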