KSI-CMT-VTD—Validating Throughout Deployment
Formerly KSI-CMT-03
Control Description
NIST 800-53 Controls
Trust Center Components (3)
Ways to express your implementation of this indicator — approaches vary by organization size, complexity, and data sensitivity.
From the field: Mature implementations express testing rigor through pipeline metrics — unit, integration, security, and acceptance test pass rates published as dashboard indicators. Test gates are enforced in CI/CD pipelines, with coverage thresholds blocking deployment when testing falls below acceptable levels.
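A coverage threshold gate can be as simple as a script that fails the pipeline job when coverage drops below a floor, which blocks the dependent deployment step. A minimal sketch, assuming a coverage.json report produced by coverage.py's `coverage json` and an illustrative 80% threshold (the report path and threshold are assumptions, not part of this indicator):

```python
#!/usr/bin/env python3
"""Minimal coverage gate sketch: fail the CI job when line coverage
falls below a threshold. Assumes a coverage.json report produced by
`coverage json` (coverage.py); the 80% threshold is illustrative."""
import json
import sys

THRESHOLD = 80.0  # hypothetical organizational minimum


def main(report_path: str = "coverage.json") -> int:
    with open(report_path) as fh:
        report = json.load(fh)
    percent = report["totals"]["percent_covered"]
    print(f"line coverage: {percent:.1f}% (threshold {THRESHOLD}%)")
    # A non-zero exit code blocks the deployment job in most CI/CD systems.
    return 0 if percent >= THRESHOLD else 1


if __name__ == "__main__":
    sys.exit(main(*sys.argv[1:]))
```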
Test Coverage Reports
Automated testing coverage reports expressing validation rigor — generated from CI/CD pipelines with pass rates and coverage metrics
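One way to produce such a report is to roll up the JUnit-style XML that most test runners already emit into a single pass-rate summary. A minimal sketch, assuming reports land in a reports/ directory (the path and report layout are assumptions):

```python
#!/usr/bin/env python3
"""Sketch: summarize pass rates from JUnit-style XML reports emitted by a
CI/CD pipeline. Most runners (pytest, Maven Surefire, etc.) can emit this
format; the reports/*.xml glob is an assumption."""
import glob
import sys
import xml.etree.ElementTree as ET


def summarize(pattern: str = "reports/*.xml") -> dict:
    total = failed = errored = skipped = 0
    for path in glob.glob(pattern):
        root = ET.parse(path).getroot()
        # Handle both <testsuites> wrappers and bare <testsuite> roots.
        suites = root.iter("testsuite") if root.tag == "testsuites" else [root]
        for suite in suites:
            total += int(suite.get("tests", 0))
            failed += int(suite.get("failures", 0))
            errored += int(suite.get("errors", 0))
            skipped += int(suite.get("skipped", 0))
    executed = total - skipped
    passed = executed - failed - errored
    return {
        "total": total,
        "passed": passed,
        "failed": failed,
        "errors": errored,
        "skipped": skipped,
        "pass_rate": round(100.0 * passed / executed, 2) if executed else None,
    }


if __name__ == "__main__":
    print(summarize(*sys.argv[1:]))
```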
Pre-Production Environment Architecture
Architecture expressing staging/pre-prod environments used for change validation — shows environment parity with production
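Environment parity can also be checked mechanically rather than only asserted in a diagram. A minimal sketch, assuming each environment's settings are exported as flat JSON files and that a short allow-list covers differences that are expected (the file names and keys are assumptions):

```python
#!/usr/bin/env python3
"""Sketch: flag configuration drift between a pre-production environment and
production. Assumes each environment's settings are exported as flat JSON
(e.g., staging.json, production.json); keys in ALLOWED_DIFFERENCES are
expected to differ and are excluded from the parity check."""
import json
import sys

ALLOWED_DIFFERENCES = {"hostname", "replica_count"}  # hypothetical exceptions


def parity_gaps(staging_path: str, prod_path: str) -> dict:
    with open(staging_path) as fh:
        staging = json.load(fh)
    with open(prod_path) as fh:
        prod = json.load(fh)
    gaps = {}
    for key in sorted(set(staging) | set(prod)):
        if key in ALLOWED_DIFFERENCES:
            continue
        if staging.get(key) != prod.get(key):
            gaps[key] = {"staging": staging.get(key), "production": prod.get(key)}
    return gaps


if __name__ == "__main__":
    drift = parity_gaps(*sys.argv[1:3])
    print(json.dumps(drift, indent=2))
    sys.exit(1 if drift else 0)  # non-zero exit can gate promotion to prod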
Testing and Validation Framework
How changes are validated before production deployment — testing pyramid, security scan requirements, and acceptance criteria
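The framework can be expressed as data (required checks per stage) so the gate logic itself is reviewable and auditable. A minimal sketch with hypothetical stage names, check names, and results input:

```python
#!/usr/bin/env python3
"""Sketch: express a validation framework as data, listing which checks must
pass at each pipeline stage before a change can promote. Stage names, check
names, and the `results` input are illustrative assumptions."""
import sys

REQUIRED_CHECKS = {  # hypothetical policy
    "build": ["unit-tests", "sast-scan", "dependency-scan"],
    "staging": ["integration-tests", "dast-scan"],
    "production": ["acceptance-tests", "smoke-tests"],
}


def evaluate(stage: str, results: dict) -> tuple[bool, list]:
    """Return (promote?, missing_or_failed) for one stage's check results."""
    required = REQUIRED_CHECKS.get(stage, [])
    blocked = [c for c in required if results.get(c) != "success"]
    return (not blocked, blocked)


if __name__ == "__main__":
    # Example: integration tests passed but the DAST scan did not run.
    ok, blocked = evaluate("staging", {"integration-tests": "success"})
    print("promote" if ok else f"blocked by: {', '.join(blocked)}")
    sys.exit(0 if ok else 1)
```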
Programmatic Queries
CLI Commands
gh run list --json name,status,conclusion,createdAt --limit 20
gh api repos/{owner}/{repo}/commits/<sha>/check-suites --jq '.check_suites[] | {app: .app.name, status: .status, conclusion: .conclusion}'
gh run view <run-id> --json jobs --jq '.jobs[] | {name,status,conclusion,startedAt,completedAt}'
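For evidence pipelines that prefer the REST API over the gh CLI, the same run conclusions can be aggregated programmatically. A minimal sketch, assuming a GITHUB_TOKEN environment variable and placeholder OWNER/REPO values, and requiring the requests library:

```python
#!/usr/bin/env python3
"""Sketch: aggregate recent GitHub Actions workflow-run conclusions into a
single summary via the REST API. OWNER/REPO and GITHUB_TOKEN are assumptions."""
import collections
import os

import requests

OWNER, REPO = "your-org", "your-repo"  # placeholders


def run_conclusions(limit: int = 20) -> collections.Counter:
    resp = requests.get(
        f"https://api.github.com/repos/{OWNER}/{REPO}/actions/runs",
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
        params={"per_page": limit},
        timeout=30,
    )
    resp.raise_for_status()
    runs = resp.json()["workflow_runs"]
    return collections.Counter(run["conclusion"] or "in_progress" for run in runs)


if __name__ == "__main__":
    print(dict(run_conclusions()))  # e.g. {"success": 18, "failure": 2}
```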
20x Assessment Focus Areas
Aligned with FedRAMP 20x Phase Two assessment methodology
Completeness & Coverage:
- Does automated testing cover all deployment stages — build, integration, staging, canary, and production — or are some stages validated only manually?
- How do you ensure test coverage includes security validation (SAST, DAST, dependency scanning) in addition to functional and performance testing?
- Are infrastructure changes (Terraform, CloudFormation) subject to the same automated validation pipeline as application code changes?
- When a new type of deployment artifact is introduced (e.g., a new microservice, a new cloud resource type), how do you ensure validation tests are created before the first deployment?
Automation & Validation:
- What happens when an automated validation gate fails — is the deployment blocked, rolled back, or only flagged, and what evidence shows the gate is enforced?
- How do you detect if automated tests themselves are broken, flaky, or silently passing when they should fail?
- What automated rollback or circuit-breaker mechanism activates when post-deployment validation detects a regression in production?
- How do you validate that security-specific tests (e.g., scanning for exposed secrets, misconfigured permissions) actually catch real issues — do you run red-team or mutation testing against the pipeline?
Inventory & Integration:
- What CI/CD platforms (GitHub Actions, GitLab CI, Jenkins, ArgoCD) run your validation pipeline, and how do you ensure all deployable artifacts pass through them?
- How do test results from different stages and tools aggregate into a single pass/fail decision for each deployment?
- Are deployment validation results integrated with your change management records so every change ticket links to its test results?
- How do you track test coverage metrics across the entire deployment pipeline, and are there components with no automated tests?
Continuous Evidence & Schedules:
- How do you demonstrate that every production deployment in the past 90 days passed through the full automated validation pipeline?
- Is deployment validation history (test results, gate decisions, rollback events) available via API or structured logs? (A minimal export sketch follows this list.)
- How do you measure and demonstrate that test coverage and validation rigor are improving over time rather than degrading?
- What evidence shows that failed validation gates actually prevented problematic changes from reaching production?
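Referenced from the list above: a minimal sketch that exports a 90-day deployment validation history as JSON Lines evidence, assuming GitHub Actions workflow runs are the system of record, with placeholder OWNER/REPO values and a GITHUB_TOKEN environment variable (requires the requests library):

```python
#!/usr/bin/env python3
"""Sketch: export a structured 90-day validation history (run name,
conclusion, timestamp, URL) as JSON Lines. OWNER/REPO, GITHUB_TOKEN, and
treating workflow runs as the system of record are assumptions."""
import datetime
import json
import os

import requests

OWNER, REPO = "your-org", "your-repo"  # placeholders
API = f"https://api.github.com/repos/{OWNER}/{REPO}/actions/runs"


def export_history(days: int = 90):
    cutoff = (datetime.date.today() - datetime.timedelta(days=days)).isoformat()
    headers = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}
    page = 1
    while True:
        resp = requests.get(
            API,
            headers=headers,
            params={"created": f">={cutoff}", "per_page": 100, "page": page},
            timeout=30,
        )
        resp.raise_for_status()
        runs = resp.json()["workflow_runs"]
        if not runs:
            break
        for run in runs:
            yield {
                "name": run["name"],
                "conclusion": run["conclusion"],
                "created_at": run["created_at"],
                "url": run["html_url"],
            }
        page += 1


if __name__ == "__main__":
    for record in export_history():
        print(json.dumps(record))  # one evidence record per line
```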