Under active development Content is continuously updated and improved

Guardrails & Responsible AI

Content filtering, PII detection, prompt injection protection, and responsible AI practices.

Under Construction: This guidance is being actively developed and verified. Content may change.

Authoritative Sources

Key guidance documents from authoritative organizations. Click to view the original source.

Bedrock Guardrails provides configurable safety and privacy safeguards across models. It supports content filters, denied topics, word filters, sensitive information filters, and grounding checks to help reduce harmful or irrelevant outputs in applications.

Configuration Examples(2)

·

Kwatra & Kaushik (Packt, 2024) address three categories of ethical risk in Bedrock workloads: veracity (hallucinations mitigated via prompt engineering, RAG, and temperature tuning), intellectual property (training-data provenance, content filtering, and watermarking), and safety/toxicity (guardrails, data curation, and responsible AI policies). The authors recommend combining Bedrock Guardrails with organizational responsible-AI governance frameworks.

Customer Configuration Responsibilities

Configuration tasks the customer owns in the shared responsibility model. Use the verification commands below to validate settings.

5. Guardrails Configuration

Apply content safety and privacy safeguards to model interactions.

Content filters

Configure input/output strength for content categories.

AI RMF MEASURE-2.1

Denied topics

Define disallowed topics with definitions and examples.

AI RMF MEASURE-2.4

Sensitive information filters

Configure PII detection and custom regex patterns.

AI RMF MANAGE-2.2

Word filters

Block specific terms and managed word lists.

AI RMF MEASURE-2.4

Contextual grounding

Set grounding and relevance thresholds for RAG outputs.

AI RMF MEASURE-2.1

Blocked messaging

Define custom responses for blocked inputs or outputs.

AI RMF GOVERN-2.2

Guardrail enforcement

Configure account or organization enforcement of guardrails.

AI RMF GOVERN-1.2

Cross-account guardrail sharing

Use resource-based policies to share guardrails across accounts.

AC-3 AC-6

6. Application Security (Prompt Injection Protection)

Secure your application code and prompt handling around Bedrock.

Input validation

Validate and sanitize user input before invoking Bedrock.

SI-10

Secure coding practices

Use parameterized queries and avoid unsafe string concatenation.

SA-11

Security testing

Perform SAST, DAST, and penetration testing for AI workloads.

RA-5 SA-11

Agent pre-processing prompts

Tune pre-processing templates to classify and sanitize inputs.

AI RMF MANAGE-2.2

System prompts and scope

Define what agents can and cannot do in system prompts.

AI RMF GOVERN-2.2

SDK/library updates

Keep Bedrock SDKs and dependencies current with patches.

SI-2

Verification Commands

Commands and queries for testing and verifying security configurations.

Guardrails

3 commands
List configured guardrails CLI
aws bedrock list-guardrails
Get guardrail configuration by ID CLI
aws bedrock get-guardrail --guardrail-identifier GUARDRAIL_ID
List enforced guardrail configurations CLI
aws bedrock list-enforced-guardrails-configuration