# AWS::Bedrock::Guardrail

Creates a guardrail to detect and filter harmful content in your generative AI application. Amazon Bedrock Guardrails provides the following safeguards (also known as policies) to detect and filter harmful content:

- *Content filters* - Detect and filter harmful text or image content in input prompts or model responses. Filtering is based on detection of certain predefined harmful content categories: Hate, Insults, Sexual, Violence, Misconduct, and Prompt Attack. You can also adjust the filter strength for each of these categories.
- *Denied topics* - Define a set of topics that are undesirable in the context of your application. The filter helps block them if detected in user queries or model responses.
- *Word filters* - Configure filters to help block undesirable words, phrases, and profanity (exact match). Such words can include offensive terms, competitor names, and so on.
- *Sensitive information filters* - Configure filters to help block or mask sensitive information, such as personally identifiable information (PII), or custom regex patterns in user inputs and model responses. Blocking or masking is based on probabilistic detection of sensitive information in standard formats for entities such as Social Security numbers, dates of birth, and addresses. You can also configure regular-expression-based detection of identifier patterns.
- *Contextual grounding check* - Help detect and filter hallucinations in model responses based on grounding in a source and relevance to the user query.

For more information, see [How Amazon Bedrock Guardrails works](https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-how.html).
```typescript
import { CfnGuardrail } from 'aws-cdk-lib/aws-bedrock';
```

Or use the module namespace:

```typescript
import * as bedrock from 'aws-cdk-lib/aws-bedrock';
// bedrock.CfnGuardrail
```

Configuration is passed to the constructor as `CfnGuardrailProps`:
| Property | Required | Type | Description |
| --- | --- | --- | --- |
| `blockedInputMessaging` | Required | `string` | The message to return when the guardrail blocks a prompt. |
| `blockedOutputsMessaging` | Required | `string` | The message to return when the guardrail blocks a model response. |
| `name` | Required | `string` | The name of the guardrail. |
| `automatedReasoningPolicyConfig` | Optional | `IResolvable \| AutomatedReasoningPolicyConfigProperty` | Configuration settings for integrating Automated Reasoning policies with Amazon Bedrock Guardrails. |
| `contentPolicyConfig` | Optional | `IResolvable \| ContentPolicyConfigProperty` | The content filter policies to configure for the guardrail. |
| `contextualGroundingPolicyConfig` | Optional | `IResolvable \| ContextualGroundingPolicyConfigProperty` | The contextual grounding policy configuration for the guardrail. |
| `crossRegionConfig` | Optional | `IResolvable \| GuardrailCrossRegionConfigProperty` | The system-defined guardrail profile that you're using with your guardrail. Guardrail profiles define the destination AWS Regions where guardrail inference requests can be automatically routed, which helps maintain guardrail performance and reliability when demand increases. For more information, see the [Amazon Bedrock User Guide](https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-cross-region.html). |
| `description` | Optional | `string` | A description of the guardrail. |
| `kmsKeyArn` | Optional | `string` | The ARN of the AWS KMS key that you use to encrypt the guardrail. |
| `sensitiveInformationPolicyConfig` | Optional | `IResolvable \| SensitiveInformationPolicyConfigProperty` | The sensitive information policy to configure for the guardrail. |
| `tags` | Optional | `CfnTag[]` | The tags that you want to attach to the guardrail. |
| `topicPolicyConfig` | Optional | `IResolvable \| TopicPolicyConfigProperty` | The word policy to configure for the guardrail. |
| `wordPolicyConfig` | Optional | `IResolvable \| WordPolicyConfigProperty` | The word policy to configure for the guardrail. |
This L1 construct maps directly to the `AWS::Bedrock::Guardrail` CloudFormation resource type.