
converse

Operation

converse async

converse(input: ConverseInput, plugins: list[Plugin] | None = None) -> ConverseOperationOutput

Sends messages to the specified Amazon Bedrock model. Converse provides a consistent interface that works with all models that support messages. This allows you to write code once and use it with different models. If a model has unique inference parameters, you can also pass those unique parameters to the model.

Amazon Bedrock doesn't store any text, images, or documents that you provide as content. The data is only used to generate the response.

You can submit a prompt by including it in the messages field, specifying the modelId of a foundation model or inference profile to run inference on it, and including any other fields that are relevant to your use case.
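As a sketch, the request described above looks like the following, shown as a plain mapping using the API's field names (with this SDK, the same fields are set on a `ConverseInput`). The model ID is only an example; substitute the model or inference profile you actually use.

```python
# Minimal Converse request shape: a prompt in `messages` plus a `modelId`.
# The model ID below is an example; replace it with your model or
# inference profile ID/ARN.
request = {
    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
    "messages": [
        {"role": "user", "content": [{"text": "Summarize the water cycle."}]}
    ],
    "inferenceConfig": {"maxTokens": 256, "temperature": 0.5},
}
```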

You can also submit a prompt from Prompt management by specifying the ARN of the prompt version and including a map of variables to values in the promptVariables field. You can append more messages to the prompt by using the messages field. If you use a prompt from Prompt management, you can't include the following fields in the request: additionalModelRequestFields, inferenceConfig, system, or toolConfig. Instead, these fields must be defined through Prompt management. For more information, see Use a prompt from Prompt management.
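A sketch of the Prompt-management variant follows; the prompt-version ARN and the `topic` variable are placeholders for illustration.

```python
# Running a managed prompt: modelId carries the prompt-version ARN and
# promptVariables fills in its variables. The ARN and "topic" variable
# name are placeholders.
request = {
    "modelId": "arn:aws:bedrock:us-east-1:111122223333:prompt/PROMPT_ID:1",
    "promptVariables": {"topic": {"text": "renewable energy"}},
    # inferenceConfig, system, toolConfig, and additionalModelRequestFields
    # cannot be set here; they must come from Prompt management.
}
```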

For information about the Converse API, see Use the Converse API in the Amazon Bedrock User Guide. To use a guardrail, see Use a guardrail with the Converse API in the Amazon Bedrock User Guide. To use a tool with a model, see Tool use (Function calling) in the Amazon Bedrock User Guide.

For example code, see Converse API examples in the Amazon Bedrock User Guide.

This operation requires permission for the bedrock:InvokeModel action.

Warning

To deny all inference access to resources that you specify in the modelId field, you need to deny access to the bedrock:InvokeModel and bedrock:InvokeModelWithResponseStream actions. Doing this also denies access to the resource through the base inference actions (InvokeModel and InvokeModelWithResponseStream). For more information see Deny access for inference on specific models.
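For illustration, a deny statement covering both actions might look like the following, expressed as a Python dict; the resource ARN uses a placeholder model ID.

```python
# Sketch of an IAM deny statement blocking both inference actions on a
# specific model. MODEL_ID is a placeholder.
deny_statement = {
    "Effect": "Deny",
    "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream",
    ],
    "Resource": "arn:aws:bedrock:*::foundation-model/MODEL_ID",
}
```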

For troubleshooting some of the common errors you might encounter when using the Converse API, see Troubleshooting Amazon Bedrock API Error Codes in the Amazon Bedrock User Guide.

Parameters:

input: ConverseInput (required)
    An instance of ConverseInput.

plugins: list[Plugin] | None (default: None)
    A list of callables that modify the configuration dynamically. Changes made by these plugins only apply for the duration of the operation execution and will not affect any other operation invocations.
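Each plugin runs against a deepcopy of the client config, so its changes are scoped to a single invocation. A minimal sketch with a stand-in config class (the real config's attributes are those used in the source code below, such as `interceptors` and `retry_strategy`):

```python
from copy import deepcopy
from dataclasses import dataclass, field

@dataclass
class _Config:  # stand-in for the SDK's client config
    interceptors: list = field(default_factory=list)

def audit_plugin(config):
    # A plugin is just a callable that mutates the config it receives.
    config.interceptors.append("audit-interceptor")

base = _Config()
per_op = deepcopy(base)   # the client copies the config per operation
audit_plugin(per_op)      # the base config is untouched
```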

Returns:

ConverseOperationOutput
    An instance of ConverseOperationOutput.

Source code in src/aws_sdk_bedrock_runtime/client.py
async def converse(
    self, input: ConverseInput, plugins: list[Plugin] | None = None
) -> ConverseOperationOutput:
    """Sends messages to the specified Amazon Bedrock model. `Converse`
    provides a consistent interface that works with all models that support
    messages. This allows you to write code once and use it with different
    models. If a model has unique inference parameters, you can also pass
    those unique parameters to the model.

    Amazon Bedrock doesn't store any text, images, or documents that you
    provide as content. The data is only used to generate the response.

    You can submit a prompt by including it in the `messages` field,
    specifying the `modelId` of a foundation model or inference profile to
    run inference on it, and including any other fields that are relevant to
    your use case.

    You can also submit a prompt from Prompt management by specifying the
    ARN of the prompt version and including a map of variables to values in
    the `promptVariables` field. You can append more messages to the prompt
    by using the `messages` field. If you use a prompt from Prompt
    management, you can't include the following fields in the request:
    `additionalModelRequestFields`, `inferenceConfig`, `system`, or
    `toolConfig`. Instead, these fields must be defined through Prompt
    management. For more information, see [Use a prompt from Prompt
    management](https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-management-use.html).

    For information about the Converse API, see *Use the Converse API* in
    the *Amazon Bedrock User Guide*. To use a guardrail, see *Use a
    guardrail with the Converse API* in the *Amazon Bedrock User Guide*. To
    use a tool with a model, see *Tool use (Function calling)* in the
    *Amazon Bedrock User Guide*.

    For example code, see *Converse API examples* in the *Amazon Bedrock
    User Guide*.

    This operation requires permission for the `bedrock:InvokeModel` action.

    Warning:
        To deny all inference access to resources that you specify in the
        modelId field, you need to deny access to the `bedrock:InvokeModel` and
        `bedrock:InvokeModelWithResponseStream` actions. Doing this also denies
        access to the resource through the base inference actions
        ([InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html)
        and
        [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html)).
        For more information see [Deny access for inference on specific
        models](https://docs.aws.amazon.com/bedrock/latest/userguide/security_iam_id-based-policy-examples.html#security_iam_id-based-policy-examples-deny-inference).

    For troubleshooting some of the common errors you might encounter when
    using the `Converse` API, see [Troubleshooting Amazon Bedrock API Error
    Codes](https://docs.aws.amazon.com/bedrock/latest/userguide/troubleshooting-api-error-codes.html)
    in the Amazon Bedrock User Guide.

    Args:
        input:
            An instance of `ConverseInput`.
        plugins:
            A list of callables that modify the configuration dynamically.
            Changes made by these plugins only apply for the duration of the
            operation execution and will not affect any other operation
            invocations.

    Returns:
        An instance of `ConverseOperationOutput`.
    """
    operation_plugins: list[Plugin] = []
    if plugins:
        operation_plugins.extend(plugins)
    config = deepcopy(self._config)
    for plugin in operation_plugins:
        plugin(config)
    if config.protocol is None or config.transport is None:
        raise ExpectationNotMetError(
            "protocol and transport MUST be set on the config to make calls."
        )
    pipeline = RequestPipeline(protocol=config.protocol, transport=config.transport)
    call = ClientCall(
        input=input,
        operation=CONVERSE,
        context=TypedProperties({"config": config}),
        interceptor=InterceptorChain(config.interceptors),
        auth_scheme_resolver=config.auth_scheme_resolver,
        supported_auth_schemes=config.auth_schemes,
        endpoint_resolver=config.endpoint_resolver,
        retry_strategy=config.retry_strategy,
    )

    return await pipeline(call)

Input

ConverseInput dataclass

Dataclass for ConverseInput structure.

Source code in src/aws_sdk_bedrock_runtime/models.py
@dataclass(kw_only=True)
class ConverseInput:
    """Dataclass for ConverseInput structure."""

    model_id: str | None = None
    """Specifies the model or throughput with which to run inference, or the
    prompt resource to use in inference. The value depends on the resource
    that you use:

    - If you use a base model, specify the model ID or its ARN. For a list
      of model IDs for base models, see [Amazon Bedrock base model IDs
      (on-demand
      throughput)](https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html#model-ids-arns)
      in the Amazon Bedrock User Guide.

    - If you use an inference profile, specify the inference profile ID or
      its ARN. For a list of inference profile IDs, see [Supported Regions
      and models for cross-region
      inference](https://docs.aws.amazon.com/bedrock/latest/userguide/cross-region-inference-support.html)
      in the Amazon Bedrock User Guide.

    - If you use a provisioned model, specify the ARN of the Provisioned
      Throughput. For more information, see [Run inference using a
      Provisioned
      Throughput](https://docs.aws.amazon.com/bedrock/latest/userguide/prov-thru-use.html)
      in the Amazon Bedrock User Guide.

    - If you use a custom model, first purchase Provisioned Throughput for
      it. Then specify the ARN of the resulting provisioned model. For more
      information, see [Use a custom model in Amazon
      Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/model-customization-use.html)
      in the Amazon Bedrock User Guide.

    - To include a prompt that was defined in [Prompt
      management](https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-management.html),
      specify the ARN of the prompt version to use.

    The Converse API doesn't support [imported
    models](https://docs.aws.amazon.com/bedrock/latest/userguide/model-customization-import-model.html).
    """

    messages: list[Message] | None = None
    """The messages that you want to send to the model."""

    system: list[SystemContentBlock] | None = None
    """A prompt that provides instructions or context to the model about the
    task it should perform, or the persona it should adopt during the
    conversation.
    """

    inference_config: InferenceConfiguration | None = None
    """Inference parameters to pass to the model. `Converse` and
    `ConverseStream` support a base set of inference parameters. If you need
    to pass additional parameters that the model supports, use the
    `additionalModelRequestFields` request field.
    """

    tool_config: ToolConfiguration | None = None
    """Configuration information for the tools that the model can use when
    generating a response.

    For information about models that support tool use, see [Supported
    models and model
    features](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html#conversation-inference-supported-models-features).
    """

    guardrail_config: GuardrailConfiguration | None = None
    """Configuration information for a guardrail that you want to use in the
    request. If you include `guardContent` blocks in the `content` field in
    the `messages` field, the guardrail operates only on those messages. If
    you include no `guardContent` blocks, the guardrail operates on all
    messages in the request body and in any included prompt resource.
    """

    additional_model_request_fields: Document | None = None
    """Additional inference parameters that the model supports, beyond the base
    set of inference parameters that `Converse` and `ConverseStream` support
    in the `inferenceConfig` field. For more information, see [Model
    parameters](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html).
    """

    prompt_variables: dict[str, PromptVariableValues] | None = field(
        repr=False, default=None
    )
    """Contains a map of variables in a prompt from Prompt management to
    objects containing the values to fill in for them when running model
    invocation. This field is ignored if you don't specify a prompt
    resource in the `modelId` field.
    """

    additional_model_response_field_paths: list[str] | None = None
    """Additional model parameters field paths to return in the response.
    `Converse` and `ConverseStream` return the requested fields as a JSON
    Pointer object in the `additionalModelResponseFields` field. The
    following is example JSON for `additionalModelResponseFieldPaths`.

    `[ "/stop_sequence" ]`

    For information about the JSON Pointer syntax, see the [Internet
    Engineering Task Force
    (IETF)](https://datatracker.ietf.org/doc/html/rfc6901) documentation.

    `Converse` and `ConverseStream` reject an empty JSON Pointer or
    incorrectly structured JSON Pointer with a `400` error code. If the JSON
    Pointer is valid, but the requested field is not in the model response,
    it is ignored by `Converse`.
    """

    request_metadata: dict[str, str] | None = field(repr=False, default=None)
    """Key-value pairs that you can use to filter invocation logs."""

    performance_config: PerformanceConfiguration | None = None
    """Model performance settings for the request."""

    def serialize(self, serializer: ShapeSerializer):
        serializer.write_struct(_SCHEMA_CONVERSE_INPUT, self)

    def serialize_members(self, serializer: ShapeSerializer):
        if self.model_id is not None:
            serializer.write_string(
                _SCHEMA_CONVERSE_INPUT.members["modelId"], self.model_id
            )

        if self.messages is not None:
            _serialize_messages(
                serializer, _SCHEMA_CONVERSE_INPUT.members["messages"], self.messages
            )

        if self.system is not None:
            _serialize_system_content_blocks(
                serializer, _SCHEMA_CONVERSE_INPUT.members["system"], self.system
            )

        if self.inference_config is not None:
            serializer.write_struct(
                _SCHEMA_CONVERSE_INPUT.members["inferenceConfig"], self.inference_config
            )

        if self.tool_config is not None:
            serializer.write_struct(
                _SCHEMA_CONVERSE_INPUT.members["toolConfig"], self.tool_config
            )

        if self.guardrail_config is not None:
            serializer.write_struct(
                _SCHEMA_CONVERSE_INPUT.members["guardrailConfig"], self.guardrail_config
            )

        if self.additional_model_request_fields is not None:
            serializer.write_document(
                _SCHEMA_CONVERSE_INPUT.members["additionalModelRequestFields"],
                self.additional_model_request_fields,
            )

        if self.prompt_variables is not None:
            _serialize_prompt_variable_map(
                serializer,
                _SCHEMA_CONVERSE_INPUT.members["promptVariables"],
                self.prompt_variables,
            )

        if self.additional_model_response_field_paths is not None:
            _serialize_additional_model_response_field_paths(
                serializer,
                _SCHEMA_CONVERSE_INPUT.members["additionalModelResponseFieldPaths"],
                self.additional_model_response_field_paths,
            )

        if self.request_metadata is not None:
            _serialize_request_metadata(
                serializer,
                _SCHEMA_CONVERSE_INPUT.members["requestMetadata"],
                self.request_metadata,
            )

        if self.performance_config is not None:
            serializer.write_struct(
                _SCHEMA_CONVERSE_INPUT.members["performanceConfig"],
                self.performance_config,
            )

    @classmethod
    def deserialize(cls, deserializer: ShapeDeserializer) -> Self:
        return cls(**cls.deserialize_kwargs(deserializer))

    @classmethod
    def deserialize_kwargs(cls, deserializer: ShapeDeserializer) -> dict[str, Any]:
        kwargs: dict[str, Any] = {}

        def _consumer(schema: Schema, de: ShapeDeserializer) -> None:
            match schema.expect_member_index():
                case 0:
                    kwargs["model_id"] = de.read_string(
                        _SCHEMA_CONVERSE_INPUT.members["modelId"]
                    )

                case 1:
                    kwargs["messages"] = _deserialize_messages(
                        de, _SCHEMA_CONVERSE_INPUT.members["messages"]
                    )

                case 2:
                    kwargs["system"] = _deserialize_system_content_blocks(
                        de, _SCHEMA_CONVERSE_INPUT.members["system"]
                    )

                case 3:
                    kwargs["inference_config"] = InferenceConfiguration.deserialize(de)

                case 4:
                    kwargs["tool_config"] = ToolConfiguration.deserialize(de)

                case 5:
                    kwargs["guardrail_config"] = GuardrailConfiguration.deserialize(de)

                case 6:
                    kwargs["additional_model_request_fields"] = de.read_document(
                        _SCHEMA_CONVERSE_INPUT.members["additionalModelRequestFields"]
                    )

                case 7:
                    kwargs["prompt_variables"] = _deserialize_prompt_variable_map(
                        de, _SCHEMA_CONVERSE_INPUT.members["promptVariables"]
                    )

                case 8:
                    kwargs["additional_model_response_field_paths"] = (
                        _deserialize_additional_model_response_field_paths(
                            de,
                            _SCHEMA_CONVERSE_INPUT.members[
                                "additionalModelResponseFieldPaths"
                            ],
                        )
                    )

                case 9:
                    kwargs["request_metadata"] = _deserialize_request_metadata(
                        de, _SCHEMA_CONVERSE_INPUT.members["requestMetadata"]
                    )

                case 10:
                    kwargs["performance_config"] = PerformanceConfiguration.deserialize(
                        de
                    )

                case _:
                    logger.debug("Unexpected member schema: %s", schema)

        deserializer.read_struct(_SCHEMA_CONVERSE_INPUT, consumer=_consumer)
        return kwargs

Attributes

additional_model_request_fields class-attribute instance-attribute
additional_model_request_fields: Document | None = None

Additional inference parameters that the model supports, beyond the base set of inference parameters that Converse and ConverseStream support in the inferenceConfig field. For more information, see Model parameters.
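A sketch of combining the base inferenceConfig with a model-specific field; `top_k` is an example of a parameter that only some models accept, so check your model's documented parameters before relying on it.

```python
# Base parameters go in inferenceConfig; anything model-specific goes in
# additionalModelRequestFields. MODEL_ID is a placeholder.
request = {
    "modelId": "MODEL_ID",
    "messages": [{"role": "user", "content": [{"text": "Hi"}]}],
    "inferenceConfig": {"temperature": 0.7},         # base parameters
    "additionalModelRequestFields": {"top_k": 200},  # model-specific extra
}
```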

additional_model_response_field_paths class-attribute instance-attribute
additional_model_response_field_paths: list[str] | None = None

Additional model parameters field paths to return in the response. Converse and ConverseStream return the requested fields as a JSON Pointer object in the additionalModelResponseFields field. The following is example JSON for additionalModelResponseFieldPaths.

[ "/stop_sequence" ]

For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation.

Converse and ConverseStream reject an empty JSON Pointer or incorrectly structured JSON Pointer with a 400 error code. If the JSON Pointer is valid, but the requested field is not in the model response, it is ignored by Converse.

guardrail_config class-attribute instance-attribute
guardrail_config: GuardrailConfiguration | None = None

Configuration information for a guardrail that you want to use in the request. If you include guardContent blocks in the content field in the messages field, the guardrail operates only on those messages. If you include no guardContent blocks, the guardrail operates on all messages in the request body and in any included prompt resource.

inference_config class-attribute instance-attribute
inference_config: InferenceConfiguration | None = None

Inference parameters to pass to the model. Converse and ConverseStream support a base set of inference parameters. If you need to pass additional parameters that the model supports, use the additionalModelRequestFields request field.

messages class-attribute instance-attribute
messages: list[Message] | None = None

The messages that you want to send to the model.

model_id class-attribute instance-attribute
model_id: str | None = None

Specifies the model or throughput with which to run inference, or the prompt resource to use in inference. The value depends on the resource that you use:

- If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.

- If you use an inference profile, specify the inference profile ID or its ARN. For a list of inference profile IDs, see Supported Regions and models for cross-region inference in the Amazon Bedrock User Guide.

- If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.

- If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.

- To include a prompt that was defined in Prompt management, specify the ARN of the prompt version to use.

The Converse API doesn't support imported models.

performance_config class-attribute instance-attribute
performance_config: PerformanceConfiguration | None = None

Model performance settings for the request.

prompt_variables class-attribute instance-attribute
prompt_variables: dict[str, PromptVariableValues] | None = field(repr=False, default=None)

Contains a map of variables in a prompt from Prompt management to objects containing the values to fill in for them when running model invocation. This field is ignored if you don't specify a prompt resource in the modelId field.

request_metadata class-attribute instance-attribute
request_metadata: dict[str, str] | None = field(repr=False, default=None)

Key-value pairs that you can use to filter invocation logs.

system class-attribute instance-attribute
system: list[SystemContentBlock] | None = None

A prompt that provides instructions or context to the model about the task it should perform, or the persona it should adopt during the conversation.

tool_config class-attribute instance-attribute
tool_config: ToolConfiguration | None = None

Configuration information for the tools that the model can use when generating a response.

For information about models that support tool use, see Supported models and model features.
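A sketch of a toolConfig shape follows; the `get_weather` tool is hypothetical, and the `toolSpec`/`inputSchema` layout follows the Converse API's tool-use structure.

```python
# One tool definition: a name, a description the model sees, and a JSON
# Schema for the tool's input. The weather tool here is hypothetical.
tool_config = {
    "tools": [
        {
            "toolSpec": {
                "name": "get_weather",
                "description": "Look up current weather for a city.",
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    }
                },
            }
        }
    ]
}
```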

Output

ConverseOperationOutput dataclass

Dataclass for ConverseOperationOutput structure.

Source code in src/aws_sdk_bedrock_runtime/models.py
@dataclass(kw_only=True)
class ConverseOperationOutput:
    """Dataclass for ConverseOperationOutput structure."""

    output: ConverseOutput
    """The result from the call to `Converse`."""

    stop_reason: str
    """The reason why the model stopped generating output."""

    usage: TokenUsage
    """The total number of tokens used in the call to `Converse`. The total
    includes the tokens input to the model and the tokens generated by the
    model.
    """

    metrics: ConverseMetrics
    """Metrics for the call to `Converse`."""

    additional_model_response_fields: Document | None = None
    """Additional fields in the response that are unique to the model."""

    trace: ConverseTrace | None = None
    """A trace object that contains information about the Guardrail behavior."""

    performance_config: PerformanceConfiguration | None = None
    """Model performance settings for the request."""

    def serialize(self, serializer: ShapeSerializer):
        serializer.write_struct(_SCHEMA_CONVERSE_OPERATION_OUTPUT, self)

    def serialize_members(self, serializer: ShapeSerializer):
        serializer.write_struct(
            _SCHEMA_CONVERSE_OPERATION_OUTPUT.members["output"], self.output
        )
        serializer.write_string(
            _SCHEMA_CONVERSE_OPERATION_OUTPUT.members["stopReason"], self.stop_reason
        )
        serializer.write_struct(
            _SCHEMA_CONVERSE_OPERATION_OUTPUT.members["usage"], self.usage
        )
        serializer.write_struct(
            _SCHEMA_CONVERSE_OPERATION_OUTPUT.members["metrics"], self.metrics
        )
        if self.additional_model_response_fields is not None:
            serializer.write_document(
                _SCHEMA_CONVERSE_OPERATION_OUTPUT.members[
                    "additionalModelResponseFields"
                ],
                self.additional_model_response_fields,
            )

        if self.trace is not None:
            serializer.write_struct(
                _SCHEMA_CONVERSE_OPERATION_OUTPUT.members["trace"], self.trace
            )

        if self.performance_config is not None:
            serializer.write_struct(
                _SCHEMA_CONVERSE_OPERATION_OUTPUT.members["performanceConfig"],
                self.performance_config,
            )

    @classmethod
    def deserialize(cls, deserializer: ShapeDeserializer) -> Self:
        return cls(**cls.deserialize_kwargs(deserializer))

    @classmethod
    def deserialize_kwargs(cls, deserializer: ShapeDeserializer) -> dict[str, Any]:
        kwargs: dict[str, Any] = {}

        def _consumer(schema: Schema, de: ShapeDeserializer) -> None:
            match schema.expect_member_index():
                case 0:
                    kwargs["output"] = _ConverseOutputDeserializer().deserialize(de)

                case 1:
                    kwargs["stop_reason"] = de.read_string(
                        _SCHEMA_CONVERSE_OPERATION_OUTPUT.members["stopReason"]
                    )

                case 2:
                    kwargs["usage"] = TokenUsage.deserialize(de)

                case 3:
                    kwargs["metrics"] = ConverseMetrics.deserialize(de)

                case 4:
                    kwargs["additional_model_response_fields"] = de.read_document(
                        _SCHEMA_CONVERSE_OPERATION_OUTPUT.members[
                            "additionalModelResponseFields"
                        ]
                    )

                case 5:
                    kwargs["trace"] = ConverseTrace.deserialize(de)

                case 6:
                    kwargs["performance_config"] = PerformanceConfiguration.deserialize(
                        de
                    )

                case _:
                    logger.debug("Unexpected member schema: %s", schema)

        deserializer.read_struct(_SCHEMA_CONVERSE_OPERATION_OUTPUT, consumer=_consumer)
        return kwargs

Attributes

additional_model_response_fields class-attribute instance-attribute
additional_model_response_fields: Document | None = None

Additional fields in the response that are unique to the model.

metrics instance-attribute
metrics: ConverseMetrics

Metrics for the call to Converse.

output instance-attribute
output: ConverseOutput

The result from the call to Converse.

performance_config class-attribute instance-attribute
performance_config: PerformanceConfiguration | None = None

Model performance settings for the request.

stop_reason instance-attribute
stop_reason: str

The reason why the model stopped generating output.

trace class-attribute instance-attribute
trace: ConverseTrace | None = None

A trace object that contains information about the Guardrail behavior.

usage instance-attribute
usage: TokenUsage

The total number of tokens used in the call to Converse. The total includes the tokens input to the model and the tokens generated by the model.
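Putting the output fields together, here is a sketch of reading the generated text from a result, assuming the common case where output holds an assistant message with a text content block (shown as a plain mapping; with this SDK the same fields live on the ConverseOperationOutput dataclass).

```python
# Hypothetical result illustrating the output, stopReason, and usage fields.
result = {
    "output": {
        "message": {
            "role": "assistant",
            "content": [{"text": "Hello!"}],
        }
    },
    "stopReason": "end_turn",
    "usage": {"inputTokens": 10, "outputTokens": 3, "totalTokens": 13},
}
text = result["output"]["message"]["content"][0]["text"]
```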

Errors