converse_stream¶
Operation¶
converse_stream async ¶
converse_stream(input: ConverseStreamInput, plugins: list[Plugin] | None = None) -> OutputEventStream[ConverseStreamOutput, ConverseStreamOperationOutput]
Sends messages to the specified Amazon Bedrock model and returns the
response in a stream. ConverseStream provides a consistent API that
works with all Amazon Bedrock models that support messages. This allows
you to write code once and use it with different models. Should a model
have unique inference parameters, you can also pass those unique
parameters to the model.
To find out if a model supports streaming, call
GetFoundationModel
and check the responseStreamingSupported field in the response.
Note
The AWS CLI doesn't support streaming operations in Amazon Bedrock,
including ConverseStream.
Amazon Bedrock doesn't store any text, images, or documents that you provide as content. The data is only used to generate the response.
You can submit a prompt by including it in the messages field,
specifying the modelId of a foundation model or inference profile to
run inference on it, and including any other fields that are relevant to
your use case.
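The call pattern can be sketched as follows. This is a minimal illustration, not a verified example: the client class name, its default construction, the `Message`/content-block shapes, and the `async for` iteration over the event stream are all assumptions about this package and may differ from the real API.

```python
import asyncio

async def stream_reply(prompt: str) -> None:
    # Imports are local so the sketch reads standalone without the SDK
    # installed. All names below except ConverseStreamInput and
    # converse_stream (documented on this page) are assumptions about
    # this package's exports.
    from aws_sdk_bedrock_runtime.client import BedrockRuntimeClient
    from aws_sdk_bedrock_runtime.models import ConverseStreamInput, Message

    client = BedrockRuntimeClient()  # assumed: default config and credential chain
    stream = await client.converse_stream(
        ConverseStreamInput(
            model_id="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example base-model ID
            messages=[Message(role="user", content=[{"text": prompt}])],  # content shape assumed
        )
    )
    # Assumed iteration pattern over the returned OutputEventStream.
    async for event in stream:
        print(event)
```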
You can also submit a prompt from Prompt management by specifying the
ARN of the prompt version and including a map of variables to values in
the promptVariables field. You can append more messages to the prompt
by using the messages field. If you use a prompt from Prompt
management, you can't include the following fields in the request:
additionalModelRequestFields, inferenceConfig, system, or
toolConfig. Instead, these fields must be defined through Prompt
management. For more information, see Use a prompt from Prompt
management.
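When `modelId` is a prompt-version ARN, the request omits `inferenceConfig`, `system`, and `toolConfig`, and supplies variable values instead. The following sketch shows the wire-level request shape; the ARN, account ID, and variable names are placeholders:

```python
# Wire-level shape of a ConverseStream request that runs a managed prompt.
# The ARN and the "topic" variable are placeholders for illustration.
request = {
    "modelId": "arn:aws:bedrock:us-east-1:111122223333:prompt/PROMPT_ID:1",
    "promptVariables": {
        # Each value is a PromptVariableValues object; "text" is its
        # documented variant in the Converse API.
        "topic": {"text": "renewable energy"},
    },
    # Extra turns may still be appended via "messages"; inferenceConfig,
    # system, and toolConfig must come from Prompt management instead.
    "messages": [
        {"role": "user", "content": [{"text": "Keep the answer brief."}]}
    ],
}
```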
For information about the Converse API, see Use the Converse API in the Amazon Bedrock User Guide. To use a guardrail, see Use a guardrail with the Converse API in the Amazon Bedrock User Guide. To use a tool with a model, see Tool use (Function calling) in the Amazon Bedrock User Guide.
For example code, see Conversation streaming example in the Amazon Bedrock User Guide.
This operation requires permission for the
bedrock:InvokeModelWithResponseStream action.
Warning
To deny all inference access to resources that you specify in the
modelId field, you need to deny access to the bedrock:InvokeModel and
bedrock:InvokeModelWithResponseStream actions. Doing this also denies
access to the resource through the base inference actions
(InvokeModel
and
InvokeModelWithResponseStream).
For more information, see Deny access for inference on specific
models.
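A deny statement of this form can be sketched as an IAM policy, built here as a Python dict for illustration; the Region and model ID in the resource ARN are placeholders:

```python
import json

# Illustrative IAM policy denying both inference actions on one model.
# The Region and MODEL_ID in the ARN are placeholders.
deny_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/MODEL_ID",
        }
    ],
}
print(json.dumps(deny_policy, indent=2))
```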
For troubleshooting some of the common errors you might encounter when
using the ConverseStream API, see Troubleshooting Amazon Bedrock API
Error Codes in the Amazon Bedrock User Guide.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input | ConverseStreamInput | An instance of ConverseStreamInput. | required |
| plugins | list[Plugin] \| None | A list of callables that modify the configuration dynamically. Changes made by these plugins only apply for the duration of the operation execution and will not affect any other operation invocations. | None |
Returns:

| Type | Description |
|---|---|
| OutputEventStream[ConverseStreamOutput, ConverseStreamOperationOutput] | An OutputEventStream that yields ConverseStreamOutput events and resolves to a ConverseStreamOperationOutput. |
Source code in src/aws_sdk_bedrock_runtime/client.py
Input¶
ConverseStreamInput dataclass ¶
Dataclass for ConverseStreamInput structure.
Source code in src/aws_sdk_bedrock_runtime/models.py
Attributes¶
additional_model_request_fields class-attribute instance-attribute ¶
additional_model_request_fields: Document | None = None
Additional inference parameters that the model supports, beyond the base
set of inference parameters that Converse and ConverseStream support
in the inferenceConfig field. For more information, see Model
parameters.
additional_model_response_field_paths class-attribute instance-attribute ¶
additional_model_response_field_paths: list[str] | None = None
Additional model parameters field paths to return in the response.
Converse and ConverseStream return the requested fields as a JSON
Pointer object in the additionalModelResponseFields field. The
following is example JSON for additionalModelResponseFieldPaths.
[ "/stop_sequence" ]
For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation.
Converse and ConverseStream reject an empty JSON Pointer or an
incorrectly structured JSON Pointer with a 400 error code. If the JSON
Pointer is valid, but the requested field is not in the model response,
it is ignored by Converse.
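To illustrate how a path such as `/stop_sequence` selects a field, here is a small stand-alone resolver. It is not part of the SDK, and it implements only a simplified subset of RFC 6901 (no `~`-escape handling):

```python
def resolve_pointer(doc, pointer: str):
    """Resolve a simplified JSON Pointer (RFC 6901) against a parsed document."""
    if pointer == "":
        return doc
    if not pointer.startswith("/"):
        raise ValueError("a non-empty JSON Pointer must start with '/'")
    node = doc
    for token in pointer[1:].split("/"):
        if isinstance(node, list):
            node = node[int(token)]  # numeric tokens index into arrays
        else:
            node = node[token]       # other tokens index into objects
    return node

# A raw model response containing a provider-specific field:
raw = {"stop_sequence": "###", "content": [{"text": "hello"}]}
print(resolve_pointer(raw, "/stop_sequence"))   # prints ###
print(resolve_pointer(raw, "/content/0/text"))  # prints hello
```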
guardrail_config class-attribute instance-attribute ¶
guardrail_config: GuardrailStreamConfiguration | None = None
Configuration information for a guardrail that you want to use in the
request. If you include guardContent blocks in the content field in
the messages field, the guardrail operates only on those messages. If
you include no guardContent blocks, the guardrail operates on all
messages in the request body and in any included prompt resource.
inference_config class-attribute instance-attribute ¶
inference_config: InferenceConfiguration | None = None
Inference parameters to pass to the model. Converse and
ConverseStream support a base set of inference parameters. If you need
to pass additional parameters that the model supports, use the
additionalModelRequestFields request field.
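The split between the base parameter set and model-specific extras can be illustrated with the wire-level JSON shapes; the field names follow the Converse API request syntax, and `top_k` is an Anthropic-specific example that may not apply to your model:

```python
# Base inference parameters shared by Converse and ConverseStream
# (wire-level camelCase names from the Converse API request syntax).
inference_config = {
    "maxTokens": 512,
    "temperature": 0.7,
    "topP": 0.9,
    "stopSequences": ["###"],
}

# Anything outside the base set goes in additionalModelRequestFields;
# top_k here is an Anthropic-specific example parameter.
additional_fields = {"top_k": 250}
```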
messages class-attribute instance-attribute ¶
messages: list[Message] | None = None
The messages that you want to send to the model.
model_id class-attribute instance-attribute ¶
model_id: str | None = None
Specifies the model or throughput with which to run inference, or the prompt resource to use in inference. The value depends on the resource that you use:
- If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
- If you use an inference profile, specify the inference profile ID or its ARN. For a list of inference profile IDs, see Supported Regions and models for cross-region inference in the Amazon Bedrock User Guide.
- If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
- If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.
- To include a prompt that was defined in Prompt management, specify the ARN of the prompt version to use.
The Converse API doesn't support imported models.
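The accepted `model_id` forms above can be summarized with illustrative values; every ID, Region, and account number below is a placeholder, not a recommendation:

```python
# Illustrative model_id values for each resource type; all IDs, Regions,
# and account numbers are placeholders.
model_id_examples = {
    "base model ID": "anthropic.claude-3-5-sonnet-20240620-v1:0",
    "inference profile ID": "us.anthropic.claude-3-5-sonnet-20240620-v1:0",
    "provisioned throughput ARN": (
        "arn:aws:bedrock:us-east-1:111122223333:provisioned-model/PM_ID"
    ),
    "prompt version ARN": (
        "arn:aws:bedrock:us-east-1:111122223333:prompt/PROMPT_ID:1"
    ),
}
```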
performance_config class-attribute instance-attribute ¶
performance_config: PerformanceConfiguration | None = None
Model performance settings for the request.
prompt_variables class-attribute instance-attribute ¶
prompt_variables: dict[str, PromptVariableValues] | None = field(repr=False, default=None)
Contains a map of variables in a prompt from Prompt management to
objects containing the values to fill in for them when running model
invocation. This field is ignored if you don't specify a prompt
resource in the modelId field.
request_metadata class-attribute instance-attribute ¶
request_metadata: dict[str, str] | None = field(repr=False, default=None)
Key-value pairs that you can use to filter invocation logs.
system class-attribute instance-attribute ¶
system: list[SystemContentBlock] | None = None
A prompt that provides instructions or context to the model about the task it should perform, or the persona it should adopt during the conversation.
tool_config class-attribute instance-attribute ¶
tool_config: ToolConfiguration | None = None
Configuration information for the tools that the model can use when generating a response.
For information about models that support streaming tool use, see Supported models and model features.
Output¶
This operation returns an OutputEventStream for server-to-client streaming.
Event Stream Structure¶
Output Event Type¶
Initial Response Structure¶
ConverseStreamOperationOutput dataclass ¶
Dataclass for ConverseStreamOperationOutput structure.
Source code in src/aws_sdk_bedrock_runtime/models.py