OpenAI API

RealtimeResponse

object

The response resource.

id (string)

The unique ID of the response.

object (string)

The object type, must be realtime.response.

Allowed values: realtime.response

status (string)

The final status of the response (completed, cancelled, failed, or
incomplete).

Allowed values: completed, cancelled, failed, incomplete

status_details (object)

Additional details about the status.

output (array[object])

The list of output items generated by the response.

metadata (object)

Set of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
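Because these limits apply server-side, it can be useful to check metadata client-side before sending. A minimal sketch of such a check (the helper name is hypothetical, not part of the API):

```python
def validate_metadata(metadata: dict) -> None:
    """Check the documented metadata limits: at most 16 pairs,
    string keys up to 64 characters, string values up to 512 characters."""
    if len(metadata) > 16:
        raise ValueError("metadata allows at most 16 key-value pairs")
    for key, value in metadata.items():
        if not isinstance(key, str) or len(key) > 64:
            raise ValueError(f"invalid metadata key: {key!r}")
        if not isinstance(value, str) or len(value) > 512:
            raise ValueError(f"invalid metadata value for key {key!r}")

# Within all limits, so this passes without raising.
validate_metadata({"customer_id": "cus_123", "topic": "support"})
```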

usage (object)

Usage statistics for the Response; these correspond to billing. A
Realtime API session maintains a conversation context and appends new
Items to the Conversation, so output from previous turns (text and
audio tokens) becomes input for later turns.

conversation_id (string)

Which conversation the response is added to, determined by the conversation
field in the response.create event. If auto, the response will be added to
the default conversation and the value of conversation_id will be an id like
conv_1234. If none, the response will not be added to any conversation and
the value of conversation_id will be null. If responses are being triggered
by server VAD, the response will be added to the default conversation, thus
the conversation_id will be an id like conv_1234.

voice (string)

The voice the model used to respond.
Current voice options are alloy, ash, ballad, coral, echo, sage,
shimmer, and verse.

Allowed values: alloy, ash, ballad, coral, echo, sage, shimmer, verse

modalities (array[string])

The set of modalities the model used to respond. If multiple modalities
are listed, the model picks one; for example, if modalities is
["text", "audio"], the model could respond in either text or audio.

Allowed values: text, audio

output_audio_format (string)

The format of output audio. Options are pcm16, g711_ulaw, or g711_alaw.

Allowed values: pcm16, g711_ulaw, g711_alaw

temperature (number)

Sampling temperature for the model, limited to [0.6, 1.2]. Defaults to 0.8.

max_output_tokens (one of)

The maximum number of output tokens for a single assistant response,
inclusive of tool calls, that was in effect for this response.

Variant 1: integer
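Putting the fields above together, a completed response payload can be inspected as a plain JSON object. The values below are illustrative, not captured from a live session, and the usage child fields shown are a simplified subset:

```python
# An illustrative realtime.response payload using the fields documented above.
response = {
    "id": "resp_001",
    "object": "realtime.response",
    "status": "completed",
    "status_details": None,
    "output": [],
    "metadata": None,
    "usage": {"total_tokens": 0, "input_tokens": 0, "output_tokens": 0},
    "conversation_id": "conv_1234",
    "voice": "alloy",
    "modalities": ["text", "audio"],
    "output_audio_format": "pcm16",
    "temperature": 0.8,
    "max_output_tokens": "inf",
}

# Basic shape checks against the documented schema.
assert response["object"] == "realtime.response"
assert response["status"] in {"completed", "cancelled", "failed", "incomplete"}
```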

RealtimeResponseCreateParams

object

Create a new Realtime response with these parameters.

modalities (array[string])

The set of modalities the model can respond with. To disable audio,
set this to ["text"].

Allowed values: text, audio

instructions (string)

The default system instructions (i.e. system message) prepended to model
calls. This field allows the client to guide the model on desired
responses. The model can be instructed on response content and format
(e.g. "be extremely succinct", "act friendly", "here are examples of good
responses") and on audio behavior (e.g. "talk quickly", "inject emotion
into your voice", "laugh frequently"). The instructions are not guaranteed
to be followed by the model, but they provide guidance to the model on the
desired behavior.

Note that the server sets default instructions which will be used if this
field is not set and are visible in the session.created event at the
start of the session.

voice (string)

The voice the model uses to respond. Voice cannot be changed during the
session once the model has responded with audio at least once. Current
voice options are alloy, ash, ballad, coral, echo, sage,
shimmer, and verse.

Allowed values: alloy, ash, ballad, coral, echo, sage, shimmer, verse

output_audio_format (string)

The format of output audio. Options are pcm16, g711_ulaw, or g711_alaw.

Allowed values: pcm16, g711_ulaw, g711_alaw

tools (array[object])

Tools (functions) available to the model.

tool_choice (string)

How the model chooses tools. Options are auto, none, required, or
specify a function, like {"type": "function", "function": {"name": "my_function"}}.
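As a concrete illustration, a tools entry paired with a tool_choice that forces a specific function might be built like this. The function name and parameter schema here are hypothetical, not part of the API:

```python
# Illustrative tools + tool_choice fragment for RealtimeResponseCreateParams.
# The function name "get_weather" and its schema are made up for this sketch.
params = {
    "tools": [
        {
            "type": "function",
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
    # Force the model to call the function named above.
    "tool_choice": {"type": "function", "function": {"name": "get_weather"}},
}
```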

temperature (number)

Sampling temperature for the model, limited to [0.6, 1.2]. Defaults to 0.8.

max_response_output_tokens (one of)

Maximum number of output tokens for a single assistant response,
inclusive of tool calls. Provide an integer between 1 and 4096 to
limit output tokens, or inf for the maximum available tokens for a
given model. Defaults to inf.

Variant 1: integer
conversation (one of)

Controls which conversation the response is added to. Currently supports
auto and none, with auto as the default value. The auto value
means that the contents of the response will be added to the default
conversation. Set this to none to create an out-of-band response that
will not add items to the default conversation.

Variant 1: string
metadata (object)

Set of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.

input (array[object])

The input items to add to the conversation context for this response.

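These parameters are carried inside a response.create client event. The sketch below builds an out-of-band request (conversation set to none) with a metadata tag; the event_id value is illustrative, and the resulting JSON string is what would be sent over the session's WebSocket:

```python
import json

# A response.create client event carrying RealtimeResponseCreateParams.
event = {
    "type": "response.create",
    "event_id": "evt_123",               # illustrative client-generated ID
    "response": {
        "modalities": ["text"],          # disable audio for this response
        "instructions": "Be extremely succinct.",
        "temperature": 0.7,              # must stay within [0.6, 1.2]
        "max_response_output_tokens": 200,
        "conversation": "none",          # out-of-band: leave the default conversation untouched
        "metadata": {"purpose": "summary"},
    },
}

payload = json.dumps(event)  # serialized form to send over the WebSocket
```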

RealtimeServerEventConversationCreated

object

Returned when a conversation is created. Emitted right after session creation.

event_id (string, required)

The unique ID of the server event.

type (string, required)

The event type, must be conversation.created.

Allowed values: conversation.created

conversation (object, required)

The conversation resource.


RealtimeServerEventConversationItemCreated

object

Returned when a conversation item is created. There are several scenarios that
produce this event:

  • The server is generating a Response, which if successful will produce
    either one or two Items, which will be of type message
    (role assistant) or type function_call.
  • The input audio buffer has been committed, either by the client or the
    server (in server_vad mode). The server will take the content of the
    input audio buffer and add it to a new user message Item.
  • The client has sent a conversation.item.create event to add a new Item
    to the Conversation.

event_id (string, required)

The unique ID of the server event.

type (string, required)

The event type, must be conversation.item.created.

Allowed values: conversation.item.created

previous_item_id (string, required)

The ID of the preceding item in the Conversation context; this allows the
client to understand the order of the conversation.

item (object, required)

The item to add to the conversation.

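A client-side handler for this event only needs the fields listed above: it can use previous_item_id to keep an ordered local view of the conversation. A minimal sketch with illustrative payloads (the item IDs are made up, not captured from a live session):

```python
# Maintain a client-side ordered view of the conversation, placing each
# new item after the item referenced by previous_item_id.
items: list[dict] = []

def handle_item_created(event: dict) -> None:
    assert event["type"] == "conversation.item.created"
    item = event["item"]
    prev = event["previous_item_id"]
    # Insert after the referenced item, or append if it is None / not found.
    for i, existing in enumerate(items):
        if existing.get("id") == prev:
            items.insert(i + 1, item)
            return
    items.append(item)

# Illustrative events.
handle_item_created({
    "type": "conversation.item.created",
    "event_id": "event_1",
    "previous_item_id": None,
    "item": {"id": "item_1", "type": "message", "role": "user"},
})
handle_item_created({
    "type": "conversation.item.created",
    "event_id": "event_2",
    "previous_item_id": "item_1",
    "item": {"id": "item_2", "type": "message", "role": "assistant"},
})
```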

RealtimeServerEventConversationItemDeleted

object

Returned when an item in the conversation is deleted by the client with a
conversation.item.delete event. This event is used to synchronize the
server’s understanding of the conversation history with the client’s view.

event_id (string, required)

The unique ID of the server event.

type (string, required)

The event type, must be conversation.item.deleted.

Allowed values: conversation.item.deleted

item_id (string, required)

The ID of the item that was deleted.
