OpenAI API

RealtimeServerEventConversationCreated

Returned when a conversation is created. Emitted right after session creation.

event_id (string, required)

The unique ID of the server event.

type (string, required)

The event type, must be conversation.created.

Allowed values: conversation.created

conversation (object, required)

The conversation resource.

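A payload for this event might look like the sketch below; the IDs and the abridged conversation body are illustrative assumptions, not real output.

```python
# Illustrative conversation.created payload; IDs are made up for this sketch,
# and the conversation resource is abridged to its ID.
event = {
    "event_id": "event_9101",          # unique ID of the server event
    "type": "conversation.created",    # fixed event type
    "conversation": {
        "id": "conv_001",              # hypothetical conversation ID
    },
}

assert event["type"] == "conversation.created"
```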

RealtimeServerEventConversationItemCreated

Returned when a conversation item is created. There are several scenarios that
produce this event:

  • The server is generating a Response, which, if successful, will produce
    either one or two Items of type message (role assistant) or type
    function_call.
  • The input audio buffer has been committed, either by the client or the
    server (in server_vad mode). The server will take the content of the
    input audio buffer and add it to a new user message Item.
  • The client has sent a conversation.item.create event to add a new Item
    to the Conversation.

event_id (string, required)

The unique ID of the server event.

type (string, required)

The event type, must be conversation.item.created.

Allowed values: conversation.item.created

previous_item_id (string, required)

The ID of the preceding item in the Conversation context, allowing the
client to understand the order of the conversation.

item (object, required)

The item to add to the conversation.

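Given the scenarios above, a client might guess which one produced an incoming item roughly as follows. This is a heuristic sketch only: the field names follow this reference, but the dispatch logic is an assumption, and a client-created message is not distinguishable here from a server-created one.

```python
def classify_item_created(event: dict) -> str:
    """Rough guess at which scenario produced a conversation.item.created
    event, based on the item's type and role. Heuristic only."""
    item = event["item"]
    if item["type"] == "function_call":
        # Scenario 1: a Response produced a function call.
        return "response_function_call"
    if item["type"] == "message" and item.get("role") == "assistant":
        # Scenario 1: a Response produced an assistant message.
        return "response_message"
    if item["type"] == "message" and item.get("role") == "user":
        # Scenario 2 (or a client-created user message): committed audio
        # became a user message Item.
        return "committed_audio_or_client_item"
    # Scenario 3: the client added some other item via conversation.item.create.
    return "client_created"

# Usage: an assistant message produced by a Response.
print(classify_item_created({"item": {"type": "message", "role": "assistant"}}))
# response_message
```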

RealtimeServerEventConversationItemDeleted

Returned when an item in the conversation is deleted by the client with a
conversation.item.delete event. This event is used to synchronize the
server’s understanding of the conversation history with the client’s view.

event_id (string, required)

The unique ID of the server event.

type (string, required)

The event type, must be conversation.item.deleted.

Allowed values: conversation.item.deleted

item_id (string, required)

The ID of the item that was deleted.

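To keep its view in sync, a client can drop the named item from its local history when this event arrives. A minimal sketch, assuming the local history is a list of dicts that each carry an "id" key:

```python
def apply_item_deleted(history: list, event: dict) -> list:
    """Remove the item named by a conversation.item.deleted event from a
    local copy of the conversation history."""
    assert event["type"] == "conversation.item.deleted"
    return [item for item in history if item["id"] != event["item_id"]]

history = [{"id": "item_a"}, {"id": "item_b"}]
deleted = {
    "type": "conversation.item.deleted",
    "event_id": "event_42",   # made-up ID for this sketch
    "item_id": "item_a",
}
print(apply_item_deleted(history, deleted))  # [{'id': 'item_b'}]
```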

RealtimeServerEventConversationItemInputAudioTranscriptionCompleted

This event is the output of audio transcription for user audio written to the
user audio buffer. Transcription begins when the input audio buffer is
committed by the client or server (in server_vad mode). Transcription runs
asynchronously with Response creation, so this event may come before or after
the Response events.

Realtime API models accept audio natively, and thus input transcription is a
separate process run on a separate ASR (Automatic Speech Recognition) model,
currently always whisper-1. Thus the transcript may diverge somewhat from
the model’s interpretation, and should be treated as a rough guide.

event_id (string, required)

The unique ID of the server event.

type (string, required)

The event type, must be
conversation.item.input_audio_transcription.completed.

Allowed values: conversation.item.input_audio_transcription.completed

item_id (string, required)

The ID of the user message item containing the audio.

content_index (integer, required)

The index of the content part containing the audio.

transcript (string, required)

The transcribed text.

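When the transcript arrives, a client can attach it to the matching content part of its local copy of the item, located by item_id and content_index. A sketch, assuming the local items are keyed by ID and each holds a "content" list:

```python
def attach_transcript(items: dict, event: dict) -> None:
    """Store the transcript from a ...input_audio_transcription.completed
    event on the content part it belongs to."""
    item = items[event["item_id"]]
    item["content"][event["content_index"]]["transcript"] = event["transcript"]

items = {"item_7": {"content": [{"type": "input_audio"}]}}
attach_transcript(items, {
    "type": "conversation.item.input_audio_transcription.completed",
    "item_id": "item_7",
    "content_index": 0,
    "transcript": "hello there",   # example transcript text
})
print(items["item_7"]["content"][0]["transcript"])  # hello there
```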

RealtimeServerEventConversationItemInputAudioTranscriptionFailed

Returned when input audio transcription is configured and a transcription
request for a user message fails. These events are kept separate from other
error events so that the client can identify the related Item.

event_id (string, required)

The unique ID of the server event.

type (string, required)

The event type, must be
conversation.item.input_audio_transcription.failed.

Allowed values: conversation.item.input_audio_transcription.failed

item_id (string, required)

The ID of the user message item.

content_index (integer, required)

The index of the content part containing the audio.

error (object, required)

Details of the transcription error.

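Because this event carries item_id and content_index, a client can tie the failure back to the specific audio part rather than treating it as a session-level error. A sketch, assuming a local item store as in the previous example; the error body shown (a "message" field) is a hypothetical shape for illustration:

```python
def handle_transcription_failed(items: dict, event: dict) -> str:
    """Record a transcription failure on the affected content part and
    return a short description tied to the item."""
    part = items[event["item_id"]]["content"][event["content_index"]]
    part["transcript_error"] = event["error"]
    return (f"transcription failed for {event['item_id']}: "
            f"{event['error'].get('message', 'unknown')}")

items = {"item_9": {"content": [{"type": "input_audio"}]}}
msg = handle_transcription_failed(items, {
    "type": "conversation.item.input_audio_transcription.failed",
    "item_id": "item_9",
    "content_index": 0,
    "error": {"message": "audio too short"},   # hypothetical error body
})
print(msg)  # transcription failed for item_9: audio too short
```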