RealtimeServerEventConversationCreated
object
Returned when a conversation is created. Emitted right after session creation.
The unique ID of the server event.
The event type, must be conversation.created.
Allowed values: conversation.created
The conversation resource.
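As an illustrative sketch only (the event and conversation IDs are placeholders, and the conversation resource is abbreviated to the fields named here), a conversation.created event might look like:

```python
# Illustrative conversation.created payload. IDs are placeholders; the
# conversation resource shape is abbreviated, not the full schema.
conversation_created = {
    "event_id": "event_9101",        # unique ID of the server event
    "type": "conversation.created",  # the only allowed value
    "conversation": {                # the conversation resource
        "id": "conv_001",
        "object": "realtime.conversation",
    },
}
```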
RealtimeServerEventConversationItemCreated
object
Returned when a conversation item is created. There are several scenarios that
produce this event:
- The server is generating a Response, which if successful will produce
  either one or two Items, which will be of type message
  (role assistant) or type function_call.
- The input audio buffer has been committed, either by the client or the
  server (in server_vad mode). The server will take the content of the
  input audio buffer and add it to a new user message Item.
- The client has sent a conversation.item.create event to add a new Item
  to the Conversation.
The unique ID of the server event.
The event type, must be conversation.item.created.
Allowed values: conversation.item.created
The ID of the preceding item in the Conversation context; this allows the
client to understand the order of the conversation.
The item to add to the conversation.
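A sketch of the payload for the input-audio-commit scenario above (IDs are placeholders, and the item's content structure is an assumption beyond the fields this doc names):

```python
# Illustrative conversation.item.created payload for a user message Item
# created from a committed input audio buffer. IDs are placeholders.
item_created = {
    "event_id": "event_1920",
    "type": "conversation.item.created",
    "previous_item_id": "msg_002",  # preceding item, for ordering
    "item": {                        # the item added to the conversation
        "id": "msg_003",
        "type": "message",
        "role": "user",
        # Content shape is an assumption; transcript may arrive later via
        # the separate input_audio_transcription events.
        "content": [{"type": "input_audio", "transcript": None}],
    },
}
```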
RealtimeServerEventConversationItemDeleted
object
Returned when an item in the conversation is deleted by the client with a
conversation.item.delete event. This event is used to synchronize the
server’s understanding of the conversation history with the client’s view.
The unique ID of the server event.
The event type, must be conversation.item.deleted.
Allowed values: conversation.item.deleted
The ID of the item that was deleted.
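This event carries only the three fields above, so a sketch is short (IDs are placeholders):

```python
# Illustrative conversation.item.deleted payload; IDs are placeholders.
item_deleted = {
    "event_id": "event_2728",
    "type": "conversation.item.deleted",
    "item_id": "msg_005",  # the item that was deleted
}
```

A client mirroring the conversation history would remove the item with this `item_id` from its local view on receipt.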
RealtimeServerEventConversationItemInputAudioTranscriptionCompleted
object
This event is the output of audio transcription for user audio written to the
user audio buffer. Transcription begins when the input audio buffer is
committed by the client or server (in server_vad mode). Transcription runs
asynchronously with Response creation, so this event may come before or after
the Response events.
Realtime API models accept audio natively, and thus input transcription is a
separate process run on a separate ASR (Automatic Speech Recognition) model,
currently always whisper-1. Thus the transcript may diverge somewhat from
the model’s interpretation, and should be treated as a rough guide.
The unique ID of the server event.
The event type, must be
conversation.item.input_audio_transcription.completed.
Allowed values: conversation.item.input_audio_transcription.completed
The ID of the user message item containing the audio.
The index of the content part containing the audio.
The transcribed text.
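Putting the fields above together, a sketch of the payload (IDs and transcript text are placeholders):

```python
# Illustrative input_audio_transcription.completed payload; IDs and the
# transcript text are placeholders.
transcription_completed = {
    "event_id": "event_2122",
    "type": "conversation.item.input_audio_transcription.completed",
    "item_id": "msg_003",    # user message item containing the audio
    "content_index": 0,      # content part containing the audio
    "transcript": "Hello, how are you?",
}
```

Because transcription runs asynchronously, a client should match this event back to the item via `item_id` and `content_index` rather than assuming it arrives in any fixed order relative to Response events.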
RealtimeServerEventConversationItemInputAudioTranscriptionFailed
object
Returned when input audio transcription is configured, and a transcription
request for a user message failed. These events are separate from other
error events so that the client can identify the related Item.
The unique ID of the server event.
The event type, must be
conversation.item.input_audio_transcription.failed.
Allowed values: conversation.item.input_audio_transcription.failed
The ID of the user message item.
The index of the content part containing the audio.
Details of the transcription error.
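A sketch of the failure payload; the fields of the nested error object are an assumption beyond "details of the transcription error", and IDs and the message text are placeholders:

```python
# Illustrative input_audio_transcription.failed payload. The error object's
# exact fields are an assumption; IDs and message text are placeholders.
transcription_failed = {
    "event_id": "event_2324",
    "type": "conversation.item.input_audio_transcription.failed",
    "item_id": "msg_003",    # the user message item
    "content_index": 0,      # content part containing the audio
    "error": {               # details of the transcription error
        "type": "transcription_error",
        "message": "The audio could not be transcribed.",
    },
}
```

Unlike generic error events, the `item_id` here lets the client tie the failure to the specific user message whose audio could not be transcribed.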