CreateTranscriptionResponseVerboseJson
object

Represents a verbose JSON transcription response returned by the model, based on the provided input.
The language of the input audio.
The duration of the input audio.
The transcribed text.
Extracted words and their corresponding timestamps.
Segments of the transcribed text and their corresponding details.
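A minimal sketch of consuming this shape in plain Python (no SDK). The sample values are illustrative, not real API output; the field names follow the descriptions above:

```python
import json

# A verbose_json transcription response, per the fields documented above.
raw = json.dumps({
    "language": "english",   # language of the input audio
    "duration": 8.47,        # duration of the input audio, in seconds
    "text": "Hello there, and welcome.",
    "words": [               # extracted words and their timestamps
        {"word": "Hello", "start": 0.0, "end": 0.4},
        {"word": "there", "start": 0.4, "end": 0.8},
    ],
    "segments": [            # segment-level details
        {"id": 0, "start": 0.0, "end": 2.1,
         "text": "Hello there, and welcome."},
    ],
})

resp = json.loads(raw)
# Word-level timestamps are handy for, e.g., karaoke-style highlighting:
for w in resp["words"]:
    print(f'{w["start"]:>5.2f}-{w["end"]:<5.2f} {w["word"]}')
```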
CreateTranslationRequest
object

The audio file object (not file name) to translate, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
ID of the model to use. Only whisper-1 (which is powered by our open source Whisper V2 model) is currently available.
Example: whisper-1
An optional text to guide the model’s style or continue a previous audio segment. The prompt should be in English.
The format of the output, in one of these options: json, text, srt, verbose_json, or vtt.
Allowed values: json, text, srt, verbose_json, vtt
Default: json
The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.
Default: 0
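A sketch of assembling the non-file form fields for this request, with the documented defaults and value checks. The helper name is hypothetical; only the parameters described above are used:

```python
# Hypothetical helper: build the multipart form fields for a translation
# request (the audio file itself would be attached separately).

ALLOWED_FORMATS = {"json", "text", "srt", "verbose_json", "vtt"}

def build_translation_form(model="whisper-1", prompt=None,
                           response_format="json", temperature=0):
    """Apply the documented defaults and validate the documented ranges."""
    if response_format not in ALLOWED_FORMATS:
        raise ValueError(
            f"response_format must be one of {sorted(ALLOWED_FORMATS)}")
    if not 0 <= temperature <= 1:
        raise ValueError("temperature must be between 0 and 1")
    form = {
        "model": model,
        "response_format": response_format,
        "temperature": str(temperature),  # form fields are sent as strings
    }
    if prompt is not None:
        form["prompt"] = prompt  # the prompt should be in English
    return form

form = build_translation_form(prompt="Transcript of a product demo.")
```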
CreateTranslationResponseJson
object

CreateTranslationResponseVerboseJson
object

The language of the output translation (always english).
The duration of the input audio.
The translated text.
Segments of the translated text and their corresponding details.
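Since srt is one of the available output formats, a short sketch of rendering the segments of a verbose response as SRT cues may help; the timestamp helper is a simplified illustration:

```python
def srt_time(seconds):
    """Format seconds as an SRT timestamp, HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def segments_to_srt(segments):
    """Turn segment dicts (start, end, text) into numbered SRT cues."""
    cues = []
    for i, seg in enumerate(segments, start=1):
        cues.append(f"{i}\n{srt_time(seg['start'])} --> "
                    f"{srt_time(seg['end'])}\n{seg['text'].strip()}\n")
    return "\n".join(cues)

# Sample segments, illustrative only:
srt = segments_to_srt([
    {"start": 0.0, "end": 2.5, "text": "Hello and welcome."},
    {"start": 2.5, "end": 5.0, "text": "Let's get started."},
])
```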
CreateUploadRequest
object

The name of the file to upload.
The intended purpose of the uploaded file.
See the documentation on File purposes.
Allowed values: assistants, batch, fine-tune, vision
The number of bytes in the file you are uploading.
The MIME type of the file.
This must fall within the supported MIME types for your file purpose. See the supported MIME types for assistants and vision.
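A sketch of validating these fields client-side before sending. The key names follow the descriptions above but should be checked against the live spec, and the MIME-type check is a simplified illustration, not the full per-purpose support matrix:

```python
# Allowed purposes, per the list documented above.
ALLOWED_PURPOSES = {"assistants", "batch", "fine-tune", "vision"}

def build_upload_request(filename, purpose, nbytes, mime_type):
    """Validate and assemble the fields of an upload request."""
    if purpose not in ALLOWED_PURPOSES:
        raise ValueError(f"purpose must be one of {sorted(ALLOWED_PURPOSES)}")
    if nbytes <= 0:
        raise ValueError("bytes must be a positive count")
    if "/" not in mime_type:
        raise ValueError("mime_type must look like type/subtype")
    return {
        "filename": filename,
        "purpose": purpose,
        "bytes": nbytes,       # number of bytes in the file being uploaded
        "mime_type": mime_type,
    }

req = build_upload_request("training.jsonl", "fine-tune", 2048,
                           "application/jsonl")
```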