CreateChatCompletionRequest
messages
One Of
>= 1 items
A list of messages comprising the conversation so far. The first variant is
the developer message: developer-provided instructions that the model should
follow, regardless of messages sent by the user. With o1 models and newer,
developer messages replace the previous system messages.
model
Any Of
ID of the model to use. See the model endpoint compatibility table for
details on which models work with the Chat API.
Example: gpt-4o
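A minimal request sketch, assuming the official openai Python SDK and an
OPENAI_API_KEY in the environment:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "developer", "content": "Answer in one short sentence."},
        {"role": "user", "content": "What is a chat completion?"},
    ],
)
print(completion.choices[0].message.content)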
store
Whether or not to store the output of this chat completion request for use
in our model distillation or evals products.
Default: false
reasoning_effort
o1 and o3-mini models only
Constrains effort on reasoning for reasoning models. Currently supported
values are low, medium, and high. Reducing reasoning effort can result in
faster responses and fewer tokens used on reasoning in a response.
Allowed values: low, medium, high
Default: medium
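For example, a sketch assuming a recent openai Python SDK and access to an
o1-series model, trading reasoning depth for latency:

from openai import OpenAI

client = OpenAI()
completion = client.chat.completions.create(
    model="o1",              # reasoning models only
    reasoning_effort="low",  # faster and cheaper than the default "medium"
    messages=[{"role": "user", "content": "Summarize the trade-offs of B-trees."}],
)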
metadata
Set of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
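A sketch combining metadata with store (the key names are hypothetical; the
assumption here is that stored completions are what you later filter on):

from openai import OpenAI

client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-4o",
    store=True,  # assumption: storing the completion makes it queryable later
    metadata={"ticket_id": "T-1234", "env": "staging"},  # hypothetical keys
    messages=[{"role": "user", "content": "Hello"}],
)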
frequency_penalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on
their existing frequency in the text so far, decreasing the model’s
likelihood to repeat the same line verbatim.
Default: 0
>= -2, <= 2
logit_bias
Modify the likelihood of specified tokens appearing in the completion.
Accepts a JSON object that maps tokens (specified by their token ID in the
tokenizer) to an associated bias value from -100 to 100. Mathematically,
the bias is added to the logits generated by the model prior to sampling.
The exact effect will vary per model, but values between -1 and 1 should
decrease or increase likelihood of selection; values like -100 or 100
should result in a ban or exclusive selection of the relevant token.
Default: null
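An illustrative sketch; the token ID below is a made-up placeholder (real
IDs come from the model's tokenizer, e.g. via the tiktoken library):

from openai import OpenAI

client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-4o",
    logit_bias={"15339": -100},  # hypothetical token ID; -100 effectively bans it
    messages=[{"role": "user", "content": "Say hello"}],
)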
logprobs
Whether to return log probabilities of the output tokens or not. If true,
returns the log probabilities of each output token returned in the
content of message.
Default: false
top_logprobs
An integer between 0 and 20 specifying the number of most likely tokens to
return at each token position, each with an associated log probability.
logprobs must be set to true if this parameter is used.
>= 0, <= 20
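A sketch showing both parameters together and how the result is read back
(openai Python SDK):

from openai import OpenAI

client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-4o",
    logprobs=True,   # required when top_logprobs is set
    top_logprobs=3,  # three most likely alternatives per position
    messages=[{"role": "user", "content": "The capital of France is"}],
)
first = completion.choices[0].logprobs.content[0]
print(first.token, first.logprob)  # chosen token and its log probability
for alt in first.top_logprobs:     # the three most likely candidates
    print(alt.token, alt.logprob)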
max_tokens
The maximum number of tokens that can be generated in the chat completion.
This value can be used to control costs for text generated via API.
This value is now deprecated in favor of max_completion_tokens, and is not
compatible with o1 series models.
max_completion_tokens
An upper bound for the number of tokens that can be generated for a
completion, including visible output tokens and reasoning tokens.
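A brief migration sketch (openai Python SDK):

from openai import OpenAI

client = OpenAI()
completion = client.chat.completions.create(
    model="o1",                 # o1-series models reject max_tokens
    max_completion_tokens=500,  # caps visible output plus reasoning tokens
    messages=[{"role": "user", "content": "Outline a migration plan."}],
)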
n
How many chat completion choices to generate for each input message. Note
that you will be charged based on the number of generated tokens across
all of the choices. Keep n as 1 to minimize costs.
Default: 1
>= 1, <= 128
Example: 1
modalities
Output types that you would like the model to generate for this request.
Most models are capable of generating text, which is the default:
["text"]
The gpt-4o-audio-preview model can also be used to generate audio. To
request that this model generate both text and audio responses, you can
use:
["text", "audio"]
Allowed values: text, audio
prediction
One Of
Configuration for a Predicted Output, which can greatly improve response
times when large parts of the model response are known ahead of time. This
is most common when you are regenerating a file with only minor changes to
most of the content.
Static predicted output content, such as the content of a text file that
is being regenerated.
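A sketch of regenerating a file with a predicted output (openai Python SDK;
the file name and the requested edit are stand-ins):

from openai import OpenAI

client = OpenAI()
existing_code = open("config.py").read()  # hypothetical file being regenerated
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Rename the variable port to http_port:\n" + existing_code,
    }],
    prediction={"type": "content", "content": existing_code},  # most of the reply is already known
)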
audio
Parameters for audio output. Required when audio output is requested with
modalities: ["audio"]. Learn more.
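A sketch requesting both text and audio (openai Python SDK; the voice and
format values are taken from the audio guide and may evolve):

import base64

from openai import OpenAI

client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-4o-audio-preview",
    modalities=["text", "audio"],
    audio={"voice": "alloy", "format": "wav"},
    messages=[{"role": "user", "content": "Say hello."}],
)
# The audio arrives base64-encoded on the message
wav_bytes = base64.b64decode(completion.choices[0].message.audio.data)
open("hello.wav", "wb").write(wav_bytes)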
presence_penalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on
whether they appear in the text so far, increasing the model’s likelihood
to talk about new topics.
Default: 0
>= -2, <= 2
response_format
One Of
An object specifying the format that the model must output.
Setting to { "type": "json_schema", "json_schema": {...} } enables
Structured Outputs, which ensures the model will match your supplied JSON
schema. Learn more in the Structured Outputs guide.
Setting to { "type": "json_object" } enables JSON mode, which ensures
the message the model generates is valid JSON.
Important: when using JSON mode, you must also instruct the model
to produce JSON yourself via a system or user message. Without this, the
model may generate an unending stream of whitespace until the generation
reaches the token limit, resulting in a long-running and seemingly “stuck”
request. Also note that the message content may be partially cut off if
finish_reason="length", which indicates the generation exceeded
max_tokens or the conversation exceeded the max context length.
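A Structured Outputs sketch (openai Python SDK; the schema name and fields
are hypothetical):

from openai import OpenAI

client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-4o",
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "city_info",  # hypothetical schema
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "population": {"type": "integer"},
                },
                "required": ["city", "population"],
                "additionalProperties": False,
            },
        },
    },
    messages=[{"role": "user", "content": "Largest city in France?"}],
)
print(completion.choices[0].message.content)  # JSON conforming to the schema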
seed
This feature is in Beta. If specified, our system will make a best effort
to sample deterministically, such that repeated requests with the same
seed and parameters should return the same result. Determinism is not
guaranteed, and you should refer to the system_fingerprint response
parameter to monitor changes in the backend.
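A reproducibility sketch (openai Python SDK):

from openai import OpenAI

client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-4o",
    seed=12345,     # same seed + same parameters -> best-effort identical output
    temperature=0,
    messages=[{"role": "user", "content": "Pick a random color."}],
)
# If system_fingerprint changes between calls, the backend changed and
# results may differ even with the same seed
print(completion.system_fingerprint, completion.choices[0].message.content)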
service_tier
Specifies the latency tier to use for processing the request. This
parameter is relevant for customers subscribed to the scale tier service:
- If set to ‘auto’, and the Project is Scale tier enabled, the system will
utilize scale tier credits until they are exhausted.
- If set to ‘auto’, and the Project is not Scale tier enabled, the request
will be processed using the default service tier with a lower uptime SLA
and no latency guarantee.
- If set to ‘default’, the request will be processed using the default
service tier with a lower uptime SLA and no latency guarantee.
- When not set, the default behavior is ‘auto’.
Allowed values: auto, default
Default: auto
stop
One Of
Up to 4 sequences where the API will stop generating further tokens.
Default: null
stream
If set, partial message deltas will be sent, like in ChatGPT. Tokens will
be sent as data-only server-sent events as they become available, with the
stream terminated by a data: [DONE] message. Example Python code.
Default: false
stream_options
Options for streaming response. Only set this when you set stream: true.
Default: null
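A streaming sketch (openai Python SDK):

from openai import OpenAI

client = OpenAI()
stream = client.chat.completions.create(
    model="gpt-4o",
    stream=True,
    stream_options={"include_usage": True},  # final chunk carries token usage
    messages=[{"role": "user", "content": "Write a haiku about rivers."}],
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
    if chunk.usage:  # set only on the last chunk when include_usage is enabled
        print("\ntokens:", chunk.usage.total_tokens)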
temperature
What sampling temperature to use, between 0 and 2. Higher values like 0.8
will make the output more random, while lower values like 0.2 will make it
more focused and deterministic.
We generally recommend altering this or top_p but not both.
Default: 1
>= 0, <= 2
Example: 1
top_p
An alternative to sampling with temperature, called nucleus sampling,
where the model considers the results of the tokens with top_p probability
mass. So 0.1 means only the tokens comprising the top 10% probability mass
are considered.
We generally recommend altering this or temperature but not both.
Default: 1
>= 0, <= 1
Example: 1
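A small sketch following the recommendation to adjust one sampling knob
only:

from openai import OpenAI

client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-4o",
    temperature=0.2,  # more focused output; top_p left at its default of 1
    messages=[{"role": "user", "content": "Name a use for a paperclip."}],
)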
tools
A list of tools the model may call. Currently, only functions are
supported as a tool. Use this to provide a list of functions the model may
generate JSON inputs for. A max of 128 functions are supported.
tool_choice
One Of
Controls which (if any) tool is called by the model.
none means the model will not call any tool and instead generates a
message.
auto means the model can pick between generating a message or calling one
or more tools.
required means the model must call one or more tools.
Specifying a particular tool via {"type": "function", "function": {"name":
"my_function"}} forces the model to call that tool.
none is the default when no tools are present. auto is the default if
tools are present.
Allowed values: none, auto, required
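A function-calling sketch (openai Python SDK; get_weather is a hypothetical
function):

from openai import OpenAI

client = OpenAI()
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical function
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]
completion = client.chat.completions.create(
    model="gpt-4o",
    tools=tools,
    tool_choice="auto",  # or {"type": "function", "function": {"name": "get_weather"}}
    messages=[{"role": "user", "content": "What is the weather in Paris?"}],
)
# Assumes the model chose to call the tool; check finish_reason in real code
call = completion.choices[0].message.tool_calls[0]
print(call.function.name, call.function.arguments)  # arguments is a JSON string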
user
A unique identifier representing your end-user, which can help OpenAI to
monitor and detect abuse. Learn more.
Example: user-1234
function_call
One Of
Deprecated in favor of tool_choice.
Controls which (if any) function is called by the model.
none means the model will not call a function and instead generates a
message.
auto means the model can pick between generating a message or calling a
function.
Specifying a particular function via {"name": "my_function"} forces the
model to call that function.
none is the default when no functions are present. auto is the default
if functions are present.
Allowed values: none, auto
functions
Deprecated in favor of tools.
A list of functions the model may generate JSON inputs for.
>= 1 items, <= 128 items