ChatCompletionStreamResponseDelta (object)
A chat completion delta generated by streamed model responses.
content: The contents of the chunk message.
function_call: Deprecated and replaced by tool_calls. The name and arguments of a function that should be called, as generated by the model.
role: The role of the author of this message. Allowed values: developer, system, user, assistant, tool
refusal: The refusal message generated by the model.
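A minimal sketch of consuming these deltas with the official openai Python SDK; the model name and prompt below are illustrative:

from openai import OpenAI

client = OpenAI()

# Stream a completion; each chunk carries a ChatCompletionStreamResponseDelta
# holding only the incremental fields (content, role, tool_calls, refusal).
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a haiku about rivers."}],
    stream=True,
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="", flush=True)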
ChatCompletionTokenLogprob (object)
token: The token.
logprob: The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely.
bytes: A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.
top_logprobs: List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned.
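A minimal sketch of reading these values with the official openai Python SDK, assuming logprobs were requested on the call; the model name and prompt are illustrative:

from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello."}],
    logprobs=True,
    top_logprobs=3,
)

# Each item is a ChatCompletionTokenLogprob with token, logprob, bytes,
# and the requested alternatives in top_logprobs.
for item in resp.choices[0].logprobs.content:
    print(item.token, item.logprob, item.bytes)
    for alt in item.top_logprobs:
        print("  alternative:", alt.token, alt.logprob)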
ChatCompletionTool (object)
type: The type of the tool. Currently, only function is supported. Allowed values: function
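A sketch of a ChatCompletionTool as it would be passed in the tools array; the function name, description, and JSON Schema below are hypothetical:

# Hypothetical function tool definition for illustration only.
get_weather_tool = {
    "type": "function",            # currently the only supported tool type
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
            },
            "required": ["city"],
        },
    },
}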
ChatCompletionToolChoiceOption
Controls which (if any) tool is called by the model.
none means the model will not call any tool and instead generates a message.
auto means the model can pick between generating a message or calling one or more tools.
required means the model must call one or more tools.
Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.
none is the default when no tools are present. auto is the default if tools are present.
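A sketch of these tool_choice values with the official openai Python SDK; the tool definition and model name are hypothetical (tool_choice="none" would suppress tool calls entirely):

from openai import OpenAI

client = OpenAI()

# Hypothetical tool used only to illustrate tool_choice.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
        },
    },
}
messages = [{"role": "user", "content": "What's the weather in Oslo?"}]

# Default when tools are present: the model may answer or call a tool.
client.chat.completions.create(
    model="gpt-4o-mini", messages=messages,
    tools=[weather_tool], tool_choice="auto",
)

# Require the model to call one or more tools.
client.chat.completions.create(
    model="gpt-4o-mini", messages=messages,
    tools=[weather_tool], tool_choice="required",
)

# Force a call to one specific tool.
client.chat.completions.create(
    model="gpt-4o-mini", messages=messages,
    tools=[weather_tool],
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
)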
ChunkingStrategyRequestParam (object)
The chunking strategy used to chunk the file(s). If not set, will use the auto strategy.
One of:
type: Always auto. Allowed values: auto
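A sketch of passing the auto strategy when attaching a file to a vector store. This assumes a recent openai Python SDK where vector stores are exposed at client.vector_stores (older releases use client.beta.vector_stores), and the IDs are placeholders:

from openai import OpenAI

client = OpenAI()

# Attach a previously uploaded file and let the server choose chunk sizes.
client.vector_stores.files.create(
    vector_store_id="vs_placeholder",
    file_id="file-placeholder",
    chunking_strategy={"type": "auto"},  # auto strategy: type is always "auto"
)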