OpenAI API

Creates a completion for the provided prompt and parameters.

post
https://api.openai.com/v1/completions

Body

application/json

CreateCompletionRequest

model Any Of
required

ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.

Variant 1 string
prompt One Of
required

The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.

Note that <|endoftext|> is the document separator that the model sees during training, so if a prompt is not specified the model will generate as if from the beginning of a new document.

Default:<|endoftext|>

Variant 1 string

Default:

Example:This is a test.

best_of integer | null

Generates best_of completions server-side and returns the “best” (the one with the highest log probability per token). Results cannot be streamed.

When used with n, best_of controls the number of candidate completions and n specifies how many to return; best_of must be greater than n.

Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop.

Default:1

>= 0, <= 20
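To make the quota warning concrete: in the worst case a request samples best_of × max_tokens completion tokens per prompt. A client-side sanity check, sketched here against the documented ranges (not official client code):

```python
def validate_sampling(n: int = 1, best_of: int = 1, max_tokens: int = 16) -> dict:
    """Validate the documented interaction between n and best_of."""
    if not 1 <= n <= 128:
        raise ValueError("n must be between 1 and 128")
    if not 0 <= best_of <= 20:
        raise ValueError("best_of must be between 0 and 20")
    if best_of < n:
        # Per the docs, best_of must be at least as large as n.
        raise ValueError("best_of must not be less than n")
    # Worst case, the server generates best_of * max_tokens completion
    # tokens, all of which count against your quota.
    return {"n": n, "best_of": best_of, "worst_case_tokens": best_of * max_tokens}
```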

echo boolean | null

Echo back the prompt in addition to the completion.

Default:false

frequency_penalty number | null

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.

See more information about frequency and presence penalties.

Default:0

>= -2, <= 2

logit_bias object | null

Modify the likelihood of specified tokens appearing in the completion.

Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this tokenizer tool to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.

As an example, you can pass {"50256": -100} to prevent the <|endoftext|> token from being generated.

Default:null
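The <|endoftext|> ban above can be sketched as a small helper that builds the bias map. Token ID 50256 comes from the example; the model name and request body are illustrative only:

```python
def ban_tokens(*token_ids: int) -> dict:
    # The docs describe a bias of -100 as effectively banning a token.
    # Keys must be token IDs as strings, per the JSON object schema.
    return {str(tid): -100 for tid in token_ids}

# Hypothetical request body; 50256 is the <|endoftext|> ID from the example.
body = {
    "model": "gpt-3.5-turbo-instruct",
    "prompt": "This is a test.",
    "logit_bias": ban_tokens(50256),
}
```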

logprobs integer | null

Include the log probabilities on the logprobs most likely output tokens, as well as the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response.

The maximum value for logprobs is 5.

Default:null

>= 0, <= 5

max_tokens integer | null

The maximum number of tokens that can be generated in the completion.

The token count of your prompt plus max_tokens cannot exceed the model’s context length. Example Python code for counting tokens.

Default:16

>= 0

Example:16
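The context-length constraint can be checked client-side once you know the prompt's token count. The 4096-token default below is only an assumed example value; the real limit depends on the model:

```python
def fits_context(prompt_tokens: int, max_tokens: int, context_length: int = 4096) -> bool:
    # Documented rule: prompt tokens + max_tokens must not exceed the
    # model's context length. context_length here is an assumption.
    return prompt_tokens + max_tokens <= context_length
```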

n integer | null

How many completions to generate for each prompt.

Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop.

Default:1

>= 1, <= 128

Example:1

presence_penalty number | null

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics.

See more information about frequency and presence penalties.

Default:0

>= -2, <= 2

seed integer | null (int64)

If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.

Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend.

stop One Of

Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.

Default:null

Variant 1 string | null

Default:<|endoftext|>

Example:

stream boolean | null

Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. Example Python code.

Default:false
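The streaming format described above can be consumed with a small parser. This is a sketch of the client-side parsing only; in practice the lines would come from the HTTP response body, and the sample events below are hand-written to mimic the documented shape:

```python
import json

def parse_sse_lines(lines):
    """Yield parsed payloads from data-only server-sent events.

    Each event line looks like 'data: {...}', and the stream is
    terminated by a 'data: [DONE]' message.
    """
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank separator lines
        data = line[len("data: "):]
        if data == "[DONE]":
            return
        yield json.loads(data)

# Hand-written events mimicking the documented format:
events = [
    'data: {"choices": [{"text": "Hello"}]}',
    'data: {"choices": [{"text": " world"}]}',
    'data: [DONE]',
]
text = "".join(e["choices"][0]["text"] for e in parse_sse_lines(events))
```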

stream_options object | null

Options for streaming response. Only set this when you set stream: true.

Default:null

suffix string | null

The suffix that comes after a completion of inserted text.

This parameter is only supported for gpt-3.5-turbo-instruct.

Default:null

Example:test.

temperature number | null

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

We generally recommend altering this or top_p but not both.

Default:1

>= 0, <= 2

Example:1

top_p number | null

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.

Default:1

>= 0, <= 1

Example:1

user string

A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more.

Example:user-1234

Response

200 application/json

OK

CreateCompletionResponse

Represents a completion response from the API. Note: both the streamed and non-streamed response objects share the same shape (unlike the chat endpoint).

id string required

A unique identifier for the completion.

choices array[object] required

The list of completion choices the model generated for the input prompt.

created integer required

The Unix timestamp (in seconds) of when the completion was created.

model string required

The model used for completion.

system_fingerprint string

This fingerprint represents the backend configuration that the model runs with.

Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.

object string required

The object type, which is always “text_completion”

Allowed values:text_completion

usage object

Usage statistics for the completion request.

post/completions

Body

{ "model": "gpt-3.5-turbo-instruct", "prompt": "This is a test." }
 
200 application/json

Embeddings

Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.

Creates an embedding vector representing the input text.

post
https://api.openai.com/v1/embeddings

Body

application/json

CreateEmbeddingRequest

* Additional properties are NOT allowed.
input One Of
required

Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. The input must not exceed the max input tokens for the model (8192 tokens for text-embedding-ada-002), cannot be an empty string, and any array must be 2048 dimensions or less. Example Python code for counting tokens. Some models may also impose a limit on total number of tokens summed across inputs.

Example:The quick brown fox jumped over the lazy dog

Variant 1 string

The string that will be turned into an embedding.

Default:

Example:This is a test.

model Any Of
required

ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.

Example:text-embedding-3-small

Variant 1 string
encoding_format string

The format to return the embeddings in. Can be either float or base64.

Allowed values:float, base64

Default:float

Example:float
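Clients commonly interpret the base64 format as a packed array of little-endian float32 values; that interpretation is an assumption here, sketched with the standard library and synthetic data rather than a real API response:

```python
import base64
import struct

def decode_base64_embedding(data: str) -> list:
    # Assumes the payload is a packed array of little-endian float32 values.
    raw = base64.b64decode(data)
    return list(struct.unpack(f"<{len(raw) // 4}f", raw))

# Round trip with synthetic data (values chosen to be exact in float32):
encoded = base64.b64encode(struct.pack("<3f", 0.5, -1.0, 2.0)).decode()
vector = decode_base64_embedding(encoded)
```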

dimensions integer

The number of dimensions the resulting output embeddings should have. Only supported in text-embedding-3 and later models.

>= 1

user string

A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more.

Example:user-1234

Response

200 application/json

OK

CreateEmbeddingResponse

data array[object] required

Represents an embedding vector returned by embedding endpoint.

model string required

The name of the model used to generate the embedding.

object string required

The object type, which is always “list”.

Allowed values:list

usage object required

The usage information for the request.

post/embeddings

Body

{ "input": "This is a test.", "model": "text-embedding-3-small" }
 
200 application/json
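The embeddings request schema above can be assembled and range-checked client-side. A minimal sketch mirroring the documented limits, with the model name taken from the example on this page:

```python
def build_embedding_request(inputs, model="text-embedding-3-small", dimensions=None):
    """Build a request body per the documented schema and limits."""
    if isinstance(inputs, str):
        if not inputs:
            raise ValueError("input cannot be an empty string")
    elif len(inputs) > 2048:
        raise ValueError("input arrays are limited to 2048 entries")
    body = {"input": inputs, "model": model}
    if dimensions is not None:
        if dimensions < 1:
            raise ValueError("dimensions must be >= 1")
        body["dimensions"] = dimensions  # text-embedding-3 and later only
    return body
```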

Fine-tuning

Manage fine-tuning jobs to tailor a model to your specific training data.

Creates a fine-tuning job which begins the process of creating a new model from a given dataset. The response includes details of the enqueued job, including job status and the name of the fine-tuned models once complete. Learn more about fine-tuning.

post
https://api.openai.com/v1/fine_tuning/jobs

Body

application/json

CreateFineTuningJobRequest

model Any Of
required

The name of the model to fine-tune. You can select one of the supported models.

Example:gpt-4o-mini

Variant 1 string
training_file string required

The ID of an uploaded file that contains training data.

See upload file for how to upload a file.

Your dataset must be formatted as a JSONL file. Additionally, you must upload your file with the purpose fine-tune.

The contents of the file should differ depending on if the model uses the chat, completions format, or if the fine-tuning method uses the preference format.

See the fine-tuning guide for more details.

Example:file-abc123

hyperparameters object DEPRECATED

The hyperparameters used for the fine-tuning job.
This value is now deprecated in favor of method, and should be passed in under the method parameter.

suffix string | null

A string of up to 64 characters that will be added to your fine-tuned model name.

For example, a suffix of “custom-model-name” would produce a model name like ft:gpt-4o-mini:openai:custom-model-name:7p4lURel.

Default:null

>= 1 characters, <= 64 characters

validation_file string | null

The ID of an uploaded file that contains validation data.

If you provide this file, the data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in the fine-tuning results file. The same data should not be present in both train and validation files.

Your dataset must be formatted as a JSONL file. You must upload your file with the purpose fine-tune.

See the fine-tuning guide for more details.

Example:file-abc123

integrations array[object] | null

A list of integrations to enable for your fine-tuning job.

seed integer | null

The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases. If a seed is not specified, one will be generated for you.

>= 0, <= 2147483647

Example:42

method object

The method used for fine-tuning.


Response

200 application/json

OK

FineTuningJob

The fine_tuning.job object represents a fine-tuning job that has been created through the API.

id string required

The object identifier, which can be referenced in the API endpoints.

created_at integer required

The Unix timestamp (in seconds) for when the fine-tuning job was created.

error object | null required

For fine-tuning jobs that have failed, this will contain more information on the cause of the failure.

fine_tuned_model string | null required

The name of the fine-tuned model that is being created. The value will be null if the fine-tuning job is still running.

finished_at integer | null required

The Unix timestamp (in seconds) for when the fine-tuning job was finished. The value will be null if the fine-tuning job is still running.

hyperparameters object required

The hyperparameters used for the fine-tuning job. This value will only be returned when running supervised jobs.

model string required

The base model that is being fine-tuned.

object string required

The object type, which is always “fine_tuning.job”.

Allowed values:fine_tuning.job

organization_id string required

The organization that owns the fine-tuning job.

result_files array[string] required

The compiled results file ID(s) for the fine-tuning job. You can retrieve the results with the Files API.

Example:file-abc123

status string required

The current status of the fine-tuning job, which can be either validating_files, queued, running, succeeded, failed, or cancelled.

Allowed values:validating_files, queued, running, succeeded, failed, cancelled

trained_tokens integer | null required

The total number of billable tokens processed by this fine-tuning job. The value will be null if the fine-tuning job is still running.

training_file string required

The file ID used for training. You can retrieve the training data with the Files API.

validation_file string | null required

The file ID used for validation. You can retrieve the validation results with the Files API.

integrations One Of
array | null

A list of integrations to enable for this fine-tuning job.

<= 5 items

Fine-Tuning Job Integration object
seed integer required

The seed used for the fine-tuning job.

estimated_finish integer | null

The Unix timestamp (in seconds) for when the fine-tuning job is estimated to finish. The value will be null if the fine-tuning job is not running.

method object

The method used for fine-tuning.

post/fine_tuning/jobs

Body

{ "model": "gpt-4o-mini", "training_file": "file-abc123" }
 
200 application/json
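The fine-tuning request body can likewise be assembled and validated client-side. A sketch against the documented limits, with the example model and file IDs taken from this page:

```python
def build_fine_tuning_job(model, training_file, suffix=None, seed=None):
    """Build a body for POST /v1/fine_tuning/jobs per the documented schema."""
    body = {"model": model, "training_file": training_file}
    if suffix is not None:
        if not 1 <= len(suffix) <= 64:
            raise ValueError("suffix must be 1-64 characters")
        body["suffix"] = suffix
    if seed is not None:
        if not 0 <= seed <= 2147483647:
            raise ValueError("seed must be between 0 and 2147483647")
        body["seed"] = seed
    return body

# Example mirroring the request body shown above:
job = build_fine_tuning_job("gpt-4o-mini", "file-abc123",
                            suffix="custom-model-name", seed=42)
```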