CreateEmbeddingResponse
object
Represents an embedding vector returned by the embedding endpoint.
The name of the model used to generate the embedding.
The object type, which is always “list”.
Allowed values: list
The usage information for the request.
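For reference, a minimal sketch of reading these response fields, assuming the official openai Python SDK (v1+); the embedding model name is illustrative, not part of this schema:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.embeddings.create(
    model="text-embedding-3-small",  # assumed model name, for illustration only
    input="The quick brown fox",
)

print(response.object)              # "list"
print(response.model)               # the model used to generate the embedding
print(response.usage.total_tokens)  # usage information for the request
print(len(response.data[0].embedding))  # the embedding vector itself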
CreateFileRequest
object
The File object (not file name) to be uploaded.
The intended purpose of the uploaded file.
Use “assistants” for Assistants and Message files, “vision” for Assistants image file inputs, “batch” for Batch API, and “fine-tune” for Fine-tuning.
Allowed values: assistants, batch, fine-tune, vision
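A minimal upload sketch, assuming the official openai Python SDK (v1+); the local file name is illustrative:

from openai import OpenAI

client = OpenAI()

uploaded = client.files.create(
    file=open("training_data.jsonl", "rb"),  # the File object itself, not a file name
    purpose="fine-tune",                     # one of: assistants, batch, fine-tune, vision
)
print(uploaded.id)  # e.g. "file-abc123"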
CreateFineTuningJobRequest
object

Any Of

The name of the model to fine-tune. You can select one of the supported models.
Example: gpt-4o-mini
The ID of an uploaded file that contains training data.
See upload file for how to upload a file.
Your dataset must be formatted as a JSONL file. Additionally, you must upload your file with the purpose fine-tune.
The contents of the file differ depending on whether the model uses the chat or completions format, or whether the fine-tuning method uses the preference format.
See the fine-tuning guide for more details.
Example: file-abc123
The hyperparameters used for the fine-tuning job.
This value is now deprecated in favor of method, and should be passed in under the method parameter.
A string of up to 64 characters that will be added to your fine-tuned model name.
For example, a suffix of “custom-model-name” would produce a model name like ft:gpt-4o-mini:openai:custom-model-name:7p4lURel.
Default: null
>= 1 characters, <= 64 characters
The ID of an uploaded file that contains validation data.
If you provide this file, the data is used to generate validation
metrics periodically during fine-tuning. These metrics can be viewed in
the fine-tuning results file.
The same data should not be present in both train and validation files.
Your dataset must be formatted as a JSONL file. You must upload your file with the purpose fine-tune.
See the fine-tuning guide for more details.
Example: file-abc123
A list of integrations to enable for your fine-tuning job.
The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases.
If a seed is not specified, one will be generated for you.
>= 0, <= 2147483647
Example: 42
The method used for fine-tuning.
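A minimal sketch of creating a fine-tuning job with these fields, assuming the official openai Python SDK (v1+); the file IDs, suffix, and JSONL line are illustrative:

from openai import OpenAI

client = OpenAI()

# Each line of the training JSONL uses the chat format, for example:
# {"messages": [{"role": "user", "content": "Hi"}, {"role": "assistant", "content": "Hello!"}]}

job = client.fine_tuning.jobs.create(
    model="gpt-4o-mini",            # one of the supported fine-tunable models
    training_file="file-abc123",    # ID of an uploaded JSONL file with purpose "fine-tune"
    validation_file="file-def456",  # optional; used for periodic validation metrics
    suffix="custom-model-name",     # up to 64 characters appended to the fine-tuned model name
    seed=42,                        # reproducibility; generated automatically if omitted
    # hyperparameters is deprecated in favor of the method parameter (see above)
)
print(job.id, job.status)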
CreateImageEditRequest
object
The image to edit. Must be a valid PNG file, less than 4MB, and square. If mask is not provided, image must have transparency, which will be used as the mask.
A text description of the desired image(s). The maximum length is 1000 characters.
Example: A cute baby sea otter wearing a beret
An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where image should be edited. Must be a valid PNG file, less than 4MB, and have the same dimensions as image.
Any Of

The model to use for image generation. Only dall-e-2 is supported at this time.
Default: dall-e-2
Example: dall-e-2
The number of images to generate. Must be between 1 and 10.
Default: 1
>= 1, <= 10
Example: 1
The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.
Allowed values: 256x256, 512x512, 1024x1024
Default: 1024x1024
Example: 1024x1024
The format in which the generated images are returned. Must be one of url or b64_json. URLs are only valid for 60 minutes after the image has been generated.
Allowed values: url, b64_json
Default: url
Example: url
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more.
Example: user-1234
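A minimal sketch of an image edit request built from these fields, assuming the official openai Python SDK (v1+); the local file names are illustrative:

from openai import OpenAI

client = OpenAI()

result = client.images.edit(
    image=open("otter.png", "rb"),      # square PNG under 4MB; needs transparency if no mask is given
    mask=open("otter_mask.png", "rb"),  # optional; fully transparent areas mark the region to edit
    prompt="A cute baby sea otter wearing a beret",  # up to 1000 characters
    model="dall-e-2",                   # only dall-e-2 is supported at this time
    n=1,                                # between 1 and 10
    size="1024x1024",                   # 256x256, 512x512, or 1024x1024
    response_format="url",              # "url" or "b64_json"; URLs expire after 60 minutes
)
print(result.data[0].url)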
CreateImageRequest
object
A text description of the desired image(s). The maximum length is 1000 characters for dall-e-2 and 4000 characters for dall-e-3.
Example: A cute baby sea otter
Any Of

The model to use for image generation.
Default: dall-e-2
Example: dall-e-3
The number of images to generate. Must be between 1 and 10. For dall-e-3, only n=1 is supported.
Default: 1
>= 1, <= 10
Example: 1
The quality of the image that will be generated. hd creates images with finer details and greater consistency across the image. This param is only supported for dall-e-3.
Allowed values: standard, hd
Default: standard
Example: standard
The format in which the generated images are returned. Must be one of url or b64_json. URLs are only valid for 60 minutes after the image has been generated.
Allowed values: url, b64_json
Default: url
Example: url
The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024 for dall-e-2. Must be one of 1024x1024, 1792x1024, or 1024x1792 for dall-e-3 models.
Allowed values: 256x256, 512x512, 1024x1024, 1792x1024, 1024x1792
Default: 1024x1024
Example: 1024x1024
The style of the generated images. Must be one of vivid or natural. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images. This param is only supported for dall-e-3.
Allowed values: vivid, natural
Default: vivid
Example: vivid
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more.
Example: user-1234
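A minimal sketch of an image generation request using these fields, assuming the official openai Python SDK (v1+):

from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",
    prompt="A cute baby sea otter",  # up to 4000 characters for dall-e-3 (1000 for dall-e-2)
    n=1,                             # dall-e-3 supports only n=1
    size="1024x1024",                # dall-e-3 also accepts 1792x1024 and 1024x1792
    quality="hd",                    # "standard" or "hd"; dall-e-3 only
    style="vivid",                   # "vivid" or "natural"; dall-e-3 only
    response_format="url",           # "url" or "b64_json"; URLs expire after 60 minutes
)
print(result.data[0].url)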