OpenAI

sttp.openai.OpenAI
class OpenAI(authToken: String, baseUri: Uri)

Attributes

Supertypes
class Object
trait Matchable
class Any
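
Example (illustrative sketch, not part of the generated documentation): every method on this class only builds an sttp request description; nothing is executed until the request is sent through a backend. The synchronous backend, the placeholder token, and the base URI below are assumptions; any sttp client4 backend can be used instead.

import sttp.client4._
import sttp.openai.OpenAI

@main def listModels(): Unit = {
  val openAI  = new OpenAI("your-api-key", uri"https://api.openai.com/v1") // token and base URI are placeholders
  val backend = DefaultSyncBackend()                                       // any sttp client4 backend works
  // getModels only describes the HTTP request; send(backend) executes it
  openAI.getModels.send(backend).body match {
    case Right(models) => println(models)                        // ModelsResponse
    case Left(error)   => println(s"OpenAI call failed: $error") // OpenAIException
  }
  backend.close()
}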

Members list

Value members

Concrete methods

def cancelFineTune(fineTuneId: String): Request[Either[OpenAIException, FineTuneResponse]]

Immediately cancel a fine-tune job.

Value parameters

fineTuneId

The ID of the fine-tune job to cancel.

Attributes

def cancelRun(threadId: String, runId: String): Request[Either[OpenAIException, RunData]]

Cancels a run that is in_progress.

Value parameters

runId

The ID of the run to cancel.

threadId

The ID of the thread to which this run belongs.

Attributes

def createAssistant(createAssistantBody: CreateAssistantBody): Request[Either[OpenAIException, AssistantData]]

Create an assistant with a model and instructions.

https://platform.openai.com/docs/api-reference/assistants/createAssistant

Value parameters

createAssistantBody

Create assistant request body.

Attributes

def createChatCompletion(chatBody: ChatBody): Request[Either[OpenAIException, ChatResponse]]

Creates a model response for the given chat conversation defined in chatBody.

https://platform.openai.com/docs/api-reference/chat/create

Value parameters

chatBody

Chat request body.

Attributes
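
Example (sketch): sending a chat completion and handling the Either result. The ChatBody import path is an assumption; the body itself is taken as a parameter rather than constructed here.

import sttp.client4.DefaultSyncBackend
import sttp.openai.OpenAI
import sttp.openai.requests.completions.chat.ChatRequestBody.ChatBody // import path assumed

def runChat(openAI: OpenAI, chatBody: ChatBody): Unit = {
  val backend = DefaultSyncBackend()
  try {
    openAI.createChatCompletion(chatBody).send(backend).body match {
      case Right(chatResponse)   => println(chatResponse) // ChatResponse with the model's choices
      case Left(openAIException) => println(s"Request failed: $openAIException")
    }
  } finally backend.close()
}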

def createChatCompletionAsBinaryStream[S](s: Streams[S], chatBody: ChatBody): StreamRequest[Either[OpenAIException, s.BinaryStream], S]

Creates a model response for the given chat conversation defined in chatBody.

The response is streamed in chunks as server-sent events, which are returned unparsed as a binary stream, using the given streams implementation.

https://platform.openai.com/docs/api-reference/chat/create

Value parameters

chatBody

Chat request body.

s

The streams implementation to use.

Attributes

def createChatCompletionAsInputStream(chatBody: ChatBody): Request[Either[OpenAIException, InputStream]]

Creates a model response for the given chat conversation defined in chatBody.

The response is streamed in chunks as server-sent events, which are returned unparsed as an InputStream.

https://platform.openai.com/docs/api-reference/chat/create

Value parameters

chatBody

Chat request body.

Attributes
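
Example (sketch): consuming the unparsed server-sent-event stream with plain java.io. The ChatBody import path is an assumption; the "data:" line format and the final "data: [DONE]" marker follow the OpenAI streaming protocol.

import java.io.{BufferedReader, InputStreamReader}
import java.nio.charset.StandardCharsets
import scala.util.Using
import sttp.client4.DefaultSyncBackend
import sttp.openai.OpenAI
import sttp.openai.requests.completions.chat.ChatRequestBody.ChatBody // import path assumed

def streamChat(openAI: OpenAI, chatBody: ChatBody): Unit = {
  val backend = DefaultSyncBackend()
  openAI.createChatCompletionAsInputStream(chatBody).send(backend).body match {
    case Right(inputStream) =>
      Using.resource(new BufferedReader(new InputStreamReader(inputStream, StandardCharsets.UTF_8))) { reader =>
        Iterator
          .continually(reader.readLine())
          .takeWhile(line => line != null && line != "data: [DONE]") // the stream ends with a [DONE] marker
          .filter(_.startsWith("data: "))
          .foreach(println) // each line carries one unparsed JSON chunk of the response
      }
    case Left(error) => println(s"Request failed: $error")
  }
  backend.close()
}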

def createCompletion(completionBody: CompletionsBody): Request[Either[OpenAIException, CompletionsResponse]]

Creates a completion for the provided prompt and parameters given in the request body.

https://platform.openai.com/docs/api-reference/completions/create

Value parameters

completionBody

Create completion request body.

Attributes

def createEdit(editRequestBody: EditBody): Request[Either[OpenAIException, EditResponse]]

Creates a new edit for the provided request body.

https://platform.openai.com/docs/api-reference/edits/create

Value parameters

editRequestBody

Edit request body.

Attributes

def createEmbeddings(embeddingsBody: EmbeddingsBody): Request[Either[OpenAIException, EmbeddingResponse]]

Creates an embedding vector representing the input text.

Value parameters

embeddingsBody

Embeddings request body.

Attributes

def createFineTune(fineTunesRequestBody: FineTunesRequestBody): Request[Either[OpenAIException, FineTuneResponse]]

Creates a job that fine-tunes a specified model from a given dataset.

https://platform.openai.com/docs/api-reference/fine-tunes/create

Value parameters

fineTunesRequestBody

Request body that will be used to create a fine-tune.

Attributes

def createImage(imageCreationBody: ImageCreationBody): Request[Either[OpenAIException, ImageResponse]]

Creates an image given a prompt in the request body.

https://platform.openai.com/docs/api-reference/images/create

Value parameters

imageCreationBody

Create image request body.

Attributes

def createModeration(moderationsBody: ModerationsBody): Request[Either[OpenAIException, ModerationData]]

Classifies if text violates OpenAI's Content Policy.

https://platform.openai.com/docs/api-reference/moderations/create

Value parameters

moderationsBody

Moderation request body.

Attributes

def createRun(threadId: String, createRun: CreateRun): Request[Either[OpenAIException, RunData]]

Create a run.

Value parameters

createRun

Create run request body.

threadId

The ID of the thread to run.

Attributes

def createThread(createThreadBody: CreateThreadBody): Request[Either[OpenAIException, ThreadData]]

Create a thread.

Value parameters

createThreadBody

Create thread request body.

Attributes

def createThreadAndRun(createThreadAndRun: CreateThreadAndRun): Request[Either[OpenAIException, RunData]]

Create a thread and run it in one request.

Value parameters

createThreadAndRun

Create thread and run request body.

Attributes

def createThreadMessage(threadId: String, message: CreateMessage): Request[Either[OpenAIException, MessageData]]

Create a message.

Value parameters

threadId

The ID of the thread to create a message for.

Attributes

def createTranscription(file: File, model: RecognitionModel): Request[Either[OpenAIException, AudioResponse]]

Transcribes audio into the input language.

https://platform.openai.com/docs/api-reference/audio/create

Value parameters

file

The audio file to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.

model

ID of the model to use. Only whisper-1 is currently available.

Attributes

def createTranscription(systemPath: String, model: RecognitionModel): Request[Either[OpenAIException, AudioResponse]]

Transcribes audio into the input language.

https://platform.openai.com/docs/api-reference/audio/create

Value parameters

model

ID of the model to use. Only whisper-1 is currently available.

systemPath

Path to the audio file to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.

Attributes

def createTranscription(transcriptionConfig: TranscriptionConfig): Request[Either[OpenAIException, AudioResponse]]

Transcribes audio into the input language.

Value parameters

transcriptionConfig

An instance of the case class TranscriptionConfig containing the necessary parameters for the audio transcription.

Attributes

Returns

The transcription of the audio file.
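
Example (sketch): transcribing a local file via the path-based overload. The file path is a placeholder, and both the RecognitionModel import path and the whisper-1 value name are assumptions.

import sttp.client4.DefaultSyncBackend
import sttp.openai.OpenAI
import sttp.openai.requests.audio.RecognitionModel // import path assumed

def transcribe(openAI: OpenAI): Unit = {
  val backend = DefaultSyncBackend()
  val result = openAI
    .createTranscription("recordings/meeting.mp3", RecognitionModel.whisper1) // value name assumed
    .send(backend)
    .body                   // Either[OpenAIException, AudioResponse]
  result.foreach(println)   // prints the AudioResponse on success
  backend.close()
}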

def createTranslation(file: File, model: RecognitionModel): Request[Either[OpenAIException, AudioResponse]]

Translates audio into English text.

Value parameters

file

The audio file to translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.

model

ID of the model to use. Only whisper-1 is currently available.

Attributes

def createTranslation(systemPath: String, model: RecognitionModel): Request[Either[OpenAIException, AudioResponse]]

Translates audio into English text.

Value parameters

model

ID of the model to use. Only whisper-1 is currently available.

systemPath

Path to the audio file to translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.

Attributes

def createTranslation(translationConfig: TranslationConfig): Request[Either[OpenAIException, AudioResponse]]

Translates audio into English text.

Value parameters

translationConfig

An instance of the case class TranslationConfig containing the necessary parameters for the audio translation.

Attributes

def createVectorStore(createVectorStoreBody: CreateVectorStoreBody): Request[Either[OpenAIException, VectorStore]]

Creates a vector store.

Value parameters

createVectorStoreBody

Options for the new vector store.

Attributes

Returns

The newly created vector store, or an exception.

def createVectorStoreFile(vectorStoreId: String, createVectorStoreFileBody: CreateVectorStoreFileBody): Request[Either[OpenAIException, VectorStoreFile]]

Creates a vector store file.

Value parameters

createVectorStoreFileBody

Properties of the file.

vectorStoreId

ID of the vector store to attach the file to.

Attributes

Returns

The newly created vector store file.

def deleteAssistant(assistantId: String): Request[Either[OpenAIException, DeleteAssistantResponse]]

Delete an assistant.

Value parameters

assistantId

The ID of the assistant to delete.

Attributes

def deleteFile(fileId: String): Request[Either[OpenAIException, DeletedFileData]]

Delete a file.

Value parameters

fileId

The ID of the file to use for this request.

Attributes

def deleteFineTuneModel(model: String): Request[Either[OpenAIException, DeleteFineTuneModelResponse]]

Delete a fine-tuned model. You must have the Owner role in your organization.

https://platform.openai.com/docs/api-reference/fine-tunes/delete-model

Value parameters

model

The model to delete.

Attributes

def deleteThread(threadId: String): Request[Either[OpenAIException, DeleteThreadResponse]]

Delete a thread.

Value parameters

threadId

The ID of the thread to delete.

Attributes

def deleteVectorStore(vectorStoreId: String): Request[Either[OpenAIException, DeleteVectorStoreResponse]]

Deletes a vector store.

Value parameters

vectorStoreId

ID of the vector store to delete.

Attributes

Returns

Result of the delete operation.

def deleteVectorStoreFile(vectorStoreId: String, fileId: String): Request[Either[OpenAIException, DeleteVectorStoreFileResponse]]

Deletes a vector store file by ID.

Value parameters

fileId

ID of the vector store file.

vectorStoreId

ID of the vector store.

Attributes

Returns

Result of the delete operation.

def getFiles: Request[Either[OpenAIException, FilesResponse]]

Returns a list of files that belong to the user's organization.

https://platform.openai.com/docs/api-reference/files

Attributes
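
Example (sketch): listing files and fetching one file's metadata and contents; only the documented signatures and a synchronous backend (an assumption) are used, and the file ID is supplied by the caller.

import sttp.client4.DefaultSyncBackend
import sttp.openai.OpenAI

def inspectFile(openAI: OpenAI, fileId: String): Unit = {
  val backend = DefaultSyncBackend()
  println(openAI.getFiles.send(backend).body)                    // all files in the organization
  println(openAI.retrieveFile(fileId).send(backend).body)        // metadata for one file
  println(openAI.retrieveFileContent(fileId).send(backend).body) // raw contents as a String
  backend.close()
}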

def getFineTuneEvents(fineTuneId: String): Request[Either[OpenAIException, FineTuneEventsResponse]]

Get fine-grained status updates for a fine-tune job.

https://platform.openai.com/docs/api-reference/fine-tunes/events

Value parameters

fineTuneId

The ID of the fine-tune job to get events for.

Attributes
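
Example (sketch): checking on a fine-tune job by its ID; the synchronous backend is an assumption and the cancel call is left commented out.

import sttp.client4.DefaultSyncBackend
import sttp.openai.OpenAI

def monitorFineTune(openAI: OpenAI, fineTuneId: String): Unit = {
  val backend = DefaultSyncBackend()
  println(openAI.retrieveFineTune(fineTuneId).send(backend).body)  // current job status
  println(openAI.getFineTuneEvents(fineTuneId).send(backend).body) // fine-grained status updates
  // openAI.cancelFineTune(fineTuneId).send(backend)               // uncomment to cancel the job immediately
  backend.close()
}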

List of your organization's fine-tuning jobs.

https://platform.openai.com/docs/api-reference/fine-tunes/list

Attributes

def getModels: Request[Either[OpenAIException, ModelsResponse]]

Lists the currently available models, and provides basic information about each one such as the owner and availability.

https://platform.openai.com/docs/api-reference/models

Attributes

def imageEdits(image: File, prompt: String): Request[Either[OpenAIException, ImageResponse]]

Creates edited or extended images given an original image and a prompt.

https://platform.openai.com/docs/api-reference/images/create-edit

Value parameters

image

The image to be edited. Must be a valid PNG file, less than 4MB, and square. If mask is not provided, image must have transparency, which will be used as the mask.

prompt

A text description of the desired image. The maximum length is 1000 characters.

Attributes

def imageEdits(systemPath: String, prompt: String): Request[Either[OpenAIException, ImageResponse]]

Creates edited or extended images given an original image and a prompt.

https://platform.openai.com/docs/api-reference/images/create-edit

Value parameters

prompt

A text description of the desired image. The maximum length is 1000 characters.

systemPath

Path to the image to be edited. Must be a valid PNG file, less than 4MB, and square. If mask is not provided, image must have transparency, which will be used as the mask.

Attributes

def imageEdits(imageEditsConfig: ImageEditsConfig): Request[Either[OpenAIException, ImageResponse]]

Creates edited or extended images given an original image and a prompt.

https://platform.openai.com/docs/api-reference/images/create-edit

Value parameters

imageEditsConfig

An instance of the case class ImageEditsConfig containing the necessary parameters for editing the image.

Attributes

def imageVariations(image: File): Request[Either[OpenAIException, ImageResponse]]

Creates a variation of a given image.

Value parameters

image

The image to use as the basis for the variation. Must be a valid PNG file, less than 4MB, and square.

Attributes

def imageVariations(systemPath: String): Request[Either[OpenAIException, ImageResponse]]

Creates a variation of a given image.

Value parameters

systemPath

Path to the image to use as the basis for the variation. Must be a valid PNG file, less than 4MB, and square.

Attributes
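
Example (sketch): editing an image and requesting a variation via the path-based overloads; the paths and prompt are placeholders, and the synchronous backend is an assumption.

import sttp.client4.DefaultSyncBackend
import sttp.openai.OpenAI

def editAndVary(openAI: OpenAI): Unit = {
  val backend = DefaultSyncBackend()
  // Both overloads take a filesystem path to a square PNG under 4MB.
  println(openAI.imageEdits("images/logo.png", "Add a thin red border").send(backend).body)
  println(openAI.imageVariations("images/logo.png").send(backend).body)
  backend.close()
}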

def imageVariations(imageVariationsConfig: ImageVariationsConfig): Request[Either[OpenAIException, ImageResponse]]

Creates a variation of a given image.

Value parameters

imageVariationsConfig

An instance of the case class ImageVariationsConfig containing the necessary parameters for the image variation.

Attributes

def listAssistants(queryParameters: QueryParameters): Request[Either[OpenAIException, ListAssistantsResponse]]

Returns a list of assistants.

def listRunSteps(threadId: String, runId: String, queryParameters: QueryParameters): Request[Either[OpenAIException, ListRunStepsResponse]]

Returns a list of run steps belonging to a run.

https://platform.openai.com/docs/api-reference/runs/listRunSteps

Value parameters

runId

The ID of the run the run steps belong to.

threadId

The ID of the thread the run and run steps belong to.

Attributes

def listRuns(threadId: String): Request[Either[OpenAIException, ListRunsResponse]]

Returns a list of runs belonging to a thread.

https://platform.openai.com/docs/api-reference/runs/listRuns

Value parameters

threadId

The ID of the thread the run belongs to.

Attributes

def listThreadMessages(threadId: String, queryParameters: QueryParameters): Request[Either[OpenAIException, ListMessagesResponse]]

Returns a list of messages for a given thread.

https://platform.openai.com/docs/api-reference/messages/listMessages

Value parameters

threadId

The ID of the thread the messages belong to.

Attributes

def listVectorStoreFiles(vectorStoreId: String, queryParameters: ListVectorStoreFilesBody): Request[Either[OpenAIException, ListVectorStoreFilesResponse]]

Lists files belonging to a particular vector store.

Value parameters

queryParameters

Search parameters.

vectorStoreId

ID of the vector store.

Attributes

Returns

List of vector store files

Lists vector stores.

Value parameters

queryParameters

Search parameters.

Attributes

Returns

List of vector stores matching the criteria, or an exception.

def modifyAssistant(assistantId: String, modifyAssistantBody: ModifyAssistantBody): Request[Either[OpenAIException, AssistantData]]

Modifies an assistant.

Value parameters

assistantId

The ID of the assistant to modify.

modifyAssistantBody

Modify assistant request body.

Attributes

def modifyMessage(threadId: String, messageId: String, metadata: Map[String, String]): Request[Either[OpenAIException, MessageData]]

Modifies a message.

Value parameters

messageId

The ID of the message to modify.

threadId

The ID of the thread to which this message belongs.

Attributes

def modifyRun(threadId: String, runId: String, metadata: Map[String, String]): Request[Either[OpenAIException, RunData]]

Modifies a run.

Value parameters

runId

The ID of the run to modify.

threadId

The ID of the thread that was run.

Attributes

def modifyThread(threadId: String, metadata: Map[String, String]): Request[Either[OpenAIException, ThreadData]]

Modifies a thread.

Value parameters

threadId

The ID of the thread to modify. Only the metadata can be modified.

Attributes
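
Example (sketch): attaching metadata to a thread; the metadata keys and values are placeholders and the synchronous backend is an assumption.

import sttp.client4.DefaultSyncBackend
import sttp.openai.OpenAI

def tagThread(openAI: OpenAI, threadId: String): Unit = {
  val backend = DefaultSyncBackend()
  val updated = openAI
    .modifyThread(threadId, metadata = Map("owner" -> "billing-team", "priority" -> "high"))
    .send(backend)
    .body
  println(updated) // updated ThreadData, or an OpenAIException on the Left
  backend.close()
}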

def modifyVectorStore(vectorStoreId: String, modifyVectorStoreBody: ModifyVectorStoreBody): Request[Either[OpenAIException, VectorStore]]

Modifies a vector store.

Value parameters

modifyVectorStoreBody

New values for the vector store's properties.

vectorStoreId

ID of the vector store to modify.

Attributes

Returns

The modified vector store object.

def retrieveAssistant(assistantId: String): Request[Either[OpenAIException, AssistantData]]

Retrieves an assistant.

Value parameters

assistantId

The ID of the assistant to retrieve.

Attributes

def retrieveFile(fileId: String): Request[Either[OpenAIException, FileData]]

Returns information about a specific file.

https://platform.openai.com/docs/api-reference/files/retrieve

Value parameters

fileId

The ID of the file to use for this request.

Attributes

def retrieveFileContent(fileId: String): Request[Either[OpenAIException, String]]

Returns the contents of the specified file.

https://platform.openai.com/docs/api-reference/files/retrieve-content

Value parameters

fileId

The ID of the file.

Attributes

def retrieveFineTune(fineTuneId: String): Request[Either[OpenAIException, FineTuneResponse]]

Gets info about the fine-tune job.

Value parameters

fineTuneId

The ID of the fine-tune job.

Attributes

def retrieveModel(modelId: String): Request[Either[OpenAIException, ModelData]]

Retrieves a model instance, providing basic information about the model such as the owner and permissions.

https://platform.openai.com/docs/api-reference/models/retrieve

Value parameters

modelId

The ID of the model to use for this request.

Attributes

def retrieveRun(threadId: String, runId: String): Request[Either[OpenAIException, RunData]]

Retrieves a run.

Value parameters

runId

The ID of the run to retrieve.

threadId

The ID of the thread that was run.

Attributes
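
Example (sketch): retrieving a run and, optionally, cancelling it; the thread and run IDs are supplied by the caller, and the synchronous backend is an assumption.

import sttp.client4.DefaultSyncBackend
import sttp.openai.OpenAI

def inspectRun(openAI: OpenAI, threadId: String, runId: String): Unit = {
  val backend = DefaultSyncBackend()
  openAI.retrieveRun(threadId, runId).send(backend).body match {
    case Right(runData) => println(runData) // inspect the run's status here
    case Left(error)    => println(s"Could not retrieve run: $error")
  }
  // A run that is still in_progress can be cancelled:
  // openAI.cancelRun(threadId, runId).send(backend)
  backend.close()
}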

def retrieveRunStep(threadId: String, runId: String, stepId: String): Request[Either[OpenAIException, RunStepData]]

Retrieves a run step.

Value parameters

runId

The ID of the run to which the run step belongs.

stepId

The ID of the run step to retrieve.

threadId

The ID of the thread to which the run and run step belong.

Attributes

def retrieveThread(threadId: String): Request[Either[OpenAIException, ThreadData]]

Retrieves a thread.

Value parameters

threadId

The ID of the thread to retrieve.

Attributes

def retrieveThreadMessage(threadId: String, messageId: String): Request[Either[OpenAIException, MessageData]]

Retrieve a message.

Value parameters

messageId

The ID of the message to retrieve.

threadId

The ID of the thread to which this message belongs.

Attributes

def retrieveVectorStore(vectorStoreId: String): Request[Either[OpenAIException, VectorStore]]

Retrieves a vector store by ID.

Value parameters

vectorStoreId

ID of the vector store.

Attributes

Returns

The vector store object, or an exception.

def retrieveVectorStoreFile(vectorStoreId: String, fileId: String): Request[Either[OpenAIException, VectorStoreFile]]

Retrieves a vector store file by ID.

Value parameters

fileId

ID of the vector store file.

vectorStoreId

ID of the vector store.

Attributes

Returns

The vector store file.
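
Example (sketch): removing a file from a vector store and then deleting the store; the IDs are placeholders supplied by the caller, and the synchronous backend is an assumption.

import sttp.client4.DefaultSyncBackend
import sttp.openai.OpenAI

def removeFromVectorStore(openAI: OpenAI, vectorStoreId: String, fileId: String): Unit = {
  val backend = DefaultSyncBackend()
  println(openAI.retrieveVectorStoreFile(vectorStoreId, fileId).send(backend).body) // confirm the file exists
  println(openAI.deleteVectorStoreFile(vectorStoreId, fileId).send(backend).body)   // detach it from the store
  println(openAI.deleteVectorStore(vectorStoreId).send(backend).body)               // then drop the store itself
  backend.close()
}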

def submitToolOutputs(threadId: String, runId: String, toolOutputs: Seq[ToolOutput]): Request[Either[OpenAIException, RunData]]

When a run has the status: "requires_action" and required_action.type is submit_tool_outputs, this endpoint can be used to submit the outputs from the tool calls once they're all completed. All outputs must be submitted in a single request.

https://platform.openai.com/docs/api-reference/runs/submitToolOutputs

Value parameters

runId

The ID of the run that requires the tool output submission.

threadId

The ID of the thread to which this run belongs.

toolOutputs

A list of tools for which the outputs are being submitted.

Attributes

def uploadFile(file: File, purpose: String): Request[Either[OpenAIException, FileData]]

Upload a file that contains document(s) to be used across various endpoints/features. Currently, the size of all the files uploaded by one organization can be up to 1 GB. Please contact OpenAI if you need to increase the storage limit.

https://platform.openai.com/docs/api-reference/files/upload

Value parameters

file

JSON Lines file to be uploaded. If the purpose is set to "fine-tune", each line is a JSON record with "prompt" and "completion" fields representing your training examples.

purpose

The intended purpose of the uploaded documents. Use "fine-tune" for Fine-tuning. This allows OpenAI to validate the format of the uploaded file.

Attributes

def uploadFile(file: File): Request[Either[OpenAIException, FileData]]

Upload a file that contains document(s) to be used across various endpoints/features. Currently, the size of all the files uploaded by one organization can be up to 1 GB. Please contact OpenAI if you need to increase the storage limit.

https://platform.openai.com/docs/api-reference/files/upload

Value parameters

file

JSON Lines file to be uploaded. The purpose is set to "fine-tune"; each line is a JSON record with "prompt" and "completion" fields representing your training examples.

Attributes

def uploadFile(systemPath: String, purpose: String): Request[Either[OpenAIException, FileData]]

Upload a file that contains document(s) to be used across various endpoints/features. Currently, the size of all the files uploaded by one organization can be up to 1 GB. Please contact OpenAI if you need to increase the storage limit.

https://platform.openai.com/docs/api-reference/files/upload

Value parameters

purpose

The intended purpose of the uploaded documents. Use "fine-tune" for Fine-tuning. This allows OpenAI to validate the format of the uploaded file.

systemPath

Path to the JSON Lines file to be uploaded. If the purpose is set to "fine-tune", each line is a JSON record with "prompt" and "completion" fields representing your training examples.

Attributes
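
Example (sketch): uploading a JSON Lines training file via the path-based overload; the path is a placeholder and the synchronous backend is an assumption.

import sttp.client4.DefaultSyncBackend
import sttp.openai.OpenAI

def uploadTrainingData(openAI: OpenAI): Unit = {
  val backend = DefaultSyncBackend()
  // Path and purpose are plain Strings; "fine-tune" lets OpenAI validate the JSONL format.
  openAI.uploadFile("data/training.jsonl", "fine-tune").send(backend).body match {
    case Right(fileData) => println(s"Uploaded: $fileData")
    case Left(error)     => println(s"Upload failed: $error")
  }
  backend.close()
}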

def uploadFile(systemPath: String): Request[Either[OpenAIException, FileData]]

Upload a file that contains document(s) to be used across various endpoints/features. Currently, the size of all the files uploaded by one organization can be up to 1 GB. Please contact OpenAI if you need to increase the storage limit.

https://platform.openai.com/docs/api-reference/files/upload

Value parameters

systemPath

Path to the JSON Lines file to be uploaded. The purpose is set to "fine-tune"; each line is a JSON record with "prompt" and "completion" fields representing your training examples.

Attributes