OpenAIService

io.cequence.openaiscala.service.OpenAIService

Central service to access all public OpenAI WS endpoints as defined at the OpenAI API reference page.

The following services are supported:

  • '''Models''': listModels, and retrieveModel
  • '''Completions''': createCompletion
  • '''Chat Completions''': createChatCompletion, createChatFunCompletion (deprecated), and createChatToolCompletion
  • '''Edits''': createEdit (deprecated)
  • '''Images''': createImage, createImageEdit, createImageVariation
  • '''Embeddings''': createEmbeddings
  • '''Batches''': createBatch, retrieveBatch, cancelBatch, and listBatches
  • '''Audio''': createAudioTranscription, createAudioTranslation, and createAudioSpeech
  • '''Files''': listFiles, uploadFile, deleteFile, retrieveFile, and retrieveFileContent
  • '''Fine-tunes''': createFineTune, listFineTunes, retrieveFineTune, cancelFineTune, listFineTuneEvents, listFineTuneCheckpoints, and deleteFineTuneModel
  • '''Moderations''': createModeration
  • '''Threads''': createThread, retrieveThread, modifyThread, and deleteThread
  • '''Thread Messages''': createThreadMessage, retrieveThreadMessage, modifyThreadMessage, listThreadMessages, retrieveThreadMessageFile, and listThreadMessageFiles
  • '''Runs''': createRun, etc.
  • '''Run Steps''': listRunSteps, etc.
  • '''Vector Stores''': createVectorStore, modifyVectorStore, listVectorStores, retrieveVectorStore, deleteVectorStore etc.
  • '''Vector Store Files''': createVectorStoreFile, listVectorStoreFiles, retrieveVectorStoreFile, deleteVectorStoreFile etc.
  • '''Vector Store File Batches''': TODO etc.
  • '''Assistants''': createAssistant, listAssistants, retrieveAssistant, modifyAssistant, and deleteAssistant
  • '''Assistant Files''': createAssistantFile, listAssistantFiles, retrieveAssistantFile, and deleteAssistantFile
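
A minimal usage sketch (hedged: the factory entry point OpenAIServiceFactory and the Akka-based implicits reflect the standard client setup; adjust imports to the client version you use):

import akka.actor.ActorSystem
import akka.stream.Materializer
import io.cequence.openaiscala.service.{OpenAIService, OpenAIServiceFactory}
import scala.concurrent.ExecutionContext

implicit val system: ActorSystem = ActorSystem()
implicit val ec: ExecutionContext = system.dispatcher
implicit val materializer: Materializer = Materializer(system)

// the API key is picked up from config/env by the factory (assumed); pass it explicitly if needed
val service: OpenAIService = OpenAIServiceFactory()

// e.g. list the available models and print their IDs
service.listModels.map(_.foreach(model => println(model.id)))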

Attributes

Since:

Sep 2024

Supertypes
trait CloseableService
class Object
trait Matchable
class Any

Members list

Type members

Inherited classlikes

Attributes

Inherited from:
OpenAIServiceConsts

Value members

Abstract methods

def buildAndUploadBatchFile(model: String, requests: Seq[BatchRowBase], displayFileName: Option[String]): Future[FileInfo]

Builds a temporary file from requests and uploads it.

Attributes

displayFileName

(Explicit) display file name; if not specified a full path is used instead.

model

model to be used for the requests of this batch

requests

requests to be batch-processed

def buildBatchFileContent(model: String, requests: Seq[BatchRowBase]): Future[Seq[BatchRow]]

Example output corresponds to a JSON like this:

[
 {
   "custom_id": "request-1",
   "method": "POST",
   "url": "/v1/chat/completions",
   "body": {
     "model": "gpt-3.5-turbo",
     "messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "What is 2+2?"}]
   }
 }
]

Attributes

def cancelBatch(batchId: String): Future[Option[Batch]]

Cancels an in-progress batch.

Attributes

batchId

The ID of the batch to cancel. This should be the unique identifier for the in-progress batch.

Returns:

Future[Option[Batch]] A future that resolves to an Option containing the Batch object after cancellation. Returns None if the batch with the specified ID does not exist or if it is not in-progress. OpenAI Doc

def cancelFineTune(fineTuneId: String): Future[Option[FineTuneJob]]

Immediately cancel a fine-tune job.

Attributes

fineTuneId

The ID of the fine-tune job to cancel

Returns:

fine tune info or None if not found

See also:
def cancelRun(threadId: String, runId: String): Future[Run]

Cancels a run that is in_progress

Attributes

runId

The ID of the run to cancel.

threadId

The ID of the thread to which this run belongs.

Returns:

The modified run object matching the specified ID.

def createAssistant(model: String, name: Option[String], description: Option[String], instructions: Option[String], tools: Seq[AssistantTool], toolResources: Option[AssistantToolResource], metadata: Map[String, String]): Future[Assistant]

Create an assistant with a model and instructions.

Attributes

description

The description of the assistant. The maximum length is 512 characters.

instructions

The system instructions that the assistant uses. The maximum length is 32768 characters.

metadata

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

model

The ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.

name

The name of the assistant. The maximum length is 256 characters.

toolResources

A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.

tools

A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, retrieval, or function.

See also:
def createAudioSpeech(input: String, settings: CreateSpeechSettings): Future[Source[ByteString, _]]

Generates audio from the input text.

Attributes

input

The text to generate audio for. The maximum length is 4096 characters.

Returns:

The audio file content.

See also:
def createAudioTranscription(file: File, prompt: Option[String], settings: CreateTranscriptionSettings): Future[TranscriptResponse]

Transcribes audio into the input language.

Attributes

file

The audio file to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.

prompt

An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.

Returns:

transcription text
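
A hedged sketch (the settings class location, its model field, and the text field on TranscriptResponse are assumptions; whisper-1 is the usual transcription model):

import java.io.File
import io.cequence.openaiscala.domain.settings.CreateTranscriptionSettings

service
  .createAudioTranscription(
    file = new File("/tmp/meeting.mp3"),
    prompt = None,
    settings = CreateTranscriptionSettings(model = "whisper-1") // field name assumed
  )
  .map(response => println(response.text)) // `text` field assumed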

See also:
def createAudioTranslation(file: File, prompt: Option[String], settings: CreateTranslationSettings): Future[TranscriptResponse]

Translates audio into English.

Attributes

file

The audio file to translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.

prompt

An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.

Returns:

translation text

See also:
def createBatch(inputFileId: String, endpoint: BatchEndpoint, completionWindow: CompletionWindow, metadata: Map[String, String]): Future[Batch]

Creates and executes a batch from an uploaded file of requests.

Attributes

completionWindow

The time frame within which the batch should be processed. Currently only TwentyFourHours is supported.

endpoint

The endpoint to be used for all requests in the batch. Supported values are ChatCompletions and Embeddings.

inputFileId

The ID of an uploaded file that contains requests for the new batch. The input file must be formatted as a JSONL file, and must be uploaded with the purpose "batch".

metadata

Optional custom metadata for the batch.

Returns:

Future[Batch] A future that resolves to a Batch object containing details about the created batch. OpenAI Doc
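
A hedged sketch of the batch workflow (the id fields on FileInfo/Batch and the exact locations of BatchEndpoint/CompletionWindow are assumptions; the enum member names follow the supported values listed above):

import java.io.File
import io.cequence.openaiscala.domain._

val batchResult =
  for {
    // upload a JSONL file with the batch requests (the "batch" purpose is handled by uploadBatchFile)
    fileInfo <- service.uploadBatchFile(new File("/tmp/batch-requests.jsonl"), displayFileName = None)
    batch <- service.createBatch(
      inputFileId = fileInfo.id,
      endpoint = BatchEndpoint.ChatCompletions,
      completionWindow = CompletionWindow.TwentyFourHours,
      metadata = Map("job" -> "nightly-eval")
    )
    // later: poll the batch and, once it has completed, fetch the endpoint responses
    maybeBatch <- service.retrieveBatch(batch.id)
    responses  <- service.retrieveBatchResponses(batch.id)
  } yield (maybeBatch, responses)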

def createChatToolCompletion(messages: Seq[BaseMessage], tools: Seq[ChatCompletionTool], responseToolChoice: Option[String], settings: CreateChatCompletionSettings): Future[ChatToolCompletionResponse]

Creates a model response for the given chat conversation expecting a tool call.

Attributes

messages

A list of messages comprising the conversation so far.

responseToolChoice

Controls which (if any) function/tool is called by the model. Specifying a particular function forces the model to call that function (must be listed in tools). Otherwise, the default "auto" mode is used where the model can pick between generating a message or calling a function.

tools

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for.

Returns:

chat completion response
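
A hedged sketch (the concrete tool class, shown here as FunctionSpec, and its fields are assumptions; check io.cequence.openaiscala.domain for the exact ChatCompletionTool implementation in your version):

import io.cequence.openaiscala.domain._
import io.cequence.openaiscala.domain.settings.CreateChatCompletionSettings

// a single hypothetical function tool the model may call
val weatherTool = FunctionSpec(
  name = "get_weather",
  description = Some("Get the current weather for a city"),
  parameters = Map(
    "type" -> "object",
    "properties" -> Map("city" -> Map("type" -> "string")),
    "required" -> Seq("city")
  )
)

service
  .createChatToolCompletion(
    messages = Seq(UserMessage("What's the weather in Oslo?")),
    tools = Seq(weatherTool),
    responseToolChoice = None, // "auto" mode - the model decides whether to call the tool
    settings = CreateChatCompletionSettings(model = "gpt-4o") // settings field assumed
  )
  .map(println)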

See also:
def createFineTune(training_file: String, validation_file: Option[String], settings: CreateFineTuneSettings): Future[FineTuneJob]

Creates a job that fine-tunes a specified model from a given dataset. Response includes details of the enqueued job including job status and the name of the fine-tuned models once complete.

Attributes

training_file

The ID of an uploaded file that contains training data. See uploadFile for how to upload a file. Your dataset must be formatted as a JSONL file, where each training example is a JSON object with the keys "prompt" and "completion". Additionally, you must upload your file with the purpose fine-tune.

validation_file

The ID of an uploaded file that contains validation data. If you provide this file, the data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in the fine-tuning results file. Your train and validation data should be mutually exclusive. Your dataset must be formatted as a JSONL file, where each validation example is a JSON object with the keys "prompt" and "completion". Additionally, you must upload your file with the purpose fine-tune.

Returns:

fine tune response
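
A hedged sketch (the settings class location and its fields are assumptions; the file ID is a placeholder for a file already uploaded with purpose fine-tune):

import io.cequence.openaiscala.domain.settings.CreateFineTuneSettings

service
  .createFineTune(
    training_file = "file-abc123",   // placeholder ID of an uploaded training file
    validation_file = None,
    settings = CreateFineTuneSettings(model = "gpt-3.5-turbo") // field name assumed
  )
  .map(job => println(job.id)) // FineTuneJob.id assumed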

See also:
def createImage(prompt: String, settings: CreateImageSettings): Future[ImageInfo]

Creates an image given a prompt.

Attributes

prompt

A text description of the desired image(s). The maximum length is 1000 characters for dall-e-2 and 4000 characters for dall-e-3.

Returns:

image response (might contain multiple data items - one per image)
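
A hedged sketch (the settings class location and its fields are assumptions; check CreateImageSettings for the exact options such as model, size, and n):

import io.cequence.openaiscala.domain.settings.CreateImageSettings

service
  .createImage(
    prompt = "A watercolor painting of a lighthouse at dawn",
    settings = CreateImageSettings(model = Some("dall-e-3")) // field name and type assumed
  )
  .map(println) // prints the ImageInfo response (URLs or base64 data, depending on settings)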

See also:
def createImageEdit(prompt: String, image: File, mask: Option[File], settings: CreateImageEditSettings): Future[ImageInfo]

Creates an edited or extended image given an original image and a prompt.

Attributes

image

The image to edit. Must be a valid PNG file, less than 4MB, and square. If mask is not provided, image must have transparency, which will be used as the mask.

mask

An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where image should be edited. Must be a valid PNG file, less than 4MB, and have the same dimensions as image.

prompt

A text description of the desired image(s). The maximum length is 1000 characters.

Returns:

image response (might contain multiple data items - one per image)

See also:
def createImageVariation(image: File, settings: CreateImageEditSettings): Future[ImageInfo]

Creates a variation of a given image.

Attributes

image

The image to use as the basis for the variation(s). Must be a valid PNG file, less than 4MB, and square.

Returns:

image response (might contain multiple data items - one per image)

See also:
def createModeration(input: String, settings: CreateModerationSettings): Future[ModerationResponse]

Classifies if text violates OpenAI's Content Policy.

Attributes

input

The input text to classify

Returns:

moderation results
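
A hedged sketch (the settings class location and its defaults are assumptions):

import io.cequence.openaiscala.domain.settings.CreateModerationSettings

service
  .createModeration(
    input = "I want to hurt them.",
    settings = CreateModerationSettings() // default moderation model assumed
  )
  .map(println) // prints the ModerationResponse with per-category flags and scores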

See also:
def createRun(threadId: String, assistantId: String, instructions: Option[String], additionalInstructions: Option[String], additionalMessages: Seq[BaseMessage], tools: Seq[AssistantTool], responseToolChoice: Option[ToolChoice], settings: CreateRunSettings, stream: Boolean): Future[Run]

Creates a run for a specified thread using the given assistant.

Attributes

additionalInstructions

Optional. Appends additional instructions at the end of the instructions for the run. This is useful for modifying the behavior on a per-run basis without overriding other instructions.

additionalMessages

Optional. Adds additional messages to the thread before creating the run.

assistantId

The ID of the assistant to use to execute this run.

instructions

Optional. Overrides the instructions of the assistant. This is useful for modifying the behavior on a per-run basis.

responseToolChoice

Optional. Controls which (if any) tool is called by the model. Can be "none", "auto", "required", or a specific tool.

settings

Optional. Settings for creating the run, such as model, temperature, top_p, etc.

stream

Optional. If true, returns a stream of events that happen during the Run as server-sent events, terminating when the Run enters a terminal state with a data: [DONE] message.

threadId

The ID of the thread to run.

tools

Optional. Override the tools the assistant can use for this run. This is useful for modifying the behavior on a per-run basis.

Returns:

Future[Run] A future that resolves to a Run object.
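
A hedged end-to-end sketch of the assistants flow (the id fields, ChatRole.User, the CreateRunSettings defaults, and the settings package path are assumptions):

import io.cequence.openaiscala.domain._
import io.cequence.openaiscala.domain.settings.CreateRunSettings

val runFuture =
  for {
    assistant <- service.createAssistant(
      model = "gpt-4o",
      name = Some("Math tutor"),
      description = None,
      instructions = Some("You are a personal math tutor. Answer concisely."),
      tools = Nil,
      toolResources = None,
      metadata = Map.empty
    )
    thread <- service.createThread(messages = Nil, toolResources = Nil, metadata = Map.empty)
    _ <- service.createThreadMessage(
      threadId = thread.id,
      content = "What is 2 + 2?",
      role = ChatRole.User,
      attachments = Nil,
      metadata = Map.empty
    )
    run <- service.createRun(
      threadId = thread.id,
      assistantId = assistant.id,
      instructions = None,
      additionalInstructions = None,
      additionalMessages = Nil,
      tools = Nil,
      responseToolChoice = None,
      settings = CreateRunSettings(), // defaults assumed
      stream = false
    )
  } yield run
// poll retrieveRun(thread.id, run.id) until the run reaches a terminal state,
// then read the reply via listThreadMessages(thread.id, ...)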

See also:
def createThread(messages: Seq[ThreadMessage], toolResources: Seq[AssistantToolResource], metadata: Map[String, String]): Future[Thread]

Creates a thread.

Attributes

messages

A list of messages to start the thread with.

metadata

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

toolResources

A set of resources that are made available to the assistant's tools in this thread. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.

Returns:

A thread object.

See also:
def createThreadAndRun(assistantId: String, thread: Option[ThreadAndRun], instructions: Option[String], tools: Seq[AssistantTool], toolResources: Option[ThreadAndRunToolResource], toolChoice: Option[ToolChoice], settings: CreateThreadAndRunSettings, stream: Boolean): Future[Run]

Attributes

assistantId

The ID of the assistant to use to execute this run.

instructions

Override the default system message of the assistant. This is useful for modifying the behavior on a per-run basis.

stream

If true, returns a stream of events that happen during the Run as server-sent events, terminating when the Run enters a terminal state with a data: [DONE] message.

thread

The ID of the thread to run.

toolChoice

Controls which (if any) tool is called by the model. none means the model will not call any tools and instead generates a message. auto is the default value and means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools before responding to the user. Specifying a particular tool like {"type": "file_search"} or {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.

toolResources

A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.

tools

Override the tools the assistant can use for this run. This is useful for modifying the behavior on a per-run basis.

def createThreadMessage(threadId: String, content: String, role: ChatRole, attachments: Seq[Attachment], metadata: Map[String, String]): Future[ThreadFullMessage]

Creates a thread message.

Attributes

attachments

A list of files attached to the message, and the tools they should be added to.

content

The content of the message.

metadata

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

role

The role of the entity that is creating the message. Currently only user is supported.

threadId

The ID of the thread to create a message for.

Returns:

A thread message object.

See also:
def createVectorStore(fileIds: Seq[String], name: Option[String], metadata: Map[String, Any]): Future[VectorStore]

Create a vector store.

Attributes

fileIds

A list of File IDs that the vector store should use (optional). Useful for tools like file_search that can access files.

metadata

Set of key-value pairs that can be attached to the vector store (optional). Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

name

The name of the vector store.
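
A hedged sketch (the id field on VectorStore and the ChunkingStrategy member name are assumptions; the file IDs are placeholders for already-uploaded files):

import io.cequence.openaiscala.domain._

val storeFuture =
  for {
    store <- service.createVectorStore(
      fileIds = Seq("file-abc123"), // placeholder file ID
      name = Some("product-docs"),
      metadata = Map.empty
    )
    storeFile <- service.createVectorStoreFile(
      vectorStoreId = store.id,
      fileId = "file-def456",                                  // placeholder file ID
      chunkingStrategy = ChunkingStrategy.AutoChunkingStrategy // member name assumed
    )
  } yield (store, storeFile)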

See also:
def createVectorStoreFile(vectorStoreId: String, fileId: String, chunkingStrategy: ChunkingStrategy): Future[VectorStoreFile]

Creates a vector store file.

Attributes

chunkingStrategy

The chunking strategy to use for this request

fileId

The ID of the file to use for this request

vectorStoreId

The ID of the vector store to use for this request

Returns:

vector store file

See also:
def deleteAssistant(assistantId: String): Future[DeleteResponse]

Delete an assistant.

Attributes

assistantId

The ID of the assistant to delete. OpenAI Doc

def deleteAssistantFile(assistantId: String, fileId: String): Future[DeleteResponse]

Delete an assistant file.

Attributes

assistantId

The ID of the assistant that the file belongs to.

fileId

The ID of the file to delete. OpenAI Doc

def deleteFile(fileId: String): Future[DeleteResponse]

Delete a file.

Attributes

fileId

The ID of the file to use for this request

Returns:

enum indicating whether the file was deleted

See also:
def deleteFineTuneModel(modelId: String): Future[DeleteResponse]

Delete a fine-tuned model. You must have the Owner role in your organization.

Attributes

modelId

The ID of the file to use for this request

Returns:

enum indicating whether the model was deleted

See also:
def deleteThread(threadId: String): Future[DeleteResponse]

Deletes a thread.

Attributes

threadId

The ID of the thread to delete.

Returns:

Deletion status

See also:
def deleteThreadMessage(threadId: String, messageId: String): Future[DeleteResponse]

Deletes a thread message.

Attributes

messageId

The ID of the message to delete.

threadId

The ID of the thread to which this message belongs.

Returns:

Deletion status.

See also:
def deleteVectorStore(vectorStoreId: String): Future[DeleteResponse]

Deletes a vector store.

Attributes

vectorStoreId

The ID of the vector store to use for this request

Returns:

enum indicating whether the vector store was deleted

See also:
def deleteVectorStoreFile(vectorStoreId: String, fileId: String): Future[DeleteResponse]

Deletes a vector store file.

Attributes

fileId

The ID of the file to use for this request

vectorStoreId

The ID of the vector store to use for this request

Returns:

enum indicating whether the vector store file was deleted

See also:
def listAssistants(pagination: Pagination, order: Option[SortOrder]): Future[Seq[Assistant]]

Returns a list of assistants.

Attributes

after

A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list.

before

A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.

limit

A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.

order

Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.

See also:
def listBatches(pagination: Pagination, order: Option[SortOrder]): Future[Seq[Batch]]

Lists all batches that belong to the user's organization.

Attributes

order

Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.

pagination
  • limit - A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.
  • after - A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list.
  • before - A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.
See also:
def listFiles: Future[Seq[FileInfo]]

Returns a list of files that belong to the user's organization.

Attributes

Returns:

file infos

See also:
def listFineTuneCheckpoints(fineTuneId: String, after: Option[String], limit: Option[Int]): Future[Option[Seq[FineTuneCheckpoint]]]

List checkpoints for a fine-tuning job.

Attributes

after

Identifier for the last checkpoint ID from the previous pagination request.

fineTuneId

The ID of the fine-tune job to get checkpoints for.

limit

Number of checkpoints to retrieve.

Returns:

A list of fine-tuning checkpoint objects for a fine-tuning job.

See also:
def listFineTuneEvents(fineTuneId: String, after: Option[String], limit: Option[Int]): Future[Option[Seq[FineTuneEvent]]]

Get fine-grained status updates for a fine-tune job.

Attributes

fineTuneId

The ID of the fine-tune job to get events for.

Returns:

fine tune events or None if not found

See also:
def listFineTunes(after: Option[String], limit: Option[Int]): Future[Seq[FineTuneJob]]

List your organization's fine-tuning jobs.

Attributes

after

Identifier for the last job from the previous pagination request.

limit

Number of fine-tuning jobs to retrieve.

Returns:

fine tunes

See also:
def listRunSteps(threadId: String, runId: String, pagination: Pagination, order: Option[SortOrder]): Future[Seq[RunStep]]

Returns a list of run steps belonging to a run.

Attributes

order

Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.

runId

The ID of the run the run steps belong to.

threadId

The ID of the thread the run and run step belong to.

Returns:

A list of run step objects.

def listRuns(threadId: String, pagination: Pagination, order: Option[SortOrder]): Future[Seq[Run]]

Returns a list of runs belonging to a thread.

Attributes

order

Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.

threadId

The ID of the thread the run belongs to.

Returns:

A list of run objects.

def listThreadMessageFiles(threadId: String, messageId: String, pagination: Pagination, order: Option[SortOrder]): Future[Seq[ThreadMessageFile]]

Returns a list of message files.

Attributes

after

A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list.

before

A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.

limit

A limit on the number of objects to be returned. Limit can range between 1 and 100. Defaults to 20.

messageId

The ID of the message the file belongs to.

order

Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order. Defaults to desc

threadId

The ID of the thread that the message and files belong to.

Returns:

thread message files

See also:
def listThreadMessages(threadId: String, pagination: Pagination, order: Option[SortOrder]): Future[Seq[ThreadFullMessage]]

Returns a list of messages for a given thread.

Attributes

after

A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list.

before

A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.

limit

A limit on the number of objects to be returned. Limit can range between 1 and 100. Defaults to 20.

order

Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order. Defaults to desc

threadId

The ID of the thread the messages belong to.

Returns:

thread messages

See also:
def listVectorStoreFiles(vectorStoreId: String, pagination: Pagination, order: Option[SortOrder], filter: Option[VectorStoreFileStatus]): Future[Seq[VectorStoreFile]]

Returns a list of vector store files.

Attributes

filter

Filter by the status of the vector store file. Defaults to None

order

Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order. Defaults to desc

pagination

A limit on the number of objects to be returned. Limit can range between 1 and 100. Defaults to 20.

vectorStoreId

The ID of the vector store to use for this request

Returns:

vector store files

See also:
def listVectorStores(pagination: Pagination, order: Option[SortOrder]): Future[Seq[VectorStore]]

Returns a list of vector stores.

Attributes

Returns:

vector stores

See also:
def modifyAssistant(assistantId: String, model: Option[String], name: Option[String], description: Option[String], instructions: Option[String], tools: Seq[AssistantTool], fileIds: Seq[String], metadata: Map[String, String]): Future[Option[Assistant]]

Modifies an assistant.

Attributes

description

The description of the assistant. The maximum length is 512 characters.

fileIds

A list of File IDs attached to this assistant. There can be a maximum of 20 files attached to the assistant. Files are ordered by their creation date in ascending order. If a file was previously attached to the list but does not show up in the list, it will be deleted from the assistant.

instructions

The system instructions that the assistant uses. The maximum length is 32768 characters.

metadata

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long. OpenAI Doc

model

ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.

name

The name of the assistant. The maximum length is 256 characters.

tools

A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, retrieval, or function.

def modifyRun(threadId: String, runId: String, metadata: Map[String, String]): Future[Run]

Modifies a run.

Attributes

metadata

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

runId

The ID of the run to modify.

threadId

The ID of the thread that was run.

Returns:

The modified run object matching the specified ID.

def modifyThread(threadId: String, metadata: Map[String, String]): Future[Option[Thread]]

Modifies a thread.

Attributes

metadata

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

threadId

The ID of the thread to modify. Only the metadata can be modified.

Returns:

The modified thread object matching the specified ID.

See also:
def modifyThreadMessage(threadId: String, messageId: String, metadata: Map[String, String]): Future[Option[ThreadFullMessage]]

Modifies a thread message.

Attributes

messageId

The ID of the message to modify.

metadata

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

threadId

The ID of the thread to which this message belongs.

Returns:

The modified message object.

See also:
def modifyVectorStore(vectorStoreId: String, name: Option[String], metadata: Map[String, Any]): Future[VectorStore]

Modifies a vector store.

Attributes

metadata

A map of metadata to update (optional).

name

The new name of the vector store (optional).

vectorStoreId

The ID of the vector store to modify.

Returns:

A Future containing the modified VectorStore.

See also:
def retrieveAssistant(assistantId: String): Future[Option[Assistant]]

Retrieves an assistant.

Attributes

assistantId

The ID of the assistant to retrieve. OpenAI Doc

def retrieveBatch(batchId: String): Future[Option[Batch]]

Retrieves a batch using its ID.

Attributes

batchId

The ID of the batch to retrieve. This is a unique identifier for the batch.

Returns:

Future[Option[Batch]] A future that resolves to an Option containing the Batch object. Returns None if the batch with the specified ID does not exist. OpenAI Doc

def retrieveBatchFile(batchId: String): Future[Option[FileInfo]]

Retrieves an output batch file using the ID of the batch it belongs to.

Attributes

batchId

The ID of the output batch file to retrieve. This is a unique identifier for the batch.

Returns:

Future[Option[FileInfo]] A future that resolves to an Option containing the FileInfo object. Returns None if the batch with the specified ID does not exist. OpenAI Doc

def retrieveBatchFileContent(batchId: String): Future[Option[String]]

Retrieves content of output batch file using the ID of the batch it belongs to.

Attributes

batchId

The ID of the batch whose output file's content is to be retrieved.

Returns:

Future[Option[String]] A future that resolves to an Option containing the output file content. Returns None if the batch with the specified ID does not exist. OpenAI Doc

def retrieveBatchResponses(batchId: String): Future[Option[CreateBatchResponses]]

Retrieves OpenAI endpoint responses (for chat completion or embeddings, see: BatchEndpoint) using the ID of the batch they belong to.

Attributes

batchId

The ID of the batch whose endpoint responses are to be retrieved.

Returns:

Future[Option[CreateBatchResponses]] A future that resolves to an Option containing the CreateBatchResponses object. Returns None if the batch with the specified ID does not exist. OpenAI Doc

def retrieveFile(fileId: String): Future[Option[FileInfo]]

Returns information about a specific file.

Attributes

fileId

The ID of the file to use for this request

Returns:

file info or None if not found

See also:
def retrieveFileContent(fileId: String): Future[Option[String]]

Returns the contents of the specified file.

Attributes

fileId

The ID of the file to use for this request

Returns:

file content or None if not found

See also:
def retrieveFileContentAsSource(fileId: String): Future[Option[Source[ByteString, _]]]

Returns the contents of the specified file as an Akka source.

Attributes

fileId

The ID of the file to use for this request

Returns:

file content or None if not found

See also:
def retrieveFineTune(fineTuneId: String): Future[Option[FineTuneJob]]

Gets info about the fine-tune job.

Attributes

fineTuneId

The ID of the fine-tune job

Returns:

fine tune info

See also:
def retrieveModel(modelId: String): Future[Option[ModelInfo]]

Retrieves a model instance, providing basic information about the model such as the owner and permissions.

Attributes

modelId

The ID of the model to use for this request

Returns:

model or None if not found

See also:
def retrieveRun(threadId: String, runId: String): Future[Option[Run]]
def retrieveRunStep(threadID: String, runId: String, stepId: String): Future[Option[RunStep]]

Retrieves a run step.

Attributes

runId

The ID of the run to which the run step belongs.

stepId

The ID of the run step to retrieve.

threadID

The ID of the thread to which the run and run step belong.

Returns:

The run step object matching the specified ID.

def retrieveThread(threadId: String): Future[Option[Thread]]

Retrieves a thread.

Attributes

threadId

The ID of the thread to retrieve.

Returns:

The thread object matching the specified ID.

See also:
def retrieveThreadMessage(threadId: String, messageId: String): Future[Option[ThreadFullMessage]]

Retrieves a thread message.

Attributes

messageId

The ID of the message to retrieve.

threadId

The ID of the thread to which this message belongs.

Returns:

The message object matching the specified ID.

See also:
def retrieveThreadMessageFile(threadId: String, messageId: String, fileId: String): Future[Option[ThreadMessageFile]]

Retrieves a thread message file.

Attributes

fileId

The ID of the file being retrieved.

messageId

The ID of the message the file belongs to.

threadId

The ID of the thread to which the message and File belong.

Returns:

The thread message file object.

See also:
def retrieveVectorStore(vectorStoreId: String): Future[Option[VectorStore]]

Retrieves a vector store.

Attributes

vectorStoreId

The ID of the vector store to retrieve.

Returns:

A Future containing an Option of VectorStore. The Option will be None if the vector store is not found.

See also:
def retrieveVectorStoreFile(vectorStoreId: String, fileId: FileId): Future[VectorStoreFile]

Retrieves a vector store file.

Attributes

fileId

The ID of the file to retrieve.

vectorStoreId

The ID of the vector store to which the file belongs.

Returns:

A Future containing the VectorStoreFile matching the specified IDs.

See also:
def submitToolOutputs(threadId: String, runId: String, toolOutputs: Seq[AssistantToolOutput], stream: Boolean): Future[Run]

When a run has the status: "requires_action" and required_action.type is submit_tool_outputs, this endpoint can be used to submit the outputs from the tool calls once they're all completed. All outputs must be submitted in a single request.

Attributes

runId

The ID of the run that requires the tool output submission.

stream

If true, returns a stream of events that happen during the Run as server-sent events, terminating when the Run enters a terminal state with a data: [DONE] message.

threadId

The ID of the thread to which this run belongs.

toolOutputs

A list of tools for which the outputs are being submitted.

Returns:

The modified run object matching the specified ID.

def uploadBatchFile(file: File, displayFileName: Option[String]): Future[FileInfo]

Upload a file that contains requests to be batch-processed. Currently, the size of all the files uploaded by one organization can be up to 1 GB. Please contact us if you need to increase the storage limit.

Attributes

displayFileName

(Explicit) display file name; if not specified a full path is used instead.

file

JSON Lines file to be uploaded. Each line is a JSON record with:

  • "custom_id" field - request identifier used to match batch requests with their responses
  • "method" field - HTTP method to be used for the request (currently only POST is supported)
  • "url" field - OpenAI API relative URL to be used for the request (currently /v1/chat/completions and /v1/embeddings are supported)
  • "body" field - JSON record with model and messages fields that will be passed to the specified endpoint batch examples.
Returns:

file info

See also:
def uploadFile(file: File, displayFileName: Option[String], purpose: FileUploadPurpose): Future[FileInfo]

Upload a file that contains document(s) to be used across various endpoints/features. Currently, the size of all the files uploaded by one organization can be up to 1 GB. Please contact us if you need to increase the storage limit.

Attributes

displayFileName

(Explicit) display file name; if not specified a full path is used instead.

file

Name of the JSON Lines file to be uploaded. If the purpose is set to "fine-tune", each line is a JSON record with "prompt" and "completion" fields representing your training examples.

Returns:

file info
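
A hedged sketch (the FileUploadPurpose member name and the id field on FileInfo are assumptions):

import java.io.File
import io.cequence.openaiscala.domain.FileUploadPurpose

service
  .uploadFile(new File("/tmp/knowledge.md"), displayFileName = None, purpose = FileUploadPurpose.assistants)
  .flatMap(fileInfo => service.retrieveFileContent(fileInfo.id))
  .map(content => println(content.getOrElse("<no content>")))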

See also:

Deprecated methods

@Deprecated
def createChatFunCompletion(messages: Seq[BaseMessage], functions: Seq[ChatCompletionTool], responseFunctionName: Option[String], settings: CreateChatCompletionSettings): Future[ChatFunCompletionResponse]

Creates a model response for the given chat conversation expecting a function call.

Attributes

functions

A list of functions the model may generate JSON inputs for.

messages

A list of messages comprising the conversation so far.

responseFunctionName

If specified it forces the model to respond with a call to that function (must be listed in functions). Otherwise, the default "auto" mode is used where the model can pick between generating a message or calling a function.

Returns:

chat completion response

See also:

OpenAI Doc Deprecated: use [[OpenAIService.createChatToolCompletion]] instead.

Deprecated
true
@Deprecated
def createEdit(input: String, instruction: String, settings: CreateEditSettings): Future[TextEditResponse]

Creates a new edit for the provided input, instruction, and parameters.

Attributes

input

The input text to use as a starting point for the edit.

instruction

The instruction that tells the model how to edit the prompt.

Returns:

text edit response

See also:
Deprecated
true

Inherited methods

def close(): Unit

Closes the underlying ws client, and releases all its resources.

Attributes

Inherited from:
CloseableService

def createChatCompletion(messages: Seq[BaseMessage], settings: CreateChatCompletionSettings): Future[ChatCompletionResponse]

Creates a model response for the given chat conversation. Note that this is defined already in OpenAIChatCompletionService, but it is repeated here for clarity.

Attributes

messages

A list of messages comprising the conversation so far.

Returns:

chat completion response
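
A hedged sketch (the message case classes, the settings package path, and the response field names are assumptions based on the standard client domain model):

import io.cequence.openaiscala.domain.{SystemMessage, UserMessage}
import io.cequence.openaiscala.domain.settings.CreateChatCompletionSettings

service
  .createChatCompletion(
    messages = Seq(
      SystemMessage("You are a helpful assistant."),
      UserMessage("What is 2 + 2?")
    ),
    settings = CreateChatCompletionSettings(model = "gpt-4o") // field name assumed
  )
  .map(response => println(response.choices.head.message.content)) // field names assumed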

See also:
Inherited from:
OpenAICoreService
def createCompletion(prompt: String, settings: CreateCompletionSettings): Future[TextCompletionResponse]

Creates a completion for the provided prompt and parameters. Note that this is defined already in OpenAICompletionService, but it is repeated here for clarity.

Attributes

prompt

The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays. Note that <|endoftext|> is the document separator that the model sees during training, so if a prompt is not specified the model will generate as if from the beginning of a new document.

Returns:

text completion response

See also:
Inherited from:
OpenAICoreService
def createEmbeddings(input: Seq[String], settings: CreateEmbeddingsSettings): Future[EmbeddingResponse]

Creates an embedding vector representing the input text.

Attributes

input

Input text to get embeddings for, encoded as a string or array of tokens. To get embeddings for multiple inputs in a single request, pass an array of strings or array of token arrays. Each input must not exceed 8192 tokens in length.

Returns:

list of embeddings inside an envelope
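
A hedged sketch (the settings package path, its model field, and the response field names are assumptions):

import io.cequence.openaiscala.domain.settings.CreateEmbeddingsSettings

service
  .createEmbeddings(
    input = Seq("The food was delicious.", "The service was slow."),
    settings = CreateEmbeddingsSettings(model = "text-embedding-3-small") // field name assumed
  )
  .map(response => response.data.foreach(e => println(e.embedding.take(5)))) // field names assumed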

See also:
Inherited from:
OpenAICoreService
def listModels: Future[Seq[ModelInfo]]

Lists the currently available models, and provides basic information about each one such as the owner and availability.

Attributes

Returns:

models

See also:
Inherited from:
OpenAICoreService

Inherited fields

protected val configFileName: String

Attributes

Inherited from:
OpenAIServiceConsts
protected val configPrefix: String

Attributes

Inherited from:
OpenAIServiceConsts
protected val defaultCoreUrl: String

Attributes

Inherited from:
OpenAIServiceConsts