CompletionsRequestBody

sttp.openai.requests.completions.CompletionsRequestBody

Attributes

Supertypes
class Object
trait Matchable
class Any
Self type
CompletionsRequestBody.type

Members list

Type members

Classlikes

sealed abstract class CompletionModel(val value: String)

Attributes

Companion
object
Supertypes
class Object
trait Matchable
class Any
Known subtypes
object TextAda001
object TextCurie001
(further model objects omitted)

object CompletionModel

Attributes

Companion
class
Supertypes
trait Sum
trait Mirror
class Object
trait Matchable
class Any
Self type
CompletionModel.type
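The pattern above can be sketched in a self-contained form: a sealed abstract class carrying the wire-format model ID, with each model as a singleton object. The two object names are copied from the "Known subtypes" list; the `value` strings are assumed to be the matching OpenAI model IDs, and the full set of objects lives in the library.

```scala
// Self-contained sketch of the CompletionModel pattern above. The `value`
// strings are assumed to be the corresponding OpenAI model IDs.
sealed abstract class CompletionModel(val value: String)
case object TextAda001 extends CompletionModel("text-ada-001")
case object TextCurie001 extends CompletionModel("text-curie-001")

// Exhaustive matching works because the hierarchy is sealed: the compiler
// warns if a model object is left unhandled.
def describe(m: CompletionModel): String = m match {
  case TextAda001   => s"ada (${m.value})"
  case TextCurie001 => s"curie (${m.value})"
}
```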
case class CompletionsBody(model: CompletionModel, prompt: Option[Prompt], suffix: Option[String], maxTokens: Option[Int], temperature: Option[Double], topP: Option[Double], n: Option[Int], logprobs: Option[Int], echo: Option[Boolean], stop: Option[Stop], presencePenalty: Option[Double], frequencyPenalty: Option[Double], bestOf: Option[Int], logitBias: Option[Map[String, Float]], user: Option[String])

Value parameters

bestOf

Generates best_of completions server-side and returns the "best" (the one with the highest log probability per token).

echo

Echo back the prompt in addition to the completion.

frequencyPenalty

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

logitBias

Modify the likelihood of specified tokens appearing in the completion.

logprobs

Include the log probabilities on the logprobs most likely tokens, as well as the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens.

maxTokens

The maximum number of tokens to generate in the completion.

model

ID of the model to use.

n

How many completions to generate for each prompt.

presencePenalty

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

prompt

The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.

stop

Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.

suffix

The suffix that comes after a completion of inserted text.

temperature

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

topP

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

user

A unique identifier representing your end-user, which can help OpenAI monitor and detect abuse. For more information, visit: https://platform.openai.com/docs/api-reference/completions/create

Attributes

Companion
object
Supertypes
trait Serializable
trait Product
trait Equals
class Object
trait Matchable
class Any

object CompletionsBody

Attributes

Companion
class
Supertypes
trait Product
trait Mirror
class Object
trait Matchable
class Any
Self type
CompletionsBody.type
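A minimal construction of a request body can be sketched as follows. To keep the snippet self-contained, it uses trimmed stand-in definitions rather than the library's own classes: the field names are copied from the CompletionsBody signature above, but only a few of its fifteen fields are mirrored here, and the None defaults are this sketch's convenience (the real case class is shown without defaults).

```scala
// Stand-in definitions so this sketch compiles on its own; in the library
// these come from sttp.openai.requests.completions.CompletionsRequestBody.
sealed abstract class CompletionModel(val value: String)
case object TextCurie001 extends CompletionModel("text-curie-001")
sealed trait Prompt
case class SinglePrompt(value: String) extends Prompt

// Trimmed mirror of CompletionsBody: a handful of the optional fields,
// with None defaults added here for brevity.
case class CompletionsBody(
  model: CompletionModel,
  prompt: Option[Prompt] = None,
  maxTokens: Option[Int] = None,
  temperature: Option[Double] = None,
  n: Option[Int] = None
)

val body = CompletionsBody(
  model = TextCurie001,
  prompt = Some(SinglePrompt("Write a haiku about Scala")),
  maxTokens = Some(32),    // cap the completion length
  temperature = Some(0.2)  // low temperature: more focused, deterministic output
)
```

Wrapping every tuning knob in Option mirrors the API's behavior: an absent field falls back to the server-side default rather than sending an explicit value.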
case class MultiplePrompt(values: Seq[String]) extends Prompt

Attributes

Supertypes
trait Serializable
trait Product
trait Equals
trait Prompt
class Object
trait Matchable
class Any
object Prompt

Attributes

Companion
trait
Supertypes
trait Sum
trait Mirror
class Object
trait Matchable
class Any
Self type
Prompt.type
sealed trait Prompt

Attributes

Companion
object
Supertypes
class Object
trait Matchable
class Any
Known subtypes
class SinglePrompt
class MultiplePrompt
case class SinglePrompt(value: String) extends Prompt

Attributes

Supertypes
trait Serializable
trait Product
trait Equals
trait Prompt
class Object
trait Matchable
class Any
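The Prompt hierarchy above distinguishes a single prompt string from a batch of prompts. A self-contained sketch of its use: the two case classes are copied from this page, while the helper function is illustrative only and is not the library's JSON encoding.

```scala
// The Prompt ADT as shown on this page.
sealed trait Prompt
case class SinglePrompt(value: String) extends Prompt
case class MultiplePrompt(values: Seq[String]) extends Prompt

// Illustrative helper: flatten either shape to the list of prompt strings.
// In the library, these variants are encoded to JSON as a string vs. an
// array of strings, matching the `prompt` parameter documented above.
def promptStrings(p: Prompt): Seq[String] = p match {
  case SinglePrompt(v)    => Seq(v)
  case MultiplePrompt(vs) => vs
}
```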