Interface TextInferenceConfig.Builder
All Superinterfaces:
Buildable, CopyableBuilder<TextInferenceConfig.Builder,TextInferenceConfig>, SdkBuilder<TextInferenceConfig.Builder,TextInferenceConfig>, SdkPojo

Enclosing class:
TextInferenceConfig
public static interface TextInferenceConfig.Builder extends SdkPojo, CopyableBuilder<TextInferenceConfig.Builder,TextInferenceConfig>
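As with other SdkBuilder implementations, instances are obtained from a builder factory on the enclosing class and finalized with build(). Below is a minimal usage sketch; it assumes the conventional TextInferenceConfig.builder() static factory and the software.amazon.awssdk.services.bedrockagentruntime model package, so verify both against your SDK version:

    import java.util.List;
    import software.amazon.awssdk.services.bedrockagentruntime.model.TextInferenceConfig;

    public class TextInferenceConfigUsage {
        public static void main(String[] args) {
            // Build an immutable TextInferenceConfig through the fluent builder.
            TextInferenceConfig config = TextInferenceConfig.builder()
                    .maxTokens(512)                         // cap on generated output tokens
                    .temperature(0.2f)                      // lower -> more deterministic output
                    .topP(0.9f)                             // probability-mass threshold for candidate tokens
                    .stopSequences(List.of("Observation:")) // stop generating at this marker
                    .build();

            System.out.println(config);
        }
    }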
Method Summary

All methods in this interface are abstract instance methods with return type TextInferenceConfig.Builder.

maxTokens(Integer maxTokens)
    The maximum number of tokens to generate in the output text.

stopSequences(String... stopSequences)
    A list of sequences of characters that, if generated, will cause the model to stop generating further tokens.

stopSequences(Collection<String> stopSequences)
    A list of sequences of characters that, if generated, will cause the model to stop generating further tokens.

temperature(Float temperature)
    Controls the randomness of text generated by the language model, influencing how much the model sticks to the most predictable next words versus exploring more surprising options.

topP(Float topP)
    A probability distribution threshold which controls what the model considers for the set of possible next tokens.
Methods inherited from interface software.amazon.awssdk.utils.builder.CopyableBuilder
copy

Methods inherited from interface software.amazon.awssdk.utils.builder.SdkBuilder
applyMutation, build

Methods inherited from interface software.amazon.awssdk.core.SdkPojo
equalsBySdkFields, sdkFields
Method Detail
maxTokens
TextInferenceConfig.Builder maxTokens(Integer maxTokens)
The maximum number of tokens to generate in the output text. Do not rely on the minimum of 0 or the maximum of 65536 shown here; these limits are arbitrary placeholders, so consult the limits defined by your specific model for actual values.
Parameters:
maxTokens - The maximum number of tokens to generate in the output text. Do not rely on the minimum of 0 or the maximum of 65536 shown here; these limits are arbitrary placeholders, so consult the limits defined by your specific model for actual values.
Returns:
Returns a reference to this object so that method calls can be chained together.
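Because the stated bounds are placeholders, a caller may want to clamp the requested budget to the limit of the model actually in use. A sketch under that assumption, reusing the imports from the earlier example (MODEL_MAX_TOKENS is a hypothetical application-side constant, not an SDK value):

    // Hypothetical cap for the chosen model; consult the model's documentation.
    static final int MODEL_MAX_TOKENS = 4096;

    static TextInferenceConfig.Builder applyTokenBudget(TextInferenceConfig.Builder builder,
                                                        int requestedTokens) {
        // Clamp the request so it never exceeds the model's real limit.
        return builder.maxTokens(Math.min(requestedTokens, MODEL_MAX_TOKENS));
    }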
stopSequences
TextInferenceConfig.Builder stopSequences(Collection<String> stopSequences)
A list of sequences of characters that, if generated, will cause the model to stop generating further tokens. Do not rely on the minimum length of 1 or the maximum length of 1000 shown here; these limits are arbitrary placeholders, so consult the limits defined by your specific model for actual values.
Parameters:
stopSequences - A list of sequences of characters that, if generated, will cause the model to stop generating further tokens. Do not rely on the minimum length of 1 or the maximum length of 1000 shown here; these limits are arbitrary placeholders, so consult the limits defined by your specific model for actual values.
Returns:
Returns a reference to this object so that method calls can be chained together.
stopSequences
TextInferenceConfig.Builder stopSequences(String... stopSequences)
A list of sequences of characters that, if generated, will cause the model to stop generating further tokens. Do not rely on the minimum length of 1 or the maximum length of 1000 shown here; these limits are arbitrary placeholders, so consult the limits defined by your specific model for actual values.
Parameters:
stopSequences - A list of sequences of characters that, if generated, will cause the model to stop generating further tokens. Do not rely on the minimum length of 1 or the maximum length of 1000 shown here; these limits are arbitrary placeholders, so consult the limits defined by your specific model for actual values.
Returns:
Returns a reference to this object so that method calls can be chained together.
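Both overloads populate the same field; the varargs form is convenient for inline literals, while the Collection form suits sequences assembled at runtime. A brief sketch, reusing the imports from the earlier example (the sequence values are illustrative only):

    // Inline literals: the varargs overload.
    TextInferenceConfig inline = TextInferenceConfig.builder()
            .stopSequences("Observation:", "\n\nHuman:")
            .build();

    // Sequences assembled at runtime: the Collection overload.
    List<String> loaded = List.of("Observation:", "\n\nHuman:");
    TextInferenceConfig fromCollection = TextInferenceConfig.builder()
            .stopSequences(loaded)
            .build();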
temperature
TextInferenceConfig.Builder temperature(Float temperature)
Controls the randomness of text generated by the language model, influencing how much the model sticks to the most predictable next words versus exploring more surprising options. A lower temperature value (e.g. 0.2 or 0.3) makes model outputs more deterministic or predictable, while a higher temperature (e.g. 0.8 or 0.9) makes the outputs more creative or unpredictable.
Parameters:
temperature - Controls the randomness of text generated by the language model, influencing how much the model sticks to the most predictable next words versus exploring more surprising options. A lower temperature value (e.g. 0.2 or 0.3) makes model outputs more deterministic or predictable, while a higher temperature (e.g. 0.8 or 0.9) makes the outputs more creative or unpredictable.
Returns:
Returns a reference to this object so that method calls can be chained together.
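For illustration, the two configurations below differ only in temperature; the values mirror the ranges mentioned above and are not prescriptive:

    // Favors the most probable next tokens: repeatable, focused output.
    TextInferenceConfig deterministic = TextInferenceConfig.builder()
            .temperature(0.2f)
            .build();

    // Samples more freely from less probable tokens: varied, creative output.
    TextInferenceConfig exploratory = TextInferenceConfig.builder()
            .temperature(0.9f)
            .build();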
topP
TextInferenceConfig.Builder topP(Float topP)
A probability distribution threshold which controls what the model considers for the set of possible next tokens. The model will only consider the top p% of the probability distribution when generating the next token.
Parameters:
topP - A probability distribution threshold which controls what the model considers for the set of possible next tokens. The model will only consider the top p% of the probability distribution when generating the next token.
Returns:
Returns a reference to this object so that method calls can be chained together.
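For example, a topP of 0.9 restricts sampling to the smallest set of candidate tokens whose cumulative probability reaches 90%; tokens outside that set are ignored. A brief sketch (as a general guideline, it is common practice to tune either temperature or topP rather than both at once):

    // Only tokens inside the top 90% of cumulative probability mass are considered.
    TextInferenceConfig nucleusSampling = TextInferenceConfig.builder()
            .topP(0.9f)
            .build();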