Builder for LlmInference.LlmInferenceOptions.
Public Constructors

| Constructor |
|---|
| Builder() |

Public Methods

| Return type | Method and description |
|---|---|
| final LlmInference.LlmInferenceOptions | build(): Validates and builds the LlmInference.LlmInferenceOptions instance. |
| abstract LlmInference.LlmInferenceOptions.Builder | setErrorListener(ErrorListener listener): Sets the error listener to invoke with the async API. |
| abstract LlmInference.LlmInferenceOptions.Builder | setLoraPath(String loraPath): Sets the absolute path to the LoRA model asset bundle stored locally on the device. |
| abstract LlmInference.LlmInferenceOptions.Builder | setMaxTokens(int maxTokens): Configures the total number of tokens for input and output. |
| abstract LlmInference.LlmInferenceOptions.Builder | setModelPath(String modelPath): Sets the model path for the text generator task. |
| abstract LlmInference.LlmInferenceOptions.Builder | setRandomSeed(int randomSeed): Configures the random seed for sampling tokens. |
| abstract LlmInference.LlmInferenceOptions.Builder | setResultListener(ProgressListener<String> listener): Sets the result listener to invoke with the async API. |
| abstract LlmInference.LlmInferenceOptions.Builder | setTemperature(float temperature): Configures randomness when decoding the next token. |
| abstract LlmInference.LlmInferenceOptions.Builder | setTopK(int topK): Configures the top K number of tokens to be sampled from for each decoding step. |
Inherited Methods
Public Constructors
public Builder ()
Public Methods
public final LlmInference.LlmInferenceOptions build ()
Validates and builds the LlmInference.LlmInferenceOptions instance.
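The validate-then-build behavior can be sketched with a simplified, hypothetical builder. This is not the MediaPipe implementation, and the field set and defaults shown here are invented for illustration; it only demonstrates how build() can validate required options before constructing an immutable instance.

```java
// Hypothetical, simplified sketch of a validating builder; NOT the
// actual MediaPipe source. Field names and defaults are illustrative.
public class BuilderDemo {
    static final class Options {
        final String modelPath;
        final int maxTokens;
        private Options(Builder b) {
            this.modelPath = b.modelPath;
            this.maxTokens = b.maxTokens;
        }
    }

    static final class Builder {
        private String modelPath;
        private int maxTokens = 512; // illustrative default, not MediaPipe's

        Builder setModelPath(String modelPath) { this.modelPath = modelPath; return this; }
        Builder setMaxTokens(int maxTokens) { this.maxTokens = maxTokens; return this; }

        // Validates required fields before constructing the immutable options.
        Options build() {
            if (modelPath == null) throw new IllegalStateException("modelPath must be set");
            if (maxTokens <= 0) throw new IllegalArgumentException("maxTokens must be positive");
            return new Options(this);
        }
    }

    public static void main(String[] args) {
        Options opts = new Builder()
                .setModelPath("/data/local/tmp/model.bin")
                .setMaxTokens(256)
                .build();
        System.out.println(opts.maxTokens); // prints 256
    }
}
```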
public abstract LlmInference.LlmInferenceOptions.Builder setErrorListener (ErrorListener listener)
Sets the error listener to invoke with the async API.
Parameters
| Parameter |
|---|
| listener |
public abstract LlmInference.LlmInferenceOptions.Builder setLoraPath (String loraPath)
The absolute path to the LoRA model asset bundle stored locally on the device. This is only compatible with GPU models.
Parameters
| Parameter |
|---|
| loraPath |
public abstract LlmInference.LlmInferenceOptions.Builder setMaxTokens (int maxTokens)
Configures the total number of tokens for input and output.
Parameters
| Parameter |
|---|
| maxTokens |
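Because the limit covers input and output combined, the tokens available for the response shrink as the prompt grows. The arithmetic is trivial but worth making explicit; the helper name below is hypothetical, not part of the API.

```java
// Illustrative arithmetic only: maxTokens is a shared budget for the
// prompt plus the generated response. The helper name is hypothetical.
public class TokenBudget {
    static int remainingOutputTokens(int maxTokens, int promptTokens) {
        return Math.max(0, maxTokens - promptTokens);
    }

    public static void main(String[] args) {
        // With maxTokens = 512 and a 200-token prompt, at most 312
        // tokens remain for the generated output.
        System.out.println(remainingOutputTokens(512, 200)); // prints 312
    }
}
```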
public abstract LlmInference.LlmInferenceOptions.Builder setModelPath (String modelPath)
Sets the model path for the text generator task.
Parameters
| Parameter |
|---|
| modelPath |
public abstract LlmInference.LlmInferenceOptions.Builder setRandomSeed (int randomSeed)
Configures the random seed for sampling tokens.
Parameters
| Parameter |
|---|
| randomSeed |
public abstract LlmInference.LlmInferenceOptions.Builder setResultListener (ProgressListener<String> listener)
Sets the result listener to invoke with the async API.
Parameters
| Parameter |
|---|
| listener |
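With the async API, the listener receives partial results as they are generated rather than one final string. The sketch below simulates that flow; the (partialResult, done) callback shape is an assumption modeled on typical streaming listeners, not a verbatim copy of the MediaPipe ProgressListener interface.

```java
// Sketch of a streaming result listener. The (partialResult, done)
// callback shape is an ASSUMPTION for illustration, not the exact
// MediaPipe ProgressListener contract.
import java.util.List;

public class ListenerDemo {
    interface ProgressListener<T> {
        void run(T partialResult, boolean done);
    }

    // Simulates an async generator pushing partial results to the listener,
    // flagging the final chunk with done = true.
    static void generate(List<String> chunks, ProgressListener<String> listener) {
        for (int i = 0; i < chunks.size(); i++) {
            listener.run(chunks.get(i), i == chunks.size() - 1);
        }
    }

    public static void main(String[] args) {
        StringBuilder out = new StringBuilder();
        generate(List.of("Hello", ", ", "world"),
                 (partial, done) -> out.append(partial));
        System.out.println(out); // prints Hello, world
    }
}
```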
public abstract LlmInference.LlmInferenceOptions.Builder setTemperature (float temperature)
Configures randomness when decoding the next token. A value of 0.0f means greedy decoding. The default value is 0.8f.
Parameters
| Parameter |
|---|
| temperature |
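What temperature does during decoding can be sketched in plain Java: logits are divided by the temperature before the softmax, so lower values sharpen the distribution, and 0.0f degenerates to greedy argmax. This is a generic illustration of the technique, not MediaPipe's internal decoder.

```java
// Illustrative-only sketch of temperature scaling: logits are divided
// by the temperature before softmax; 0.0 degenerates to greedy argmax.
public class TemperatureDemo {
    static double[] softmaxWithTemperature(double[] logits, double temperature) {
        double[] p = new double[logits.length];
        if (temperature == 0.0) { // greedy decoding: all mass on the argmax
            int best = 0;
            for (int i = 1; i < logits.length; i++) {
                if (logits[i] > logits[best]) best = i;
            }
            p[best] = 1.0;
            return p;
        }
        // Subtract the max scaled logit for numerical stability.
        double max = Double.NEGATIVE_INFINITY;
        for (double l : logits) max = Math.max(max, l / temperature);
        double sum = 0.0;
        for (int i = 0; i < logits.length; i++) {
            p[i] = Math.exp(logits[i] / temperature - max);
            sum += p[i];
        }
        for (int i = 0; i < logits.length; i++) p[i] /= sum;
        return p;
    }

    public static void main(String[] args) {
        double[] logits = {2.0, 1.0, 0.5};
        // A lower temperature concentrates probability on the top logit.
        System.out.println(java.util.Arrays.toString(softmaxWithTemperature(logits, 0.8)));
    }
}
```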
public abstract LlmInference.LlmInferenceOptions.Builder setTopK (int topK)
Configures the top K number of tokens to be sampled from for each decoding step. A value of 1 means greedy decoding. The default value is 40.
Parameters
| Parameter |
|---|
| topK |
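Top-K filtering restricts sampling at each decoding step to the K highest-probability tokens, which is why topK = 1 is equivalent to greedy decoding. A minimal sketch of the selection step, written as a generic illustration rather than MediaPipe's decoder:

```java
// Illustrative-only sketch of top-K filtering: sampling is restricted
// to the K highest-probability tokens each step; K = 1 is greedy.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class TopKDemo {
    // Returns the indices of the K largest probabilities, best first.
    static List<Integer> topKIndices(double[] probs, int k) {
        List<Integer> idx = new ArrayList<>();
        for (int i = 0; i < probs.length; i++) idx.add(i);
        idx.sort(Comparator.comparingDouble((Integer i) -> probs[i]).reversed());
        return idx.subList(0, Math.min(k, idx.size()));
    }

    public static void main(String[] args) {
        double[] probs = {0.1, 0.5, 0.15, 0.25};
        // With topK = 2, only the two most likely tokens stay eligible.
        System.out.println(topKIndices(probs, 2)); // prints [1, 3]
    }
}
```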