The `generate()` method gives you complete control over text generation, returning detailed performance metrics and accepting customizable options.
## Basic Usage
### Method Signature
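The signature below is a sketch reconstructed from the parameter and return tables in this section; the `async throws` modifiers and the default value for `options` are assumptions, not confirmed API details:

```swift
// Sketch of the signature implied by the tables below.
// async/throws and the nil default are assumptions.
func generate(
    prompt: String,
    options: LLMGenerationOptions? = nil
) async throws -> LLMGenerationResult
```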
### Parameters
| Parameter | Type | Description |
|---|---|---|
| `prompt` | `String` | The text prompt |
| `options` | `LLMGenerationOptions?` | Generation configuration (optional) |
### Returns

An `LLMGenerationResult` containing the response and metrics.
## LLMGenerationResult
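The result type is not fully spelled out in this section. A plausible shape, inferred from the features described below (response text, `thinkingContent` for reasoning models, performance metrics), might look like this; every field name except `thinkingContent` is an assumption:

```swift
// Illustrative only; field names other than `thinkingContent` are assumptions.
struct LLMGenerationResult {
    let text: String              // the generated response
    let thinkingContent: String?  // extracted reasoning, if the model emits it
    let tokensGenerated: Int      // assumed metric fields
    let tokensPerSecond: Double
    let generationTime: Double    // seconds
}
```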
## LLMGenerationOptions

### Generation Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `maxTokens` | `Int` | `100` | Maximum number of tokens to generate |
| `temperature` | `Float` | `0.8` | Controls randomness (0.0 = deterministic, 2.0 = very random) |
| `topP` | `Float` | `1.0` | Nucleus sampling threshold |
| `stopSequences` | `[String]` | `[]` | Stop generation when any of these strings is produced |
| `streamingEnabled` | `Bool` | `false` | Enable token-by-token streaming |
| `preferredFramework` | `InferenceFramework?` | `nil` | Preferred inference backend |
| `systemPrompt` | `String?` | `nil` | System prompt that steers model behavior |
## Examples
### Basic Generation
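A minimal call with all options left at their defaults. Here `model` stands in for whatever object exposes `generate(prompt:options:)` in this SDK, and the `text` property on the result is an assumption:

```swift
// Minimal sketch: default options, assumed `model` instance and `text` field.
let result = try await model.generate(prompt: "Explain quantum computing in one paragraph.")
print(result.text)
```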
### With Custom Options
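A sketch that overrides several generation parameters from the table above; the memberwise initializer shown here is an assumption about how `LLMGenerationOptions` is constructed:

```swift
// Assumes LLMGenerationOptions exposes a memberwise-style initializer.
let options = LLMGenerationOptions(
    maxTokens: 256,
    temperature: 0.3,        // focused, coherent responses (see Temperature Guide)
    topP: 0.9,
    stopSequences: ["\n\n"]  // stop at the first blank line
)
let result = try await model.generate(prompt: "Summarize the benefits of unit testing.",
                                      options: options)
```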
### With System Prompt
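The `systemPrompt` option sets the model's behavior independently of the user prompt. A sketch, with the same initializer assumption as above:

```swift
// systemPrompt steers behavior; the user prompt carries the actual request.
let options = LLMGenerationOptions(
    systemPrompt: "You are a concise technical writer. Answer in plain language."
)
let result = try await model.generate(prompt: "What is nucleus sampling?",
                                      options: options)
```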
### For Reasoning Models
Some models output their reasoning process alongside the final answer. Extract it with `thinkingContent`:
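A sketch of reading the optional `thinkingContent` property; the `text` field on the result is an assumption:

```swift
let result = try await model.generate(prompt: "Is 97 prime? Think step by step.")

// thinkingContent is nil for models that don't emit reasoning.
if let thinking = result.thinkingContent {
    print("Reasoning: \(thinking)")
}
print("Answer: \(result.text)")  // `text` field name is an assumption
```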
## Performance Monitoring
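Each result carries generation metrics. The metric property names below (`tokensGenerated`, `tokensPerSecond`, `generationTime`) are assumptions about the result type, not confirmed API:

```swift
// Metric field names are assumptions; the SDK's actual names may differ.
let result = try await model.generate(prompt: "Hello!")
print("Tokens generated: \(result.tokensGenerated)")
print("Throughput: \(result.tokensPerSecond) tok/s")
print("Latency: \(result.generationTime) s")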
## Structured Output
Generate type-safe structured output using the `Generatable` protocol:
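A sketch of the idea: conform a type to `Generatable`, then ask for that type back. The conformance requirements and the typed `generate(prompt:as:)` overload shown here are assumptions about this SDK's API:

```swift
// Hypothetical usage: both the empty conformance and the
// generate(prompt:as:) overload are assumptions.
struct Recipe: Generatable {
    let name: String
    let ingredients: [String]
    let minutes: Int
}

let recipe: Recipe = try await model.generate(
    prompt: "A simple pancake recipe",
    as: Recipe.self
)
```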
## Error Handling
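Assuming `generate()` is a throwing async method (as sketched earlier), errors surface through standard Swift `do`/`catch`; the SDK's concrete error type is not documented here, so this catches generically:

```swift
// Generic catch; the SDK's specific error cases are not shown in this doc.
do {
    let result = try await model.generate(prompt: "Hello")
    print(result.text)  // `text` field name is an assumption
} catch {
    print("Generation failed: \(error)")
}
```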
## Temperature Guide
| Temperature | Use Case |
|---|---|
| 0.0 | Deterministic, factual answers |
| 0.3-0.5 | Focused, coherent responses |
| 0.7-0.8 | Balanced creativity (default) |
| 1.0-1.2 | Creative writing, brainstorming |
| 1.5+ | Very random, experimental |