### LLMGenerationOptions
| Parameter | Type | Default | Description |
|---|---|---|---|
| maxTokens | int | 100 | Maximum number of tokens to generate |
| temperature | double | 0.8 | Sampling randomness (0.0–2.0) |
| topP | double | 1.0 | Nucleus sampling cutoff |
| stopSequences | List<String> | [] | Stop generation when any of these sequences is produced |
| systemPrompt | String? | null | System prompt providing context |
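
The nullable and generic types above suggest a Dart API. A minimal sketch of what the options class implies, assuming a const constructor with named parameters (the constructor shape is an assumption; fields and defaults come from the table):

```dart
// Hypothetical sketch of LLMGenerationOptions; the constructor
// signature is assumed, defaults match the table above.
class LLMGenerationOptions {
  final int maxTokens;
  final double temperature;
  final double topP;
  final List<String> stopSequences;
  final String? systemPrompt;

  const LLMGenerationOptions({
    this.maxTokens = 100,
    this.temperature = 0.8,
    this.topP = 1.0,
    this.stopSequences = const [],
    this.systemPrompt,
  });
}

void main() {
  // Override only what you need; everything else keeps its default.
  const options = LLMGenerationOptions(
    maxTokens: 256,
    temperature: 0.7,
    stopSequences: ['\n\n'],
    systemPrompt: 'You are a concise assistant.',
  );
  print(options.topP); // still the default, 1.0
}
```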
### LLMGenerationResult
| Property | Type | Description |
|---|---|---|
| text | String | The generated text |
| thinkingContent | String? | Thinking/reasoning content, if the model supports it |
| inputTokens | int | Number of input (prompt) tokens |
| tokensUsed | int | Number of output (generated) tokens |
| modelUsed | String | ID of the model used |
| latencyMs | double | Total latency in milliseconds |
| tokensPerSecond | double | Output generation speed in tokens per second |
| timeToFirstTokenMs | double? | Time to first token in milliseconds, if available |
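
The throughput field is presumably derived from the other metrics: output tokens divided by wall-clock time in seconds. A hedged sketch of that relationship (the formula is an assumption, not stated by the table):

```dart
// Assumed relationship: tokensPerSecond = tokensUsed / (latencyMs / 1000).
double tokensPerSecond(int tokensUsed, double latencyMs) =>
    tokensUsed / (latencyMs / 1000.0);

void main() {
  // 80 output tokens generated in 2000 ms → 40 tokens per second.
  print(tokensPerSecond(80, 2000)); // 40.0 on the Dart VM
}
```

If timeToFirstTokenMs is non-null, latencyMs minus it gives the pure generation time, which may be the more accurate denominator for streaming responses.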