> **Early Beta** — The Web SDK is in early beta. APIs may change between releases.
## Overview

The `TextGeneration.generate()` method with default options is the simplest way to generate text: pass a prompt, get a result, all in one line.
## Basic Usage

```typescript
import { TextGeneration } from '@runanywhere/web'

const result = await TextGeneration.generate('What is the capital of France?')
console.log(result.text) // "Paris is the capital of France."
```
## API Reference

```typescript
TextGeneration.generate(
  prompt: string,
  options?: LLMGenerationOptions
): Promise<LLMGenerationResult>
```
### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `prompt` | `string` | The user's message or question |
| `options` | `LLMGenerationOptions` | Optional generation settings |
### Returns

| Type | Description |
| --- | --- |
| `Promise<LLMGenerationResult>` | Result with the generated text and performance metrics |
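The result shape can be sketched from the fields used in the examples on this page (`text`, `tokensPerSecond`, `latencyMs`); this interface is an inference, not the SDK's actual type, and the real `LLMGenerationResult` may expose additional fields:

```typescript
// Sketch of the result shape, inferred from this page's examples only.
// The actual LLMGenerationResult exported by '@runanywhere/web' may differ.
interface GenerationResultSketch {
  text: string            // the generated text
  tokensPerSecond: number // decode throughput
  latencyMs: number       // end-to-end latency in milliseconds
}

// Illustrative values, not real output:
const example: GenerationResultSketch = {
  text: 'Paris is the capital of France.',
  tokensPerSecond: 42.5,
  latencyMs: 380,
}
console.log(`${example.tokensPerSecond.toFixed(1)} tok/s`) // "42.5 tok/s"
```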
### Throws

| Error Code | Description |
| --- | --- |
| `NotInitialized` | SDK not initialized |
| `ModelNotLoaded` | No LLM model loaded |
| `GenerationFailed` | Generation failed |
## Examples

### Simple Q&A

```typescript
const capital = await TextGeneration.generate('What is the capital of Japan?')
// capital.text: "Tokyo is the capital of Japan."

const math = await TextGeneration.generate('Calculate 15% of 200')
// math.text: "15% of 200 is 30."
```
### With Options

```typescript
const result = await TextGeneration.generate('Write a haiku about coding', {
  maxTokens: 50,
  temperature: 0.8,
})

console.log(result.text)
console.log(`${result.tokensPerSecond.toFixed(1)} tok/s | ${result.latencyMs} ms`)
```
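If you log metrics in more than one place, the formatting above can be pulled into a small helper. `formatMetrics` is a hypothetical utility, not part of the SDK; it only relies on the `tokensPerSecond` and `latencyMs` fields shown in the example:

```typescript
// Hypothetical helper (not part of '@runanywhere/web'):
// formats the performance metrics the same way as the example above.
function formatMetrics(r: { tokensPerSecond: number; latencyMs: number }): string {
  return `${r.tokensPerSecond.toFixed(1)} tok/s | ${r.latencyMs} ms`
}

console.log(formatMetrics({ tokensPerSecond: 38.27, latencyMs: 412 })) // "38.3 tok/s | 412 ms"
```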
## Error Handling

```typescript
import { TextGeneration, SDKError, SDKErrorCode } from '@runanywhere/web'

try {
  const result = await TextGeneration.generate('Hello')
  console.log(result.text)
} catch (err) {
  if (err instanceof SDKError) {
    switch (err.code) {
      case SDKErrorCode.NotInitialized:
        console.error('Initialize the SDK first')
        break
      case SDKErrorCode.ModelNotLoaded:
        console.error('Load a model first')
        break
      default:
        console.error(`SDK error [${err.code}]: ${err.message}`)
    }
  }
}
```
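A transient `GenerationFailed` error can often be handled by simply retrying the call. A small sketch of such a wrapper, assuming `withRetries` is a hypothetical helper you write yourself (it is not part of the SDK):

```typescript
// Hypothetical retry helper, not part of '@runanywhere/web'.
// Retries an async operation a fixed number of times with linear backoff.
async function withRetries<T>(
  op: () => Promise<T>,
  attempts = 3,
  backoffMs = 250,
): Promise<T> {
  let lastErr: unknown
  for (let i = 0; i < attempts; i++) {
    try {
      return await op()
    } catch (err) {
      lastErr = err
      if (i < attempts - 1) {
        // linear backoff before the next attempt
        await new Promise((resolve) => setTimeout(resolve, backoffMs * (i + 1)))
      }
    }
  }
  throw lastErr
}

// Usage (assumes the SDK is initialized and a model is loaded):
// const result = await withRetries(() => TextGeneration.generate('Hello'))
```

In practice you would inspect `err.code` inside the catch and only retry `GenerationFailed`; `NotInitialized` and `ModelNotLoaded` will fail the same way on every attempt.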
## Simple vs Full Generation

| Feature | Simple (defaults) | Full (`generate()` with options) |
| --- | --- | --- |
| Return type | `LLMGenerationResult` | `LLMGenerationResult` |
| Metrics | Yes | Yes |
| Options | Defaults | Customizable |
| System prompt | None | Yes |
| Use case | Quick prototyping | Production apps |
For quick prototyping, use `generate()` with just a prompt. Add options when you need a custom temperature, a system prompt, or a token limit.
- **Generate**: full generation with options and metrics
- **Streaming**: real-time token streaming