## Overview
The chat() method provides the simplest way to interact with an LLM. It takes a prompt and returns just the response text, making it perfect for quick integrations.
## Basic Usage

```typescript
import { RunAnywhere } from '@runanywhere/core'

// Simple question
const response = await RunAnywhere.chat('What is the capital of France?')
console.log(response) // "Paris is the capital of France."

// Follow-up (stateless - each call is independent)
const answer = await RunAnywhere.chat('What is 2 + 2?')
console.log(answer) // "4"
```
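Because each `chat()` call is independent, any conversational context has to be folded into the prompt by the caller. A minimal sketch of one way to do that (the `Turn` type and `buildPrompt` helper are illustrative, not part of the SDK):

```typescript
// One prior question/answer exchange (illustrative shape, not an SDK type).
interface Turn {
  question: string
  answer: string
}

// Fold prior turns into a single prompt string for a stateless chat() call.
function buildPrompt(history: Turn[], question: string): string {
  const context = history
    .map((t) => `Q: ${t.question}\nA: ${t.answer}`)
    .join('\n')
  return context ? `${context}\nQ: ${question}\nA:` : question
}
```

You would then pass `buildPrompt(history, nextQuestion)` to `RunAnywhere.chat()` and append the returned answer to `history` yourself.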
## API Reference

```typescript
await RunAnywhere.chat(prompt: string): Promise<string>
```

### Parameters

| Parameter | Type | Description |
|---|---|---|
| `prompt` | `string` | The user's message or question |

### Returns

| Type | Description |
|---|---|
| `Promise<string>` | The generated response text |

### Throws

| Error Code | Description |
|---|---|
| `notInitialized` | SDK not initialized |
| `modelNotLoaded` | No LLM model loaded |
| `generationFailed` | Generation failed |
## Examples

### Simple Q&A

```typescript
const capital = await RunAnywhere.chat('What is the capital of Japan?')
// "Tokyo is the capital of Japan."

const math = await RunAnywhere.chat('Calculate 15% of 200')
// "15% of 200 is 30."

const translation = await RunAnywhere.chat('Translate "Hello" to Spanish')
// "Hola"
```
### React Component

```tsx
import React, { useState } from 'react'
import { View, TextInput, Button, Text } from 'react-native'
import { RunAnywhere } from '@runanywhere/core'

export function ChatComponent() {
  const [input, setInput] = useState('')
  const [response, setResponse] = useState('')
  const [loading, setLoading] = useState(false)

  const handleSend = async () => {
    if (!input.trim()) return
    setLoading(true)
    try {
      const answer = await RunAnywhere.chat(input)
      setResponse(answer)
    } catch (error) {
      setResponse('Error: ' + (error as Error).message)
    } finally {
      setLoading(false)
    }
  }

  return (
    <View style={{ padding: 16 }}>
      <TextInput
        value={input}
        onChangeText={setInput}
        placeholder="Ask anything..."
        style={{ borderWidth: 1, padding: 8, marginBottom: 8 }}
      />
      <Button title={loading ? 'Thinking...' : 'Send'} onPress={handleSend} disabled={loading} />
      {response ? <Text style={{ marginTop: 16 }}>{response}</Text> : null}
    </View>
  )
}
```
### With Error Handling

```typescript
import { RunAnywhere, isSDKError, SDKErrorCode } from '@runanywhere/core'

async function askQuestion(question: string): Promise<string> {
  try {
    return await RunAnywhere.chat(question)
  } catch (error) {
    if (isSDKError(error)) {
      switch (error.code) {
        case SDKErrorCode.notInitialized:
          throw new Error('Please initialize the SDK first')
        case SDKErrorCode.modelNotLoaded:
          throw new Error('Please load a model first')
        default:
          throw new Error('AI error: ' + error.message)
      }
    }
    throw error
  }
}

// Usage
const response = await askQuestion('What is AI?')
```
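Transient failures (such as `generationFailed`) are often worth one retry. A minimal sketch of a retry wrapper; the chat function is injected as a parameter so the helper is not tied to the SDK, and in an app you would pass `RunAnywhere.chat` or a wrapper like `askQuestion` above:

```typescript
// Retry a chat-like function up to `retries` extra times before giving up.
async function chatWithRetry(
  chatFn: (prompt: string) => Promise<string>,
  prompt: string,
  retries = 2,
): Promise<string> {
  let lastError: unknown
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await chatFn(prompt)
    } catch (error) {
      lastError = error // remember the failure and try again
    }
  }
  throw lastError
}
```

Note that a blanket retry is only appropriate for transient errors; `notInitialized` and `modelNotLoaded` will fail the same way every time, so check the error code first in production code.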
## Chat vs Generate

| Feature | `chat()` | `generate()` |
|---|---|---|
| Return type | `string` | `GenerationResult` |
| Metrics | No | Yes (tokens, latency, tok/s) |
| Options | No | Yes (maxTokens, temperature, etc.) |
| System prompt | No | Yes |
| Use case | Quick prototyping | Production apps |

Use `chat()` for quick prototyping and simple interactions. Switch to `generate()` when you need performance metrics, custom options, or system prompts.
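Conceptually, `chat()` behaves like `generate()` with default options, keeping only the response text. The sketch below illustrates that relationship; the `GenerationResult` fields and option names here are assumptions inferred from the comparison table, and the real SDK types may differ:

```typescript
// Assumed shapes, inferred from the comparison table (not verified SDK types).
interface GenerationResult {
  text: string
  tokensPerSecond: number
}

type GenerateFn = (
  prompt: string,
  options?: { maxTokens?: number; temperature?: number },
) => Promise<GenerationResult>

// chat() viewed as generate() with defaults, discarding the metrics.
async function chatVia(generate: GenerateFn, prompt: string): Promise<string> {
  const result = await generate(prompt)
  return result.text
}
```

In other words, moving from `chat()` to `generate()` costs nothing conceptually; you gain access to metrics and options without changing how you phrase prompts.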
- **Generate**: Full generation with options and metrics
- **Streaming**: Real-time token streaming