
Overview

The chat() method provides the simplest way to interact with an LLM. It takes a prompt and returns just the response text, making it perfect for quick integrations.

Basic Usage

import { RunAnywhere } from '@runanywhere/core'

// Simple question
const response = await RunAnywhere.chat('What is the capital of France?')
console.log(response) // "Paris is the capital of France."

// Follow-up (stateless - each call is independent)
const answer = await RunAnywhere.chat('What is 2 + 2?')
console.log(answer) // "4"
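Because each chat() call is stateless, multi-turn conversations require the caller to carry context between calls. One minimal sketch, assuming a plain Q/A transcript format (the `withHistory` helper below is illustrative and not part of the SDK):

```typescript
// Illustrative helper (not part of the SDK): folds earlier Q&A turns into a
// single prompt so the stateless chat() call can see the conversation so far.
type Turn = { question: string; answer: string }

function withHistory(history: Turn[], prompt: string): string {
  const context = history
    .map((turn) => `Q: ${turn.question}\nA: ${turn.answer}`)
    .join('\n')
  return context ? `${context}\nQ: ${prompt}\nA:` : prompt
}
```

A follow-up call would then look like `await RunAnywhere.chat(withHistory(history, 'And its population?'))`. Note that very long histories may exceed the model's context window.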

API Reference

await RunAnywhere.chat(prompt: string): Promise<string>

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| prompt | `string` | The user's message or question |

Returns

| Type | Description |
| --- | --- |
| `Promise<string>` | The generated response text |

Throws

| Error Code | Description |
| --- | --- |
| `notInitialized` | SDK not initialized |
| `modelNotLoaded` | No LLM model loaded |
| `generationFailed` | Generation failed |

Examples

Simple Q&A

const capital = await RunAnywhere.chat('What is the capital of Japan?')
// "Tokyo is the capital of Japan."

const math = await RunAnywhere.chat('Calculate 15% of 200')
// "15% of 200 is 30."

const translation = await RunAnywhere.chat('Translate "Hello" to Spanish')
// "Hola"

React Component

ChatComponent.tsx
import React, { useState } from 'react'
import { View, TextInput, Button, Text } from 'react-native'
import { RunAnywhere } from '@runanywhere/core'

export function ChatComponent() {
  const [input, setInput] = useState('')
  const [response, setResponse] = useState('')
  const [loading, setLoading] = useState(false)

  const handleSend = async () => {
    if (!input.trim()) return

    setLoading(true)
    try {
      const answer = await RunAnywhere.chat(input)
      setResponse(answer)
    } catch (error) {
      setResponse('Error: ' + (error as Error).message)
    } finally {
      setLoading(false)
    }
  }

  return (
    <View style={{ padding: 16 }}>
      <TextInput
        value={input}
        onChangeText={setInput}
        placeholder="Ask anything..."
        style={{ borderWidth: 1, padding: 8, marginBottom: 8 }}
      />
      <Button title={loading ? 'Thinking...' : 'Send'} onPress={handleSend} disabled={loading} />
      {response ? <Text style={{ marginTop: 16 }}>{response}</Text> : null}
    </View>
  )
}

With Error Handling

import { RunAnywhere, isSDKError, SDKErrorCode } from '@runanywhere/core'

async function askQuestion(question: string): Promise<string> {
  try {
    return await RunAnywhere.chat(question)
  } catch (error) {
    if (isSDKError(error)) {
      switch (error.code) {
        case SDKErrorCode.notInitialized:
          throw new Error('Please initialize the SDK first')
        case SDKErrorCode.modelNotLoaded:
          throw new Error('Please load a model first')
        default:
          throw new Error('AI error: ' + error.message)
      }
    }
    throw error
  }
}

// Usage
const response = await askQuestion('What is AI?')
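A `generationFailed` error is often transient, so a caller-side retry can smooth it over. A minimal sketch (the `withRetry` helper is illustrative, not part of the SDK):

```typescript
// Illustrative retry wrapper (not part of the SDK): retries the given async
// call up to `attempts` times, rethrowing the last error if every try fails.
async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn()
    } catch (error) {
      lastError = error
    }
  }
  throw lastError
}

// Example: retry a chat call
// const answer = await withRetry(() => RunAnywhere.chat('What is AI?'))
```

In a real app you would likely retry only on `generationFailed`, since `notInitialized` and `modelNotLoaded` will not resolve on their own.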

Chat vs Generate

| Feature | chat() | generate() |
| --- | --- | --- |
| Return type | `string` | `GenerationResult` |
| Metrics | No | Yes (tokens, latency, tok/s) |
| Options | No | Yes (maxTokens, temperature, etc.) |
| System prompt | No | Yes |
| Use case | Quick prototyping | Production apps |

Use chat() for quick prototyping and simple interactions. Switch to generate() when you need performance metrics, custom options, or system prompts.
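When migrating, you can keep call sites on the same string-in/string-out shape by wrapping generate(). The option and result field names below (`maxTokens`, `temperature`, `systemPrompt`, `text`) are assumptions drawn from the comparison table, not verified typings; check them against your SDK version:

```typescript
// Sketch of a chat()-shaped wrapper around generate(). The GenerateFn shape
// is assumed for illustration, not taken from the SDK's actual typings.
interface GenerationResult {
  text: string
  tokensPerSecond: number
}

type GenerateFn = (
  prompt: string,
  options: { maxTokens?: number; temperature?: number; systemPrompt?: string }
) => Promise<GenerationResult>

// Returns a function with the same string-in/string-out shape as chat(),
// so existing call sites can switch without type changes.
function chatLike(generate: GenerateFn, systemPrompt?: string) {
  return async (prompt: string): Promise<string> => {
    const result = await generate(prompt, { maxTokens: 256, systemPrompt })
    return result.text
  }
}

// Example (assumed API): const ask = chatLike(RunAnywhere.generate)
```

This keeps the migration incremental: swap the implementation behind one function, then start reading metrics from `GenerationResult` where you need them.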