chat() is the simplest way to generate text: a single function call that returns the response directly.
// One-liner for quick responses
val response = RunAnywhere.chat("What is the capital of France?")
println(response)  // "The capital of France is Paris."

When to Use

Use chat() when you need:
  • Quick, simple responses
  • No metrics or metadata
  • Minimal code
For more control, use generate() or generateStream().
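
For instance, if you want to show the response token by token as it is produced, generateStream() is the better fit. The sketch below assumes generateStream() returns a Kotlin Flow of partial text chunks; treat the return type and collection pattern as illustrative rather than the SDK's confirmed API.

// Sketch only: assumes generateStream() returns a Flow<String> of text chunks.
lifecycleScope.launch {
    val builder = StringBuilder()
    RunAnywhere.generateStream("Explain machine learning in one sentence")
        .collect { chunk ->
            builder.append(chunk)
            textView.text = builder.toString()  // update the UI as tokens arrive
        }
}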

Example: Simple Q&A

lifecycleScope.launch {
    try {
        val answer = RunAnywhere.chat("Explain machine learning in one sentence")
        textView.text = answer
    } catch (e: SDKError) {
        showError(e.message)
    }
}

Example: Chat Bot

suspend fun sendMessage(userMessage: String): String {
    // Ensure model is loaded
    if (!RunAnywhere.isLLMModelLoaded()) {
        RunAnywhere.loadLLMModel(modelId)
    }

    return RunAnywhere.chat(userMessage)
}
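
Since sendMessage() is a suspend function, call it from a coroutine scope. The caller below is purely illustrative (a standard Android ViewModel with viewModelScope); it is not part of the SDK.

// Illustrative caller: invoke the suspend function from a coroutine scope.
class ChatViewModel : ViewModel() {
    fun onUserMessage(text: String) {
        viewModelScope.launch {
            try {
                val reply = sendMessage(text)
                // Publish the reply to the UI, e.g. via StateFlow or LiveData
            } catch (e: SDKError) {
                // Surface the error to the user
            }
        }
    }
}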
chat() uses default generation options. For custom temperature, max tokens, or system prompts, use generate() instead.
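
As a rough sketch of what that might look like, assuming generate() accepts an options object with fields for a system prompt, temperature, and max tokens (the parameter and type names here are illustrative, not the SDK's confirmed signature):

// Sketch only: option and field names below are assumptions, not confirmed API.
lifecycleScope.launch {
    val result = RunAnywhere.generate(
        prompt = "Explain machine learning in one sentence",
        options = GenerationOptions(
            systemPrompt = "You are a concise tutor.",
            temperature = 0.3f,
            maxTokens = 64
        )
    )
    textView.text = result.text  // generate() is assumed to return text plus metadata
}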