chat()

chat() is the simplest way to generate text: it returns just the response string.
// One-liner for quick responses
final response = await RunAnywhere.chat('What is the capital of France?');
print(response);  // "The capital of France is Paris."

When to Use

Use chat() when you:
  • Need a quick response and don't need generation metrics
  • Don’t need streaming
  • Want the simplest possible API

Requirements

Before calling chat(), ensure the following (a combined setup sketch follows this list):
  1. SDK is initialized: await RunAnywhere.initialize()
  2. Backend is registered: await LlamaCpp.register()
  3. Model is loaded: await RunAnywhere.loadModel('model-id')
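
Putting the three steps together, a minimal setup might look like this sketch (imports omitted, as in the snippets above; setUpAndChat is just an illustrative wrapper, and 'model-id' stands in for a real model ID):

Future<void> setUpAndChat() async {
  // 1. Initialize the SDK.
  await RunAnywhere.initialize();

  // 2. Register the llama.cpp backend.
  await LlamaCpp.register();

  // 3. Load a model ('model-id' is a placeholder).
  await RunAnywhere.loadModel('model-id');

  // chat() is now safe to call.
  print(await RunAnywhere.chat('Hello!'));
}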

Example with Error Handling

Future<String> askQuestion(String question) async {
  try {
    // Guard against calling chat() before a model is loaded.
    if (!RunAnywhere.isModelLoaded) {
      throw SDKError.componentNotReady('No model loaded');
    }
    return await RunAnywhere.chat(question);
  } on SDKError catch (e) {
    // Surface SDK failures as a readable string instead of crashing.
    return 'Error: ${e.message}';
  }
}
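
A call site for askQuestion could then look like this (a minimal sketch; main is just an illustrative entry point):

void main() async {
  final answer = await askQuestion('What is the capital of France?');
  print(answer);  // The model's reply, or 'Error: ...' if something went wrong
}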

See Also