Tool Calling lets on-device LLMs invoke functions you define — turning a text model into an agent that can fetch data, perform calculations, or interact with system APIs. The SDK handles prompt formatting, tool call parsing, execution, and multi-turn orchestration.

Overview

The tool calling flow:
  1. Register tools with definitions and handler closures
  2. Generate with tools — the LLM decides when and which tools to call
  3. Auto-execution — the SDK runs your handlers and feeds results back to the LLM
  4. Final response — the LLM produces a natural language answer incorporating tool results

Basic Usage

import RunAnywhere

// 1. Clear any previously registered tools
await RunAnywhere.clearTools()

// 2. Define and register a tool
let weatherTool = ToolDefinition(
    name: "get_weather",
    description: "Get current weather for a city",
    parameters: [
        ToolParameter(name: "city", type: .string, description: "City name", required: true)
    ]
)

await RunAnywhere.registerTool(weatherTool) { args -> [String: ToolValue] in
    let city = args["city"]?.stringValue ?? "Unknown"
    return [
        "temperature": .number(72.0),
        "condition": .string("sunny"),
        "city": .string(city)
    ]
}

// 3. Generate with tools
let options = ToolCallingOptions(
    maxToolCalls: 3,
    autoExecute: true,
    temperature: 0.7,
    maxTokens: 512
)

let result = try await RunAnywhere.generateWithTools(
    prompt: "What's the weather in San Francisco?",
    options: options
)

print(result.text)        // "The weather in San Francisco is 72°F and sunny."
print(result.toolCalls)   // [ToolCall(toolName: "get_weather", ...)]
print(result.toolResults) // [ToolResult(success: true, ...)]

Setup

Register Tools

Each tool needs a ToolDefinition and a handler closure that receives arguments and returns a result dictionary:
await RunAnywhere.clearTools()

let tool = ToolDefinition(
    name: "tool_name",
    description: "What this tool does",
    parameters: [
        ToolParameter(name: "param1", type: .string, description: "First parameter", required: true),
        ToolParameter(name: "param2", type: .number, description: "Optional parameter", required: false)
    ]
)

await RunAnywhere.registerTool(tool) { args -> [String: ToolValue] in
    let value = args["param1"]?.stringValue ?? ""
    return ["result": .string("Processed: \(value)")]
}
Call clearTools() before registering tools to ensure a clean state — especially important if your app re-registers tools across different screens or sessions.

Ensure LLM is Loaded

Tool calling requires a loaded LLM model. The LLM processes the prompt and decides which tools to invoke:
try await RunAnywhere.loadModel(modelId: "llama-3.2-1b-instruct-q4")

API Reference

Tool Management

| Method | Description |
| --- | --- |
| `RunAnywhere.registerTool(_ definition: ToolDefinition, handler: @escaping ([String: ToolValue]) async -> [String: ToolValue]) async` | Register a tool with its definition and execution handler |
| `RunAnywhere.clearTools() async` | Remove all registered tools |

ToolDefinition

public struct ToolDefinition {
    public let name: String
    public let description: String
    public let parameters: [ToolParameter]

    public init(
        name: String,
        description: String,
        parameters: [ToolParameter]
    )
}

ToolParameter

public struct ToolParameter {
    public let name: String
    public let type: ToolParameterType  // .string, .number, .boolean
    public let description: String
    public let required: Bool

    public init(
        name: String,
        type: ToolParameterType,
        description: String,
        required: Bool = true
    )
}

ToolValue

Tagged union for tool arguments and return values:
| Case | Description | Accessor |
| --- | --- | --- |
| `.string(String)` | String value | `.stringValue` (`String?`) |
| `.number(Double)` | Numeric value | `.numberValue` (`Double?`) |
// In a tool handler
let city = args["city"]?.stringValue ?? "Unknown"

// Returning results
return [
    "name": .string("San Francisco"),
    "temp": .number(72.0)
]

ToolCallingOptions

public struct ToolCallingOptions {
    public let maxToolCalls: Int      // Max tool invocations per generation (default: 5)
    public let autoExecute: Bool      // Auto-execute tool calls (default: true)
    public let temperature: Double    // LLM temperature (default: 0.7)
    public let maxTokens: Int         // Max tokens to generate (default: 512)

    public init(
        maxToolCalls: Int = 5,
        autoExecute: Bool = true,
        temperature: Double = 0.7,
        maxTokens: Int = 512
    )
}

generateWithTools

let result = try await RunAnywhere.generateWithTools(
    prompt: String,
    options: ToolCallingOptions
)
Returns a ToolCallingResult:
| Property | Type | Description |
| --- | --- | --- |
| `text` | `String` | Final natural language response |
| `toolCalls` | `[ToolCall]` | All tool calls the LLM made |
| `toolResults` | `[ToolResult]` | Results from tool executions |

ToolCall

| Property | Type | Description |
| --- | --- | --- |
| `toolName` | `String` | Name of the called tool |
| `arguments` | `[String: ToolValue]` | Arguments passed by the LLM |

ToolResult

| Property | Type | Description |
| --- | --- | --- |
| `success` | `Bool` | Whether execution succeeded |
| `result` | `[String: ToolValue]?` | Result dictionary on success |
| `error` | `String?` | Error message on failure |

Examples

Weather Tool

let weatherTool = ToolDefinition(
    name: "get_weather",
    description: "Get current weather for a city including temperature and conditions",
    parameters: [
        ToolParameter(name: "city", type: .string, description: "City name", required: true)
    ]
)

await RunAnywhere.registerTool(weatherTool) { args -> [String: ToolValue] in
    let city = args["city"]?.stringValue ?? "Unknown"

    // In production, call a real weather API here
    let conditions: [String: (Double, String)] = [
        "san francisco": (62.0, "foggy"),
        "new york": (45.0, "cloudy"),
        "miami": (82.0, "sunny"),
    ]

    let key = city.lowercased()
    let (temp, condition) = conditions[key] ?? (70.0, "clear")

    return [
        "temperature": .number(temp),
        "condition": .string(condition),
        "city": .string(city)
    ]
}

Calculator Tool

let calcTool = ToolDefinition(
    name: "calculate",
    description: "Evaluate a mathematical expression and return the numeric result",
    parameters: [
        ToolParameter(name: "expression", type: .string, description: "Math expression (e.g. '2 + 3 * 4')", required: true)
    ]
)

await RunAnywhere.registerTool(calcTool) { args -> [String: ToolValue] in
    let expr = args["expression"]?.stringValue ?? "0"

    // NSExpression raises an uncatchable Objective-C exception on malformed
    // input, so reject anything that isn't plain arithmetic before evaluating.
    let allowed = CharacterSet(charactersIn: "0123456789.+-*/() ")
    guard !expr.isEmpty, expr.unicodeScalars.allSatisfy(allowed.contains) else {
        return ["error": .string("Invalid expression")]
    }

    let expression = NSExpression(format: expr)
    if let result = expression.expressionValue(with: nil, context: nil) as? NSNumber {
        return ["result": .number(result.doubleValue)]
    }
    return ["error": .string("Could not evaluate expression")]
}

Current Time Tool

let timeTool = ToolDefinition(
    name: "get_current_time",
    description: "Get the current date and time",
    parameters: []
)

await RunAnywhere.registerTool(timeTool) { _ -> [String: ToolValue] in
    let formatter = DateFormatter()
    formatter.dateFormat = "yyyy-MM-dd HH:mm:ss zzz"
    return [
        "datetime": .string(formatter.string(from: Date())),
        "timestamp": .number(Date().timeIntervalSince1970)
    ]
}

Multi-Tool Chat Flow

Register multiple tools and let the LLM chain them together:
await RunAnywhere.clearTools()

// Register weather, calculator, and time tools (as defined above)
await RunAnywhere.registerTool(weatherTool) { /* handler */ }
await RunAnywhere.registerTool(calcTool) { /* handler */ }
await RunAnywhere.registerTool(timeTool) { /* handler */ }

let options = ToolCallingOptions(
    maxToolCalls: 5,
    autoExecute: true,
    temperature: 0.7,
    maxTokens: 512
)

let result = try await RunAnywhere.generateWithTools(
    prompt: "What's the weather in Miami? Also, what's 15% tip on a $47.50 bill?",
    options: options
)

// The LLM will call get_weather AND calculate, then compose a final answer
print(result.text)
print("Tools called: \(result.toolCalls.map(\.toolName))")

Complete SwiftUI Example

import SwiftUI
import RunAnywhere

@MainActor
@Observable
class ToolCallingViewModel {
    var prompt = ""
    var response = ""
    var toolLog: [String] = []
    var isProcessing = false

    func setup() async {
        await RunAnywhere.clearTools()

        let weatherTool = ToolDefinition(
            name: "get_weather",
            description: "Get current weather for a city",
            parameters: [
                ToolParameter(name: "city", type: .string, description: "City name", required: true)
            ]
        )

        await RunAnywhere.registerTool(weatherTool) { [weak self] args -> [String: ToolValue] in
            let city = args["city"]?.stringValue ?? "Unknown"
            await MainActor.run {
                self?.toolLog.append("🔧 get_weather(city: \"\(city)\")")
            }
            return [
                "temperature": .number(72.0),
                "condition": .string("sunny"),
                "city": .string(city)
            ]
        }
    }

    func send() async {
        guard !prompt.isEmpty else { return }

        isProcessing = true
        response = ""
        toolLog = []

        do {
            let options = ToolCallingOptions(
                maxToolCalls: 3,
                autoExecute: true,
                temperature: 0.7,
                maxTokens: 512
            )

            let result = try await RunAnywhere.generateWithTools(
                prompt: prompt,
                options: options
            )

            response = result.text

            for toolResult in result.toolResults {
                let status = toolResult.success ? "✅" : "❌"
                toolLog.append("\(status) Result: \(toolResult.result ?? [:])")
            }
        } catch {
            response = "Error: \(error.localizedDescription)"
        }

        isProcessing = false
    }
}

struct ToolCallingView: View {
    @State private var viewModel = ToolCallingViewModel()

    var body: some View {
        NavigationStack {
            VStack(spacing: 16) {
                // Tool execution log
                if !viewModel.toolLog.isEmpty {
                    VStack(alignment: .leading, spacing: 4) {
                        Text("Tool Log")
                            .font(.caption.bold())
                            .foregroundStyle(.secondary)
                        ForEach(viewModel.toolLog, id: \.self) { entry in
                            Text(entry)
                                .font(.caption.monospaced())
                        }
                    }
                    .padding()
                    .frame(maxWidth: .infinity, alignment: .leading)
                    .background(.ultraThinMaterial)
                    .clipShape(RoundedRectangle(cornerRadius: 8))
                }

                // Response
                if !viewModel.response.isEmpty {
                    Text(viewModel.response)
                        .textSelection(.enabled)
                        .padding()
                        .frame(maxWidth: .infinity, alignment: .leading)
                        .background(Color.blue.opacity(0.1))
                        .clipShape(RoundedRectangle(cornerRadius: 8))
                }

                Spacer()

                // Input
                HStack {
                    TextField("Ask something...", text: $viewModel.prompt)
                        .textFieldStyle(.roundedBorder)

                    Button("Send") {
                        Task { await viewModel.send() }
                    }
                    .buttonStyle(.borderedProminent)
                    .disabled(viewModel.prompt.isEmpty || viewModel.isProcessing)
                }

                if viewModel.isProcessing {
                    ProgressView("Processing...")
                }
            }
            .padding()
            .navigationTitle("Tool Calling")
            .task { await viewModel.setup() }
        }
    }
}

Error Handling

do {
    let result = try await RunAnywhere.generateWithTools(
        prompt: prompt,
        options: options
    )

    // Check individual tool results for failures
    for toolResult in result.toolResults {
        if !toolResult.success {
            print("Tool failed: \(toolResult.error ?? "Unknown error")")
        }
    }
} catch let error as SDKError {
    switch error.code {
    case .notInitialized:
        print("Load an LLM model before using tool calling")
    case .processingFailed:
        print("Generation failed: \(error.message)")
    default:
        print("Tool calling error: \(error)")
    }
}
Tool handlers run in an async context. If your handler accesses @MainActor-isolated state, use await MainActor.run {} inside the closure.

Best Practices

The LLM decides which tools to call based on their description and parameter descriptions. Be specific — vague descriptions lead to incorrect tool selection or hallucinated arguments.
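For illustration, a hypothetical stock-price tool (not part of the SDK) shows the difference:

```swift
// Too vague — the LLM may pick this tool for unrelated queries or guess
// at the argument format:
//   description: "Gets data"
//   parameter description: "The input"

// Specific — tells the LLM exactly when to call it and how to format arguments:
let stockTool = ToolDefinition(
    name: "get_stock_price",
    description: "Get the latest trading price in USD for a single stock ticker symbol",
    parameters: [
        ToolParameter(name: "symbol", type: .string,
                      description: "Uppercase ticker symbol, e.g. 'AAPL'", required: true)
    ]
)
```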
Call clearTools() at the start of each session or screen to prevent stale tool registrations from interfering with new ones.
Set maxToolCalls to a reasonable bound (2–5) to prevent runaway loops where the LLM repeatedly calls tools without converging on a final answer.
Always validate and provide defaults for arguments in your handler. The LLM may omit optional parameters or pass unexpected values.
await RunAnywhere.registerTool(tool) { args -> [String: ToolValue] in
    guard let city = args["city"]?.stringValue, !city.isEmpty else {
        return ["error": .string("City parameter is required")]
    }
    // ...
}
Tool handlers block the generation loop. For slow operations (network calls, database queries), consider timeouts to prevent the UI from hanging.
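One way to enforce a timeout is to race the slow work against a deadline with a task group. This is a sketch using a hypothetical `withTimeout` helper and a hypothetical `fetchFromNetwork` function — neither is part of the SDK:

```swift
// Hypothetical helper: runs an async operation and a sleep concurrently;
// whichever finishes first wins, and the loser is cancelled.
func withTimeout<T: Sendable>(
    seconds: Double,
    _ operation: @escaping @Sendable () async throws -> T
) async throws -> T {
    try await withThrowingTaskGroup(of: T.self) { group in
        group.addTask { try await operation() }
        group.addTask {
            try await Task.sleep(nanoseconds: UInt64(seconds * 1_000_000_000))
            throw CancellationError()
        }
        let result = try await group.next()!  // first finished child
        group.cancelAll()                     // cancel the other
        return result
    }
}

// Usage inside a tool handler: return an error result instead of hanging.
await RunAnywhere.registerTool(tool) { args -> [String: ToolValue] in
    do {
        return try await withTimeout(seconds: 5) {
            try await fetchFromNetwork(args)  // your slow operation
        }
    } catch {
        return ["error": .string("Tool timed out")]
    }
}
```

Returning an error dictionary rather than throwing lets the LLM see that the tool failed and explain it in the final response.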
Tool calling works best with lower temperature values (0.3–0.7). Higher temperatures increase the chance of malformed tool call syntax.