Enable LLMs to call functions and interact with external tools
Tool Calling lets on-device LLMs invoke functions you define — turning a text model into an agent that can fetch data, perform calculations, or interact with system APIs. The SDK handles prompt formatting, tool call parsing, execution, and multi-turn orchestration.
Call `clearTools()` before registering tools to ensure a clean state — especially important if your app re-registers tools across different screens or sessions.
```swift
public struct ToolDefinition {
    public let name: String
    public let description: String
    public let parameters: [ToolParameter]

    public init(
        name: String,
        description: String,
        parameters: [ToolParameter]
    )
}
```
```swift
public struct ToolParameter {
    public let name: String
    public let type: ToolParameterType  // .string, .number, .boolean
    public let description: String
    public let required: Bool

    public init(
        name: String,
        type: ToolParameterType,
        description: String,
        required: Bool = true
    )
}
```
```swift
public struct ToolCallingOptions {
    public let maxToolCalls: Int   // Max tool invocations per generation (default: 5)
    public let autoExecute: Bool   // Auto-execute tool calls (default: true)
    public let temperature: Double // LLM temperature (default: 0.7)
    public let maxTokens: Int      // Max tokens to generate (default: 512)

    public init(
        maxToolCalls: Int = 5,
        autoExecute: Bool = true,
        temperature: Double = 0.7,
        maxTokens: Int = 512
    )
}
```
```swift
let weatherTool = ToolDefinition(
    name: "get_weather",
    description: "Get current weather for a city including temperature and conditions",
    parameters: [
        ToolParameter(name: "city", type: .string, description: "City name", required: true)
    ]
)

await RunAnywhere.registerTool(weatherTool) { args -> [String: ToolValue] in
    let city = args["city"]?.stringValue ?? "Unknown"

    // In production, call a real weather API here
    let conditions: [String: (Double, String)] = [
        "san francisco": (62.0, "foggy"),
        "new york": (45.0, "cloudy"),
        "miami": (82.0, "sunny"),
    ]
    let key = city.lowercased()
    let (temp, condition) = conditions[key] ?? (70.0, "clear")

    return [
        "temperature": .number(temp),
        "condition": .string(condition),
        "city": .string(city)
    ]
}
```
Register multiple tools and let the LLM chain them together:
```swift
await RunAnywhere.clearTools()

// Register weather, calculator, and time tools (as defined above)
await RunAnywhere.registerTool(weatherTool) { /* handler */ }
await RunAnywhere.registerTool(calcTool) { /* handler */ }
await RunAnywhere.registerTool(timeTool) { /* handler */ }

let options = ToolCallingOptions(
    maxToolCalls: 5,
    autoExecute: true,
    temperature: 0.7,
    maxTokens: 512
)

let result = try await RunAnywhere.generateWithTools(
    prompt: "What's the weather in Miami? Also, what's 15% tip on a $47.50 bill?",
    options: options
)

// The LLM will call get_weather AND calculate, then compose a final answer
print(result.text)
print("Tools called: \(result.toolCalls.map(\.toolName))")
```
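The calculator tool referenced above is not defined in this guide; a minimal sketch might look like the following, assuming a `numberValue` accessor on `ToolValue` analogous to the `stringValue` accessor shown in the weather handler:

```swift
// Hypothetical calculator tool — the name, parameters, and numberValue accessor are assumptions
let calcTool = ToolDefinition(
    name: "calculate",
    description: "Apply a percentage to an amount, e.g. compute a tip on a bill",
    parameters: [
        ToolParameter(name: "percent", type: .number, description: "Percentage to apply, e.g. 15"),
        ToolParameter(name: "amount", type: .number, description: "Base amount, e.g. 47.50")
    ]
)

await RunAnywhere.registerTool(calcTool) { args -> [String: ToolValue] in
    // Default missing arguments to 0 rather than crashing
    let percent = args["percent"]?.numberValue ?? 0
    let amount = args["amount"]?.numberValue ?? 0
    return ["result": .number(percent / 100.0 * amount)]
}
```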
```swift
do {
    let result = try await RunAnywhere.generateWithTools(
        prompt: prompt,
        options: options
    )

    // Check individual tool results for failures
    for toolResult in result.toolResults {
        if !toolResult.success {
            print("Tool failed: \(toolResult.error ?? "Unknown error")")
        }
    }
} catch let error as SDKError {
    switch error.code {
    case .notInitialized:
        print("Load an LLM model before using tool calling")
    case .processingFailed:
        print("Generation failed: \(error.message)")
    default:
        print("Tool calling error: \(error)")
    }
}
```
Tool handlers run in an async context. If your handler accesses `@MainActor`-isolated state, use `await MainActor.run { }` inside the closure.
The LLM decides which tools to call based on their description and parameter descriptions. Be specific — vague descriptions lead to incorrect tool selection or hallucinated arguments.
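As an illustration (the parameter below is hypothetical, not part of the SDK), compare a vague description with a specific one:

```swift
// Too vague — the model may guess the format, or call the tool at the wrong time:
let vague = ToolParameter(name: "t", type: .string, description: "time")

// Specific — names the expected format and gives an example value:
let specific = ToolParameter(
    name: "timezone",
    type: .string,
    description: "IANA timezone identifier, e.g. \"America/New_York\"",
    required: true
)
```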
Clear tools between sessions
Call `clearTools()` at the start of each session or screen to prevent stale tool registrations from interfering with new ones.
Limit maxToolCalls
Set `maxToolCalls` to a reasonable bound (2–5) to prevent runaway loops where the LLM repeatedly calls tools without converging on a final answer.
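For example, a conservative configuration for multi-tool prompts (the specific values here are suggestions, not SDK recommendations):

```swift
let options = ToolCallingOptions(
    maxToolCalls: 3,   // bound the loop for prompts that need at most two tools
    autoExecute: true,
    temperature: 0.4,  // lower temperature also reduces malformed tool-call syntax
    maxTokens: 512
)
```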
Validate tool arguments
Always validate and provide defaults for arguments in your handler. The LLM may omit optional parameters or pass unexpected values.
```swift
await RunAnywhere.registerTool(tool) { args -> [String: ToolValue] in
    guard let city = args["city"]?.stringValue, !city.isEmpty else {
        return ["error": .string("City parameter is required")]
    }
    // ...
}
```
Keep handlers fast
Tool handlers block the generation loop. For slow operations (network calls, database queries),
consider timeouts to prevent the UI from hanging.
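One way to bound a slow handler is to race it against a deadline with a task group. This helper is a sketch built on standard Swift concurrency, not an SDK API:

```swift
import Foundation

struct TimeoutError: Error {}

/// Runs `operation`, throwing TimeoutError if it does not finish within `seconds`.
func withTimeout<T: Sendable>(
    seconds: Double,
    operation: @escaping @Sendable () async throws -> T
) async throws -> T {
    try await withThrowingTaskGroup(of: T.self) { group in
        group.addTask { try await operation() }
        group.addTask {
            try await Task.sleep(nanoseconds: UInt64(seconds * 1_000_000_000))
            throw TimeoutError()
        }
        // Whichever child finishes first wins; cancel the loser
        let result = try await group.next()!
        group.cancelAll()
        return result
    }
}
```

Inside a handler you might then wrap the slow call — e.g. `try await withTimeout(seconds: 3) { try await fetchWeather(city) }` (where `fetchWeather` is your own function) — and return an error value if it throws.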
Use lower temperature
Tool calling works best with lower temperature values (0.3–0.7). Higher temperatures increase the chance of malformed tool call syntax.