
What is RunAnywhere?

RunAnywhere is a production-grade, on-device AI platform that enables developers to run AI models directly on mobile devices. Our SDKs provide a unified API for running AI inference locally — ensuring minimal latency, maximum privacy, and offline capability.
All AI inference runs 100% on-device. Once models are downloaded, no network connection is required for inference.
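
To make the unified-API and offline claims concrete, here is a minimal Kotlin sketch of the overall flow: download a model once, then run inference locally. The types and function names below (LocalModel, ModelStore, download, generate) are illustrative assumptions, not the SDK's documented API.

```kotlin
// Everything below is an illustrative sketch, not the SDK's documented API.
interface LocalModel {
    suspend fun generate(prompt: String): String
}

// Hypothetical facade: one unified entry point regardless of the backend engine.
object ModelStore {
    suspend fun download(modelId: String): LocalModel = TODO("provided by the SDK")
}

suspend fun greet(): String {
    // One-time download: the only step that needs a network connection.
    val model = ModelStore.download("your-model-id")

    // From here on, inference runs entirely on-device and works offline.
    return model.generate("Write a one-line welcome message.")
}
```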

Core AI Capabilities

Every RunAnywhere SDK provides access to these powerful AI features:

Voice Agent Pipeline

Build complete voice-powered experiences with our integrated Voice Agent, which orchestrates the full voice conversation flow and supports both streaming and batch processing modes.
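
To give a sense of how streaming and batch modes might surface in code, here is one possible shape for such a pipeline, sketched in Kotlin. The VoiceEvent and VoiceAgent types are invented for illustration and are not the SDK's actual interfaces.

```kotlin
import kotlinx.coroutines.flow.Flow

// Invented types sketching one possible shape for a voice pipeline's events.
sealed interface VoiceEvent {
    data class Transcript(val text: String) : VoiceEvent  // recognized user speech
    data class Response(val text: String) : VoiceEvent    // model reply, possibly chunked
    class Audio(val pcm: ByteArray) : VoiceEvent           // synthesized speech to play back
}

// Invented interface illustrating the two processing modes mentioned above.
interface VoiceAgent {
    // Streaming mode: events are emitted while the user is still speaking.
    fun startConversation(): Flow<VoiceEvent>

    // Batch mode: process a finished recording in a single call.
    suspend fun processRecording(pcm: ByteArray): List<VoiceEvent>
}
```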

Why RunAnywhere?

Privacy first: Audio and text data never leave the device unless you explicitly configure it. Only anonymous analytics are collected by default. Your users’ data stays on their device.
Low latency: On-device inference eliminates network round-trips. Get AI responses in milliseconds, not seconds. Perfect for real-time voice interactions and responsive UIs.
Offline capable: Once models are downloaded, your app works completely offline. No internet required for inference. Ideal for apps used in areas with poor connectivity.
Modular by design: Backend engines are optional modules; include only what you need. Keep your app binary size minimal by importing only the capabilities you use.
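
As an illustration of the opt-in module idea, a Gradle setup might look like the following sketch. The artifact coordinates are placeholders, not published package names.

```kotlin
// build.gradle.kts: artifact coordinates below are placeholders, not published packages.
dependencies {
    implementation("com.example.runanywhere:core:1.0.0")       // always required
    implementation("com.example.runanywhere:engine-llm:1.0.0") // only if you use text generation
    // Leave out engines you don't need (for example, speech) to keep the binary small.
}
```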

Supported Platforms

RunAnywhere provides native SDKs for all major mobile platforms:

Get Started

1. Choose Your SDK: Select the SDK that matches your tech stack from the options above.
2. Install the SDK: Follow the installation guide for your chosen platform.
3. Initialize & Build: Initialize the SDK and start building AI-powered features.
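
For the last step, a typical Android integration would initialize the SDK once at app startup, as in this Kotlin sketch. The RunAnywhereSdk object and its initialize method are hypothetical names used only to illustrate the flow; see the platform guide for the real entry point.

```kotlin
import android.app.Application
import android.content.Context

// Hypothetical SDK facade for illustration; the real entry point is platform-specific.
object RunAnywhereSdk {
    fun initialize(context: Context) { /* provided by the SDK */ }
}

class MyApp : Application() {
    override fun onCreate() {
        super.onCreate()
        // Initialize once at startup, then build AI features anywhere in the app.
        RunAnywhereSdk.initialize(applicationContext)
    }
}
```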