> **Early Beta** — The Web SDK is in early beta. APIs may change between releases.
## Overview
System prompts define the AI’s behavior, personality, and constraints. They’re prepended to every generation request and help ensure consistent, focused responses.
## Basic Usage
```ts
import { TextGeneration } from '@runanywhere/web'

const result = await TextGeneration.generate('What should I cook tonight?', {
  systemPrompt: 'You are a helpful chef. Suggest recipes that are quick and easy to make.',
  maxTokens: 200,
})
```
## Examples
### Coding Assistant
```ts
const result = await TextGeneration.generate('How do I sort an array in JavaScript?', {
  systemPrompt: `You are an expert JavaScript developer. Provide concise code examples
with brief explanations. Use modern ES6+ syntax.`,
  maxTokens: 300,
})
```
### Concise Responder
```ts
const result = await TextGeneration.generate('Explain quantum entanglement', {
  systemPrompt: 'You are a science communicator. Keep all responses under 3 sentences.',
  maxTokens: 100,
  temperature: 0.3,
})
```
### JSON Output
```ts
const result = await TextGeneration.generate('List 3 popular programming languages', {
  systemPrompt: `You are a helpful assistant. Always respond in valid JSON format.
Example: {"items": ["item1", "item2"]}`,
  maxTokens: 200,
  temperature: 0.1,
})

const data = JSON.parse(result.text)
```
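Note that `JSON.parse` throws if the model wraps its JSON in prose ("Sure, here you go: …"). For schema-driven generation the StructuredOutput module described below is the robust option; as a lightweight fallback, a sketch of a hypothetical helper (not an SDK API) that pulls the first JSON object out of a mixed response:

```ts
// Hypothetical helper, not part of the SDK: extract and parse the first
// top-level JSON object in a response that may contain surrounding prose.
function extractFirstJsonObject(text: string): unknown {
  const start = text.indexOf('{')
  const end = text.lastIndexOf('}')
  if (start === -1 || end <= start) return null
  try {
    // Parse only the span between the first '{' and the last '}'.
    return JSON.parse(text.slice(start, end + 1))
  } catch {
    return null // span was not valid JSON
  }
}
```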
### Persona
```ts
const result = await TextGeneration.generate('Tell me about space exploration', {
  systemPrompt: `You are Captain Nova, an enthusiastic space explorer.
You speak with excitement about space and use space-related analogies.
Keep responses brief and engaging.`,
  maxTokens: 200,
  temperature: 0.8,
})
```
## System Prompt Tips
| Technique | Example | Effect |
|---|---|---|
| Role definition | "You are a helpful tutor" | Sets personality |
| Output format | "Respond in JSON" | Controls format |
| Length constraint | "Keep responses under 2 sentences" | Controls length |
| Tone | "Be professional and formal" | Sets communication style |
| Constraints | "Never discuss politics" | Limits topics |
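These techniques compose: a single system prompt can set role, format, length, and constraints together. A minimal sketch, using a hypothetical `buildSystemPrompt` helper (not an SDK API) to assemble the pieces from the table:

```ts
// Hypothetical helper, not part of the SDK: compose the techniques from
// the table above into one system prompt string.
function buildSystemPrompt(opts: {
  role: string           // role definition, e.g. 'You are a helpful tutor.'
  format?: string        // output format instruction
  length?: string        // length constraint
  tone?: string          // tone instruction
  constraints?: string[] // things the model must not do
}): string {
  const parts = [opts.role]
  if (opts.format) parts.push(opts.format)
  if (opts.length) parts.push(opts.length)
  if (opts.tone) parts.push(opts.tone)
  for (const c of opts.constraints ?? []) parts.push(c)
  return parts.join(' ')
}

const systemPrompt = buildSystemPrompt({
  role: 'You are a helpful tutor.',
  format: 'Respond in JSON.',
  length: 'Keep responses under 2 sentences.',
  constraints: ['Never discuss politics.'],
})
```

The resulting string can be passed directly as the `systemPrompt` option to `TextGeneration.generate`.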
## Structured Output with System Prompts
For type-safe JSON generation, combine system prompts with the StructuredOutput module:
```ts
import { TextGeneration, StructuredOutput } from '@runanywhere/web'

const schema = JSON.stringify({
  type: 'object',
  properties: {
    name: { type: 'string' },
    ingredients: { type: 'array', items: { type: 'string' } },
    prepTime: { type: 'number' },
  },
  required: ['name', 'ingredients', 'prepTime'],
})

const systemPrompt = StructuredOutput.getSystemPrompt(schema)

const result = await TextGeneration.generate('Suggest a quick pasta recipe', {
  systemPrompt,
  maxTokens: 300,
  temperature: 0.3,
})

const extracted = StructuredOutput.extractJson(result.text)
if (extracted) {
  const recipe = JSON.parse(extracted)
  console.log(recipe.name, recipe.ingredients, recipe.prepTime)
}
```
## Best Practices
Keep system prompts concise. Longer system prompts consume more of the model’s context window,
leaving less room for the user’s prompt and the response.
- Be specific — “You are a Python expert” is better than “You are helpful”
- Set output format early — Place format instructions at the start of the system prompt
- Use examples — Show the model what good output looks like
- Set constraints — Tell the model what NOT to do (e.g., “Do not include disclaimers”)
- Keep it short — Every token in the system prompt reduces available context
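To keep an eye on that token budget, a common rule of thumb (an assumption, not an SDK API) is that English text averages roughly four characters per token. A minimal sketch for sanity-checking a system prompt's size before sending it:

```ts
// Rough heuristic, not an exact tokenizer: English text averages about
// 4 characters per token, so chars / 4 approximates the token count.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4)
}

const systemPrompt = 'You are an expert JavaScript developer. Provide concise code examples.'
const contextWindow = 2048 // model-dependent; example value only

console.log(`System prompt uses ~${estimateTokens(systemPrompt)} of ${contextWindow} tokens`)
```

For precise budgeting you would use the model's own tokenizer, but this estimate is usually close enough to catch a system prompt that has grown too large.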