Overview
System prompts define how the AI behaves: its personality, constraints, and the format of its responses. They are set once and apply to all subsequent generations until changed.
Basic Usage
```typescript
import { RunAnywhere } from '@runanywhere/core'

const result = await RunAnywhere.generate('What is the best programming language?', {
  maxTokens: 200,
  systemPrompt:
    'You are a helpful coding assistant. Be concise and practical. Answer in 2-3 sentences.',
})
```
Best Practices
Be Specific
```typescript
// ❌ Too vague
const vague = {
  systemPrompt: 'Be helpful.',
}

// ✅ Specific and clear
const specific = {
  systemPrompt: `You are a senior software engineer helping junior developers.
- Give practical, production-ready code examples
- Explain your reasoning briefly
- Mention potential pitfalls when relevant`,
}
```
Define Output Format
```typescript
// JSON output
const jsonPrompt = {
  systemPrompt: `You are a data extraction assistant.
Always respond with valid JSON in this format:
{
  "entities": ["entity1", "entity2"],
  "sentiment": "positive" | "negative" | "neutral",
  "confidence": 0.0-1.0
}
Do not include any text outside the JSON.`,
}

// List format
const listPrompt = {
  systemPrompt: `You are a task breakdown assistant.
Always respond with a numbered list of actionable steps.
Keep each step to one sentence.
Include 3-7 steps per response.`,
}
```
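Even with a strict format instruction, models sometimes wrap the JSON in extra text. A defensive parser on the client side is a reasonable backstop. This is a sketch, not part of the SDK: `parseExtraction` and the `ExtractionResult` type (mirroring the prompt above) are illustrative names.

```typescript
// Shape promised by the system prompt above (illustrative type, not an SDK export).
interface ExtractionResult {
  entities: string[]
  sentiment: 'positive' | 'negative' | 'neutral'
  confidence: number
}

// Extract the first '{'..'}' span and parse it, tolerating stray text
// the model may emit around the JSON body despite the instructions.
function parseExtraction(raw: string): ExtractionResult | null {
  const start = raw.indexOf('{')
  const end = raw.lastIndexOf('}')
  if (start === -1 || end <= start) return null
  try {
    const parsed = JSON.parse(raw.slice(start, end + 1))
    // Minimal shape check before trusting the result
    if (!Array.isArray(parsed.entities) || typeof parsed.confidence !== 'number') {
      return null
    }
    return parsed as ExtractionResult
  } catch {
    return null
  }
}
```

Returning `null` instead of throwing lets the caller decide whether to retry the generation, perhaps with a reminder about the required format.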
Set Constraints
```typescript
const constrained = {
  systemPrompt: `You are a customer support assistant for TechCorp.
Rules:
- Only answer questions about our products
- Never discuss competitors
- If you don't know something, say "I'll connect you with a human agent"
- Keep responses under 100 words
- Be friendly but professional`,
}
```
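Constraints like the 100-word cap are advisory; the model can exceed them. A lightweight client-side check can catch overruns before a response reaches the user. This is a sketch, and `withinWordLimit` is an illustrative helper, not an SDK function:

```typescript
// Counts whitespace-separated words. The prompt's cap is advisory only,
// so enforce it client-side before displaying the response.
function withinWordLimit(text: string, maxWords: number): boolean {
  const words = text.trim().split(/\s+/).filter(Boolean)
  return words.length <= maxWords
}
```

If the check fails, you might truncate the text or regenerate with a lower `maxTokens`.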
Example Prompts
Coding Assistant
```typescript
const codingAssistant = await RunAnywhere.generate('How do I read a file in Python?', {
  maxTokens: 300,
  systemPrompt: `You are an expert programmer helping developers.
- Provide working code examples
- Use modern best practices
- Include error handling
- Add brief comments explaining key parts
- Mention the Python version if relevant`,
})
```
Creative Writer
```typescript
const creativeWriter = await RunAnywhere.generate('Write a story opening', {
  maxTokens: 200,
  temperature: 1.0,
  systemPrompt: `You are a creative fiction writer.
- Use vivid, sensory descriptions
- Create engaging hooks
- Vary sentence length and structure
- Show, don't tell`,
})
```
Data Analyst
```typescript
const dataAnalyst = await RunAnywhere.generate(
  'Analyze these sales numbers: Q1: $10M, Q2: $12M, Q3: $9M, Q4: $15M',
  {
    maxTokens: 250,
    temperature: 0.3,
    systemPrompt: `You are a business analyst.
- Focus on trends and patterns
- Calculate relevant metrics (growth rate, average, etc.)
- Provide actionable insights
- Use bullet points for clarity`,
  }
)
```
Language Tutor
```typescript
const languageTutor = await RunAnywhere.generate(
  'How do I say "Where is the train station?" in Japanese?',
  {
    maxTokens: 150,
    systemPrompt: `You are a language learning assistant.
- Provide the translation with pronunciation guide
- Break down the grammar
- Give a literal translation
- Include one related useful phrase`,
  }
)
```
Technical Support
```typescript
const techSupport = await RunAnywhere.generate(
  'My app keeps crashing when I click the submit button',
  {
    maxTokens: 300,
    systemPrompt: `You are a technical support specialist.
- Ask clarifying questions if needed
- Provide step-by-step troubleshooting
- Start with the most common solutions
- Explain why each step might help`,
  }
)
```
Multi-Turn Conversations
For multi-turn conversations, you can include conversation history in the prompt:
```typescript
interface Message {
  role: 'user' | 'assistant'
  content: string
}

async function chat(messages: Message[], newMessage: string): Promise<string> {
  // Build conversation history
  const history = messages
    .map((m) => `${m.role === 'user' ? 'User' : 'Assistant'}: ${m.content}`)
    .join('\n')

  const prompt = history ? `${history}\nUser: ${newMessage}\nAssistant:` : newMessage

  const result = await RunAnywhere.generate(prompt, {
    maxTokens: 200,
    systemPrompt: `You are a helpful assistant engaged in a conversation.
Respond naturally and maintain context from previous messages.
Be concise but complete.`,
  })
  return result.text
}

// Usage
const messages: Message[] = []

const response1 = await chat(messages, "What's TypeScript?")
messages.push({ role: 'user', content: "What's TypeScript?" })
messages.push({ role: 'assistant', content: response1 })

const response2 = await chat(messages, 'How does it compare to JavaScript?')
// Response will have context from the previous exchange
```
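Concatenating the full history grows the prompt without bound. One common mitigation is to keep only the most recent turns that fit a rough budget. The sketch below assumes a character budget as a crude proxy for tokens (tune it to your model's context window), and redeclares `Message` so the snippet is self-contained:

```typescript
interface Message {
  role: 'user' | 'assistant'
  content: string
}

// Keep the newest messages whose combined length fits within maxChars,
// dropping the oldest turns first. Characters are a crude proxy for
// tokens but avoid a tokenizer dependency.
function trimHistory(messages: Message[], maxChars: number): Message[] {
  const kept: Message[] = []
  let total = 0
  for (let i = messages.length - 1; i >= 0; i--) {
    total += messages[i].content.length
    if (total > maxChars) break
    kept.unshift(messages[i])
  }
  return kept
}
```

Calling `trimHistory(messages, budget)` before building `history` lets old turns fall away instead of overflowing the context.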
Prompt Templates
Create reusable prompt templates:
```typescript
const promptTemplates = {
  codeReview: (language: string) => `You are a senior ${language} developer.
Review code for:
- Bugs and potential issues
- Performance problems
- Best practice violations
- Security concerns
Provide specific suggestions with examples.`,

  summarize: (style: 'brief' | 'detailed') => `You are a summarization assistant.
Create a ${style === 'brief' ? 'one-paragraph' : 'comprehensive'} summary.
${style === 'detailed' ? 'Include key points as bullet points.' : 'Focus on the main takeaway.'}`,

  translate: (targetLanguage: string) => `You are a professional translator.
Translate text to ${targetLanguage}.
- Preserve the original meaning and tone
- Use natural expressions in the target language
- Keep formatting (lists, paragraphs) intact`,
}

// Usage
const codeSnippet = '/* code to review */'
const review = await RunAnywhere.generate(codeSnippet, {
  maxTokens: 400,
  systemPrompt: promptTemplates.codeReview('TypeScript'),
})
```
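Because templates are plain functions, you can unit-test the wording of each variant before wiring them into generation calls. A quick sketch, duplicating the `summarize` template so the example runs on its own:

```typescript
// Duplicate of the summarize template above, so this check is self-contained.
const summarize = (style: 'brief' | 'detailed') => `You are a summarization assistant.
Create a ${style === 'brief' ? 'one-paragraph' : 'comprehensive'} summary.
${style === 'detailed' ? 'Include key points as bullet points.' : 'Focus on the main takeaway.'}`

// Each style variant should contain its distinguishing instruction.
console.log(summarize('brief').includes('one-paragraph')) // true
console.log(summarize('detailed').includes('bullet points')) // true
```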
Tips
- Keep system prompts concise. Long system prompts consume tokens and slow down generation, so focus on the most important instructions.
- Test with different temperatures. Lower temperatures (0.1-0.3) work better for factual or structured outputs; higher temperatures (0.7-1.2) work better for creative tasks.
- System prompts are suggestions, not guarantees. Models may not always follow instructions perfectly, especially with complex or conflicting rules, so test thoroughly.