System prompts set the context and behavior for your AI model. Use them to create specialized assistants, enforce response formats, or constrain the model’s persona.

Basic Usage

final result = await RunAnywhere.generate(
  'Write a function to sort an array',
  options: LLMGenerationOptions(
    maxTokens: 300,
    systemPrompt: 'You are a helpful coding assistant. Write clean, well-commented code.',
  ),
);

Common Patterns

Coding Assistant

const codingPrompt = '''
You are an expert programmer. Follow these rules:
- Write clean, readable code with comments
- Use best practices and design patterns
- Explain your approach briefly before the code
- Handle edge cases
''';

final result = await RunAnywhere.generate(
  'Implement a binary search in Dart',
  options: LLMGenerationOptions(
    systemPrompt: codingPrompt,
    maxTokens: 400,
  ),
);

Customer Support Bot

const supportPrompt = '''
You are a friendly customer support agent for TechCorp.
- Be helpful and empathetic
- Keep responses concise (2-3 sentences)
- If you don't know something, say so
- Never make up information about products
''';
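The support prompt is passed the same way as the coding prompt. The user message and `maxTokens` value below are illustrative choices, not requirements:

```dart
final result = await RunAnywhere.generate(
  'My order arrived damaged. What should I do?',
  options: LLMGenerationOptions(
    systemPrompt: supportPrompt,
    maxTokens: 150,  // a small budget matches the "2-3 sentences" rule
  ),
);
```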

JSON Output

const jsonPrompt = '''
You are a data extraction assistant.
Always respond with valid JSON only, no additional text.
Extract the requested information into the specified format.
''';

final result = await RunAnywhere.generate(
  'Extract: "John Smith, 25 years old, engineer at Google"',
  options: LLMGenerationOptions(
    systemPrompt: jsonPrompt,
    maxTokens: 100,
    temperature: 0.1,  // Lower temperature for consistent format
  ),
);

// Parse the JSON response (requires `import 'dart:convert';`)
final data = jsonDecode(result.text);
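Even with a low temperature, models occasionally emit invalid JSON or wrap it in extra text, so it is worth guarding the parse. A minimal sketch (the helper name is ours, not part of the SDK):

```dart
import 'dart:convert';

/// Attempts to parse [text] as a JSON object, returning null on failure
/// instead of throwing, so callers can retry or fall back gracefully.
Map<String, dynamic>? tryParseJson(String text) {
  try {
    return jsonDecode(text) as Map<String, dynamic>;
  } on FormatException {
    // The model ignored the "JSON only" instruction this time.
    return null;
  }
}
```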

Character/Persona

const personaPrompt = '''
You are Captain Nova, a space explorer from the year 3000.
- Speak with enthusiasm about space and technology
- Reference your adventures across the galaxy
- Use occasional space-themed expressions
- Stay in character at all times
''';
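A persona prompt often pairs well with a higher temperature for more varied, in-character replies; the value below is a starting point to tune, not a recommendation from the SDK:

```dart
final result = await RunAnywhere.generate(
  'What do you think of modern rocket engines?',
  options: LLMGenerationOptions(
    systemPrompt: personaPrompt,
    temperature: 0.8,  // looser sampling suits creative, in-character output
    maxTokens: 200,
  ),
);
```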

Tips for Effective System Prompts

Be specific. Vague prompts lead to inconsistent results. Specify exactly what you want:
❌ “Be helpful”
✅ “Provide step-by-step instructions. Use bullet points. Keep each step under 20 words.”

Provide an example. Show the model what you expect:
Format responses like this example: Q: What is 2+2? A: **4** — Two plus two equals four.

Set boundaries. Tell the model what NOT to do:
- Do not provide medical advice
- Do not generate harmful content
- If asked about topics outside your expertise, politely decline

Keep it concise. Shorter prompts use fewer tokens and leave more room for generation:
❌ Long, rambling instructions with repetition
✅ Clear, bullet-pointed rules
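The vague-versus-specific contrast can be kept side by side in code; both constants below are illustrative, not shipped with the SDK:

```dart
// Too vague: output format and length will vary between calls.
const vaguePrompt = 'Be helpful.';

// Specific: format, structure, and length are all pinned down.
const specificPrompt = '''
Provide step-by-step instructions.
Use bullet points.
Keep each step under 20 words.
''';
```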

Reusable Prompt Class

Create a helper class for managing prompts:

class SystemPrompts {
  static const String coding = '''
You are an expert programmer. Write clean, well-documented code.
''';

  static const String creative = '''
You are a creative writing assistant. Be imaginative and engaging.
''';

  static String forDomain(String domain) => '''
You are an expert in $domain. Provide accurate, helpful information.
''';
}

// Usage
final result = await RunAnywhere.generate(
  prompt,
  options: LLMGenerationOptions(
    systemPrompt: SystemPrompts.coding,
  ),
);
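The `forDomain` helper builds a prompt at runtime via string interpolation; the domain string below is just an example:

```dart
final result = await RunAnywhere.generate(
  'What are good dietary sources of vitamin D?',
  options: LLMGenerationOptions(
    systemPrompt: SystemPrompts.forDomain('nutrition'),
  ),
);
```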

See Also