System prompts define the AI’s behavior, personality, and constraints. They’re prepended to every generation request and help ensure consistent, focused responses.
```typescript
import { TextGeneration } from '@runanywhere/web'

const result = await TextGeneration.generate('What should I cook tonight?', {
  systemPrompt: 'You are a helpful chef. Suggest recipes that are quick and easy to make.',
  maxTokens: 200,
})
```
```typescript
const result = await TextGeneration.generate('How do I sort an array in JavaScript?', {
  systemPrompt: `You are an expert JavaScript developer.
Provide concise code examples with brief explanations.
Use modern ES6+ syntax.`,
  maxTokens: 300,
})
```
```typescript
const result = await TextGeneration.generate('Explain quantum entanglement', {
  systemPrompt: 'You are a science communicator. Keep all responses under 3 sentences.',
  maxTokens: 100,
  temperature: 0.3,
})
```
```typescript
const result = await TextGeneration.generate('Tell me about space exploration', {
  systemPrompt: `You are Captain Nova, an enthusiastic space explorer.
You speak with excitement about space and use space-related analogies.
Keep responses brief and engaging.`,
  maxTokens: 200,
  temperature: 0.8,
})
```
Keep system prompts concise. Longer system prompts consume more of the model’s context window, leaving less room for the user’s prompt and the response.
- **Be specific** — “You are a Python expert” is better than “You are helpful”
- **Set output format early** — Place format instructions at the start of the system prompt
- **Use examples** — Show the model what good output looks like
- **Set constraints** — Tell the model what NOT to do (e.g., “Do not include disclaimers”)
- **Keep it short** — Every token in the system prompt reduces available context
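Putting these tips together, a system prompt might be assembled like the sketch below. The prompt text and the composition via `join` are illustrative, not a required pattern; the `generate` call follows the same shape as the earlier examples.

```typescript
// Hypothetical system prompt applying the best practices above:
// format instruction first, then a short example, then an explicit constraint.
const systemPrompt = [
  // Set output format early
  'Answer in plain text with at most two sentences.',
  // Use examples: show the model what good output looks like
  'Example: Q: What is a closure? A: A closure is a function that remembers the variables in scope where it was defined.',
  // Set constraints: tell the model what NOT to do
  'Do not include disclaimers or apologies.',
].join('\n')

// Then pass it as in the examples above:
// const result = await TextGeneration.generate('What is hoisting?', {
//   systemPrompt,
//   maxTokens: 150,
// })
```

Keeping the whole prompt to a few short lines preserves context-window budget while still covering format, example, and constraints.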