
System Prompts vs User Messages (And Why It Matters)

I spent two weeks trying to get Claude to write code in a specific style. Nothing worked consistently.

Then I moved my instructions from the user message to the system prompt. Instantly consistent. Every response matched my style perfectly.

Same instructions. Different message type. Completely different results. Most people don’t know there’s a difference.

The Three Message Types

System messages set the AI’s fundamental behavior, including its personality, constraints, and output format. The AI treats these as core instructions to follow regardless of what happens in the conversation.

User messages contain your actual questions or requests. This is what most people focus all their attention on when they’re crafting prompts.

Assistant messages are the AI’s previous responses in the conversation. These maintain conversation context so the AI remembers what it said earlier.

When you use ChatGPT’s web interface, you only see user and assistant messages going back and forth. The system prompt is completely hidden from view. But when you use the API directly, you control all three message types and can set the system prompt to whatever you want.
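Here’s a minimal sketch of all three message types in a single API call, using the OpenAI Python SDK. The model name and message contents are placeholders; the structure is what matters:

```python
# Minimal sketch: all three message types in one request.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: any chat model works here
    messages=[
        # System: core behavior, hidden from you in web interfaces
        {"role": "system", "content": "You are a concise assistant."},
        # User: the actual request
        {"role": "user", "content": "What is a system prompt?"},
        # Assistant: a previous response, kept to maintain context
        {"role": "assistant", "content": "It sets the model's core behavior."},
        # The newest user turn
        {"role": "user", "content": "How is that different from a user message?"},
    ],
)
print(response.choices[0].message.content)
```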

Why System Prompts Matter More

The AI treats system prompts as defining its core behavior, like hard constraints that can’t be violated. User messages are treated more like requests or suggestions that the AI can be flexible about if it thinks that serves you better.

Here’s a concrete example: Your system prompt says “always respond in exactly 3 sentences, no more, no less.” Then you ask in a user message “explain quantum computing.” The AI will explain quantum computing in exactly 3 sentences even if that’s completely inadequate for the topic, because the system prompt is a hard constraint it must obey.

Put that same “respond in 3 sentences” instruction in the user message instead? The AI treats it as a friendly suggestion. It might give you 5 sentences if it genuinely thinks that provides a better answer to your question.
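Here’s what the two placements look like side by side, reusing the same SDK (model name is again a placeholder). The wording of the constraint is identical; only the role carrying it changes:

```python
from openai import OpenAI

client = OpenAI()

def ask(messages: list[dict]) -> str:
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content

# Placement A: hard constraint in the system prompt
strict = ask([
    {"role": "system", "content": "Always respond in exactly 3 sentences, no more, no less."},
    {"role": "user", "content": "Explain quantum computing."},
])

# Placement B: same instruction inside the user message, treated as a suggestion
loose = ask([
    {"role": "user", "content": "Explain quantum computing in exactly 3 sentences."},
])
```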

What Goes Where

System prompts should contain fundamental behavior rules, required output formats, hard constraints that never change, the AI’s role or persona, and your style guidelines. Examples: “Always be concise and direct, never apologize unnecessarily.” “Respond only in valid JSON with no additional commentary.” “You are a senior Python developer with 10 years of experience.”

User messages should contain the actual question you’re asking, specific context for this particular request, concrete examples of what you want, and one-time instructions that only apply right now. Examples: “Write a function to validate email addresses.” “Make sure it handles plus signs in the local part and subdomains properly.”

The wrong approach is putting everything into user messages. This is inefficient because you repeat the same instructions with every single query, wasting tokens and making your prompts harder to maintain.

The right approach is letting the system prompt handle all behavior and style requirements while user messages stay focused on the specific request. The behavior instructions then persist across the entire conversation, and each user message stays clean and minimal.

The conversation pattern keeps the system prompt at the top of the message array, then appends user and assistant message exchanges after it. The AI sees the full conversation history with every request, but the system prompt always takes precedence.
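In code, that pattern is just a list with the system prompt pinned at index 0. A rough sketch (the helper and prompt text are illustrative, not from any particular library):

```python
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = "You are a senior Python developer. Be concise. Use type hints."
history: list[dict] = [{"role": "system", "content": SYSTEM_PROMPT}]

def chat(user_text: str) -> str:
    # Append the new user turn, send the full history, record the reply.
    history.append({"role": "user", "content": user_text})
    resp = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# The behavior rules ride along automatically; user turns stay short.
chat("Write a function to validate email addresses.")
chat("Make sure it handles plus signs in the local part.")
```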

Common Issues

Most people never use the APIs directly, which means they’re stuck with whatever default system prompt the provider chose for them. Others put absolutely everything into user messages because they don’t realize system prompts exist or understand how they work. Many developers repeat the same instructions in every single query, wasting thousands of tokens unnecessarily. And almost nobody understands the priority difference between system prompts and user messages until they’ve been bitten by it.

System prompts don’t help when you’re making one-off queries where you’ll never reuse the behavior, your requirements are constantly changing between requests, you’re using web interfaces that don’t expose system prompt control, or the request is so simple that any overhead isn’t worth it.

Priority Order

When instructions conflict with each other, the AI follows this priority hierarchy: System prompt takes highest priority, recent user messages have medium priority, and older conversation history has the lowest priority.

For example, your system prompt says “always be concise and brief,” but then your user message says “explain this in extensive detail with lots of examples.” The AI will lean heavily toward the system prompt’s instruction to be concise, even though you explicitly asked for detail in that specific request.

Common Mistakes

Putting behavior instructions in every user message instead of the system prompt means you’re repeating yourself constantly and wasting tokens. Fix: Move those instructions to the system prompt once and never repeat them.

Making system prompts way too long with paragraphs of instructions kills your context window budget before you even start. Fix: Keep it to one focused paragraph that covers the essentials.

Creating conflicting instructions between system and user messages confuses the AI and produces unpredictable results. Fix: Put hard constraints in the system prompt, specific requests in user messages, and never let them contradict.

Never changing your system prompt even when switching between completely different tasks means you’re using the wrong tool for the job. Fix: Maintain different system prompts for different use cases and swap them appropriately.

Assuming all AI providers work the same way with system prompts leads to broken code when you switch services. Fix: Actually read the provider’s documentation because implementation details vary significantly.
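One concrete difference: OpenAI’s Chat Completions API takes the system prompt as a message with the system role, while Anthropic’s Messages API takes it as a separate top-level system parameter and only allows user and assistant turns in the messages array. A sketch of both, with placeholder model names:

```python
# OpenAI: system prompt lives inside the messages array
from openai import OpenAI

OpenAI().chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=[
        {"role": "system", "content": "Be concise."},
        {"role": "user", "content": "Explain recursion."},
    ],
)

# Anthropic: system prompt is a top-level parameter, not a message
import anthropic

anthropic.Anthropic().messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder
    max_tokens=1024,  # required by this API
    system="Be concise.",
    messages=[{"role": "user", "content": "Explain recursion."}],
)
```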

Templates

Code: “Expert software engineer. Write clean, tested, production-ready code with type hints and docstrings.”
Writing: “Clear, direct writer. Avoid jargon. Active voice. Short sentences. No buzzwords.”
Data: “Data analyst. Precise with numbers. Show work. Say if uncertain. Never hallucinate statistics.”
Debugging: “Ask clarifying questions first. Explain why bugs happen, not just fixes.”
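One way to wire these up is a small lookup that swaps the system prompt per task, which also fixes the mistake above about never changing your system prompt. The helper is a hypothetical sketch, not any particular library’s API:

```python
# Hypothetical helper: one system prompt per task, swapped at call time.
SYSTEM_PROMPTS = {
    "code": "Expert software engineer. Write clean, tested, production-ready "
            "code with type hints and docstrings.",
    "writing": "Clear, direct writer. Avoid jargon. Active voice. "
               "Short sentences. No buzzwords.",
    "data": "Data analyst. Precise with numbers. Show work. Say if uncertain. "
            "Never hallucinate statistics.",
    "debugging": "Ask clarifying questions first. Explain why bugs happen, "
                 "not just fixes.",
}

def messages_for(task: str, user_text: str) -> list[dict]:
    return [
        {"role": "system", "content": SYSTEM_PROMPTS[task]},
        {"role": "user", "content": user_text},
    ]

# Usage: messages_for("code", "Write a function to validate email addresses.")
```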

Bottom Line

System prompts control how the AI behaves across your entire conversation. User messages control what specific question you’re asking right now.

Most developers completely ignore system prompts, cram absolutely everything into user messages, and then wonder why their results are so inconsistent.

Separate your concerns properly: Behavior rules, style guidelines, and hard constraints belong in the system prompt. Specific questions and one-time requests belong in user messages.

When you structure things correctly, your user messages become dramatically simpler and easier to read. Your results become far more consistent because behavior is enforced at the system level. Your token usage drops significantly because you stop repeating the same instructions over and over.

The best prompt engineering isn’t just about what you ask the AI. It’s about how you structure the conversation to set up the AI for success.

