My first prompt to Claude Opus was embarrassingly bad: “Write code for my app.”
The response was 200 lines of Python for a generic todo app, when I actually needed a React component for user authentication. Complete waste of time and tokens.
So I rewrote it: “Write a React authentication component using Firebase Auth. Include email/password login, error handling, and loading states. Use TypeScript with functional components and hooks.”
Got a perfect result on the first try.
The difference between a bad prompt and a great prompt comes down to specificity. Bad prompts get you generic garbage that you can’t use, while great prompts get you exactly what you need. Here’s everything I’ve learned about crafting prompts that actually work.
The Anatomy of a Great Prompt
1. Clear Objective: Don’t say “Write about climate change.” Instead, be specific: “Write a 500-word summary of the three most significant climate change impacts on ocean ecosystems, with real examples and current solutions being implemented.”
2. Context: Give the AI information about your audience. Something like: “This is for C-suite executives with limited technical background who need to understand how generative AI will impact manufacturing operations.”
3. Role Definition: Assign a persona that shapes the response: “You are a senior cloud security architect with 15 years of experience in financial services. Review this AWS infrastructure and prioritize vulnerabilities by severity and compliance requirements.”
4. Format Specification: Be explicit about structure: “Provide: 1) Sentiment summary in 3 sentences, 2) Key themes as bullets, 3) Recommendations numbered and prioritized, 4) Risk assessment with Low/Med/High rating plus explanation.”
The Six Principles of Effective Prompts
1. Specificity: Don’t say “improve my code.” Instead, be specific: “Optimize this JavaScript function that processes 1 million user records. Identify the bottlenecks and show me the before and after code with Big O complexity analysis.”
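To make the before/after concrete, here’s the kind of answer a specificity-driven prompt like that tends to produce. The functions and data shapes below are invented for illustration — a toy version of “processing user records” — not code from the article:

```javascript
// Hypothetical "before": O(n * m) — re-scans every order for every user.
function ordersPerUserSlow(users, orders) {
  return users.map((user) => ({
    id: user.id,
    orders: orders.filter((o) => o.userId === user.id).length,
  }));
}

// "After": O(n + m) — one pass to build a Map of counts, one pass to read it.
function ordersPerUserFast(users, orders) {
  const counts = new Map();
  for (const o of orders) {
    counts.set(o.userId, (counts.get(o.userId) || 0) + 1);
  }
  return users.map((user) => ({
    id: user.id,
    orders: counts.get(user.id) || 0,
  }));
}
```

A vague “improve my code” prompt rarely surfaces this kind of algorithmic fix; naming the data volume and asking for Big O analysis is what steers the model toward it.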
2. Structure: Well-structured prompts produce well-structured responses. Try: “Evaluate this proposal. For each criterion (market viability, financial feasibility, competitive advantage, implementation timeline), provide an assessment, supporting evidence, and specific recommendations.”
3. Constraints: Use constraints to prevent the AI from rambling. For example: “Explain quantum computing using 3 everyday analogies, no equations, 1 practical application, under 300 words, and end with a thought-provoking question.”
4. Examples: Show the AI exactly what you want instead of just describing it. Provide a format example, then say “Now convert these requirements using this format.”
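A few-shot prompt built on this principle might look like the following. The requirements and format here are made up to show the pattern:

```text
Here is the format I want:

Input: "Users should be able to reset their password via email."
Output:
- As a user, I want to reset my password via email so that I can regain access to my account.
- Acceptance: reset link expires after 1 hour; success message is shown whether or not the email exists.

Now convert these requirements using this format:
"Admins need to export monthly usage reports as CSV."
```

One worked example usually beats a paragraph of format description, because the model imitates structure far more reliably than it follows abstract instructions.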
5. Iteration: Your first prompt will never be perfect, and that’s okay. I once asked for an article summary and got a rambling 500-word response. Then I added “5 bullets, technical audience” but it was still inconsistent. Finally I specified: “Main Thesis (1 sentence), Key Arguments (3 bullets), Implications (1 bullet) for distributed systems audience.” Took three iterations to get it right, which is completely normal.
6. Precision: Don’t say “make this better.” Be precise about what you want: “Refactor this code to improve readability with descriptive variable names, improve performance by reducing complexity, and improve maintainability with inline comments. Preserve all existing functionality.”
Advanced Techniques
Chain-of-Thought: Ask the AI to show its work by saying something like: “Solve this step by step and explain your reasoning at each stage. Show all your work.”
Negative Instructions: Tell the AI what to avoid as well as what to include: “Write a product description that includes sound quality, battery life, comfort, and price. Avoid technical jargon, marketing fluff, competitor comparisons, and emojis.”
Conditional Logic: Set up different response paths based on different scenarios: “If it’s a syntax error, provide the location, explanation, and corrected code. If it’s a runtime error, explain the cause, provide debug steps, and suggest prevention. If it’s a logic error, trace the execution, identify where it diverges from expected behavior, and suggest test cases.”
Meta-Prompting: Ask the AI to help you write better prompts: “What information should I include in my prompt to get comprehensive API documentation that covers all the important aspects?”
Common Mistakes to Avoid
1. Assuming Context: Don’t just say “Fix the bug.” Be specific: “This is a Node.js Express API endpoint that throws a 500 error when users enter special characters. Expected behavior: sanitize the input and return a 400 error with a helpful message. Node 20, Express 4.18. Fix the bug and explain the security vulnerability.”
2. Multiple Unrelated Questions: I once asked “Explain Docker, Kubernetes, database optimization, and microservices strategy” in one prompt and got back a 2000-word essay that touched on nothing in depth. Stick to one question per prompt.
3. Unclear Success Criteria: Don’t say “write better code.” Instead, be explicit: “Refactor this code to achieve: cyclomatic complexity under 10, test coverage above 80%, all functions under 50 lines, and DRY principles applied. Show me the metrics before and after.”
4. Information Overload: If you need to provide a lot of context, structure it clearly: “Background (2 sentences), Current Situation (3 bullets), Question.”
5. Ignoring Token Limits: Prompts that exceed the model’s context window get truncated, and whatever falls past the cutoff is silently lost. Be concise and focused.
Testing and Measuring Prompt Quality
Test your prompts against five criteria:
1. Consistency: Run the prompt multiple times. Do you get similar results?
2. Completeness: Does the output address every aspect of what you asked?
3. Accuracy: Verify any facts or claims.
4. Usefulness: Can you use the output without heavy modification?
5. Efficiency: Can you get the same result with fewer tokens?
Building a Prompt Library
I keep a prompts/ directory full of markdown files with prompts that actually work. For each one, I note the use case, success rate, and any edge cases I’ve discovered. It’s boring work, but it saves me hours every single week. I’ve used some of these prompts hundreds of times.
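For illustration, one entry in such a library might look like this. The prompt, use case, and numbers below are invented, not entries from my actual library:

```markdown
# Prompt: API error triage

**Use case:** paste a stack trace, get a ranked list of likely causes.
**Success rate:** roughly 8/10 on Node.js services; weaker on minified frontend traces.
**Edge cases:** fails when the trace mixes two unrelated errors — split them first.

---

You are a senior backend engineer. Given the stack trace below, list the three
most likely root causes ranked by probability, with one suggested fix each.
```

The metadata is the point: without the use case and edge-case notes, six months later you won’t remember when the prompt works and when it doesn’t.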
Practical Examples
Data Analysis: Don’t say “Analyze this data.” Instead, be specific: “Analyze our Q3 2024 sales data and provide: 1) Three key trends with percentages, 2) Any anomalies or outliers with explanations, 3) Q4 forecasts based on the patterns, 4) Three prioritized action recommendations. Audience is sales leadership. Format as an executive summary.”
Content Creation: Don’t say “Write a blog post about AI.” Get specific: “Write a 150-word introduction on how transformer architecture has impacted natural language processing, for a technical audience. Start with a surprising statistic or question as a hook. Provide context on what transformers are in 2 sentences. Explain the value proposition - what readers will learn. Tone should be authoritative but accessible. Naturally incorporate ‘transformer architecture’ and ‘NLP’ for SEO.”
Problem Solving: Don’t say “Help me debug this.” Be precise: “Debug this JavaScript median function. Current behavior: gives incorrect results for even-length arrays. Expected: handles all array lengths correctly. Failing test: [1,2,3,4] should return 2.5 but returns 2. Identify the bug, explain why it’s happening, provide corrected code, and suggest additional test cases to prevent regression.”
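The prompt above works because the failing test pins down the bug exactly. Here’s a plausible reconstruction of that bug and its fix — the original function isn’t shown in the article, so the buggy version is an assumption consistent with the described behavior:

```javascript
// Buggy reconstruction: always returns a single element, so for even-length
// arrays it picks the lower middle value instead of averaging the two middles.
function medianBuggy(values) {
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.floor((sorted.length - 1) / 2)]; // [1,2,3,4] -> 2, not 2.5
}

// Fixed: average the two middle elements when the length is even.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}
```

Because the prompt states the exact failing input and expected output, the model can verify its own fix against the test case instead of guessing what “incorrect results” means.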
Conclusion
The principles are simple: Be specific. Saying “write code” gets you nothing useful, but “React auth component with Firebase, email/password login, error handling, loading states, TypeScript, and hooks” gets you exactly what you need.
Provide context. Tell the AI about your audience, your constraints, and the format you want. Don’t make it guess.
Iterate. Your first prompt will suck, and that’s fine. Fix it based on the results you get.
Keep a folder of prompts that work, then copy, modify, and reuse them. This alone will save you hours.
The biggest mistake I see is people writing vague prompts, getting bad results, and blaming the AI. But the AI did exactly what you asked - you just asked for the wrong thing.
My personal rule: if the output isn’t what I wanted, it’s my prompt’s fault. I rewrite it and try again.
Great prompts aren’t fancy or complicated. They’re just clear, specific, and well-structured.
Start with the worst prompt that could possibly work, see what breaks, fix it, and repeat until you get consistent results. Then save that prompt and reuse it forever.
The best prompt is always the simplest one that works.