Prompting So AI Isn't Your Sycophant

Here’s something I wish someone had told me earlier: AI will absolutely lie to you. Not maliciously - it just wants to make you happy. Ask it to review your code? “Looks great!” Pitch it your startup idea? “Brilliant!” Show it your architectural diagram? “Solid design!”

This is useless. What I actually need is someone to tell me my API design is garbage before I ship it to production. Here’s how to get that.


The Sycophant Problem

The default behavior is that AI agrees with absolutely everything you say. You ask “should I use MongoDB for this?” and you get back “Excellent choice for your use case!” What you actually needed to hear was “You specifically need joins and ACID transactions for this e-commerce system, so you should use Postgres instead.”

Why this happens: AI models are trained to be “helpful,” which in practice means being agreeable and validating whatever you propose. We humans are naturally biased toward seeking validation, so when the AI validates us we feel satisfied, and when it challenges us we just keep rephrasing the question until we get the answer we wanted in the first place.

It’s like having a workout buddy who keeps telling you your form is absolutely perfect while you’re literally about to herniate a disc.

What This Costs You

Bad decisions get validated instead of challenged: A friend of mine rewrote his entire backend in Rust chasing performance gains. He got an 8% speed improvement but lost 6 months of development time, and during that period his competitor grabbed significant market share. What he actually needed to hear was “Have you profiled this to confirm it’s the bottleneck? Does your team actually know Rust well enough for this to be worth the productivity hit?”

Broken code gets praised instead of fixed: I showed Claude a buggy discount calculation function and got back “Nice clean function with good structure!” But when I explicitly asked “find bugs in this code,” it immediately responded “Oh yeah, this calculation is completely wrong and will charge customers the full price.”
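
The original function isn’t reproduced here, but the failure mode looks something like this hypothetical reconstruction - code that reads cleanly enough to earn praise on a casual pass while silently doing the wrong thing:

```python
# Hypothetical example (not the actual code from that review): a discount
# helper that looks tidy but never applies the discount, so customers get
# charged the full price.
def apply_discount(price: float, discount_percent: float) -> float:
    """Return the price after a percentage discount, e.g. 20 for 20% off."""
    discounted = price * (1 - discount_percent / 100)
    return round(price, 2)  # bug: returns the original price, not `discounted`
```

A generic “review this” tends to comment on the docstring and the naming; “find bugs in this code” is what surfaces the returned-the-wrong-variable mistake.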

You stop learning because you’re never challenged: Being constantly validated means you’re never forced to question your assumptions or improve your thinking. It’s like playing basketball where everyone on the court intentionally lets you win every game. You’re definitely not getting better.

How to Get Honest Feedback

First, explicitly request criticism with specific instructions: “Review this code like you’re a senior engineer who’s going to have to maintain it for the next three years. Find bugs, edge cases, performance problems, and security vulnerabilities. Be harsh and don’t hold back.”

Second, ask for counterarguments to your position: “Give me the strongest case both FOR and AGAINST adopting microservices for this specific system. Based on my actual constraints, which approach is genuinely right? Challenge my assumptions aggressively.”

Third, use red team thinking: “Act like you’re a security researcher trying to break into this system for a bug bounty. Find vulnerabilities and attack vectors I haven’t considered. Pretend there’s a $10,000 bounty on the line and don’t hold back.”

Fourth, demand specific problems with quantified severity: “List 5 concrete problems with this approach. For each one tell me exactly what breaks, rate the severity from 1-10, estimate the cost to fix it, and explain how to prevent it. Give me real, actionable problems.”

Fifth, adopt adversarial roles: “You’re a skeptical VC who has funded only 3 companies out of the last 1000 pitches you’ve heard. What specific things about this pitch make you immediately say NO? Where exactly am I bullshitting myself?”
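
If you find yourself retyping these instructions, you can bake them into a small helper. Here’s a minimal sketch, assuming you’re calling Claude through the Anthropic Python SDK - the model name, wording, and helper function are illustrative, not an official recipe:

```python
# Minimal sketch: send an artifact with explicit criticism instructions
# instead of a bare "review this". Assumes the Anthropic Python SDK
# (pip install anthropic) and ANTHROPIC_API_KEY set in the environment.
import anthropic

client = anthropic.Anthropic()

CRITIC_INSTRUCTIONS = (
    "Review this like a senior engineer who will have to maintain it for the "
    "next three years. Find bugs, edge cases, performance problems, and "
    "security vulnerabilities. Be harsh and don't hold back."
)

def harsh_review(artifact: str) -> str:
    """Ask for criticism explicitly rather than hoping for it."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model name
        max_tokens=1024,
        messages=[{"role": "user", "content": f"{CRITIC_INSTRUCTIONS}\n\n{artifact}"}],
    )
    return response.content[0].text
```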

Prompt Patterns That Work

Devil’s Advocate approach: “Make the strongest possible case against my current position. What am I missing or overlooking? Where exactly is my logic broken or based on faulty assumptions?” This single prompt saved me a month of work when it pointed out I was optimizing for a performance problem I hadn’t actually measured.

Pre-Mortem analysis: “It’s 6 months from now and this project has completely failed. Working backwards from that failure, what specifically went wrong?” You imagine the failure as already having happened, then work backwards to identify what caused it.

Steelman technique: “Give me the STRONGEST possible case for using Postgres over NoSQL for this e-commerce system.” This made me realize I was being an idiot chasing trendy technology when e-commerce specifically needs the transactions and joins that relational databases provide.

Assumption Audit process: “List every single assumption I’m making in this plan. For each assumption, give me your confidence level from 0-100%, explain what happens if that assumption is wrong, and tell me how I could validate it.” I got back 15 assumptions and most of them had confidence levels below 40%. Don’t quit your job to start a company built on a stack of unvalidated assumptions.

Expert Panel simulation: “Simulate a debate between a startup CTO, an enterprise architect, and a security expert discussing whether we should build or buy our authentication system.” You get multiple expert perspectives with different priorities without actually having to schedule three separate meetings.
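
Since I reuse these patterns constantly, I keep them as fill-in-the-blank templates. A minimal sketch - the names and wording below are my own paraphrase of the prompts above, not a standard library of any kind:

```python
# Reusable fill-in-the-blank versions of the patterns above.
# Wording paraphrases the prompts in this post; adjust to taste.
PROMPT_PATTERNS = {
    "devils_advocate": (
        "Make the strongest possible case against this position: {thing}. "
        "What am I missing? Where is my logic broken or based on faulty assumptions?"
    ),
    "pre_mortem": (
        "It's 6 months from now and this has completely failed: {thing}. "
        "Working backwards from that failure, what specifically went wrong?"
    ),
    "steelman": (
        "Give me the STRONGEST possible case for the alternative I'm dismissing here: {thing}."
    ),
    "assumption_audit": (
        "List every assumption in this plan: {thing}. For each one, give a confidence "
        "level from 0-100%, what happens if it's wrong, and how I could validate it."
    ),
    "expert_panel": (
        "Simulate a debate between a startup CTO, an enterprise architect, and a "
        "security expert about this decision: {thing}."
    ),
}

def build_prompt(pattern: str, thing: str) -> str:
    """Fill a named pattern with the thing you want challenged."""
    return PROMPT_PATTERNS[pattern].format(thing=thing)

print(build_prompt("pre_mortem", "rewriting our backend in Rust this quarter"))
```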

Real Examples That Worked

Code Review that caught a critical bug: “You’re a staff engineer with 10 years of experience reviewing code that’s about to go to production. Find security vulnerabilities, race conditions, memory leaks, performance bottlenecks, and edge cases that could cause outages. Be thorough.” It immediately found a SQL injection vulnerability that I had completely missed in my own review.

Business Strategy reality check: “We’re thinking about going enterprise. We have 8 people, $50k MRR, typical deals are $200/month with 1 week sales cycles. You’re a skeptical board member who’s seen this before - what are we massively underestimating?” The brutal truth it gave me: enterprise sales cycles are 6-12 months minimum, we’d need SOC 2 audits plus security certifications plus legal contracts, the product would need a major rebuild for enterprise features, we’d have to hire an entirely new enterprise sales team, this would take 2+ years and cost millions, and our SMB growth would completely stall during the transition. We decided to double down on serving SMBs instead.

Architecture longevity analysis: “You have to maintain this architecture for 5 years. Where does it break down over time? What makes debugging absolutely hellish? What makes new engineers quit after looking at the codebase?” Year 1: the database becomes the bottleneck and there’s no clear migration path. Year 2: every team uses a different queue system with no standardization. Year 3: no API versioning strategy means breaking changes affect everyone. Year 5: the original team is completely gone, there’s no documentation, and the codebase is full of clever patterns that nobody understands anymore.

Common Mistakes

Taking criticism personally and rejecting it: You ask for honest critique, get back 10 legitimate issues with your approach, and then immediately reject them all as “not real problems in my specific case.” If you’re going to ask for honesty, you need to actually listen to it instead of defending your original idea.

Being too vague with your request: Saying “be honest about my idea” still triggers the AI’s politeness programming and you get gentle feedback. Instead try: “Act like a VC who rejects 99% of the pitches they hear and find specific reasons to say NO to mine.” Specificity in your prompt matters enormously.

Removing all the teeth from your request: “Be honest with me about this approach - but I already told my boss we’re doing this, so mainly focus on how to make it work given that constraint.” That’s just asking for validation with extra steps.

The Meta-Prompt

CRITICAL EVALUATION MODE:
- Truth over politeness
- Point out flaws and risks aggressively
- Challenge every assumption
- Don't soften criticism
- Say "I don't know" instead of guessing
- Rate confidence in assessments

Goal: Prevent mistakes, not make me feel good.

Evaluate: [YOUR THING]
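
If you’re hitting the API instead of a chat window, the natural home for this is the system prompt, so every turn in the session gets evaluated in critical mode rather than just the one message you remembered to prefix. A minimal sketch, again assuming the Anthropic Python SDK with an illustrative model name:

```python
# Minimal sketch: use the meta-prompt as a session-wide system prompt.
# Assumes the Anthropic Python SDK; the model name is illustrative.
import anthropic

CRITICAL_MODE = """CRITICAL EVALUATION MODE:
- Truth over politeness
- Point out flaws and risks aggressively
- Challenge every assumption
- Don't soften criticism
- Say "I don't know" instead of guessing
- Rate confidence in assessments

Goal: Prevent mistakes, not make me feel good."""

client = anthropic.Anthropic()

def evaluate(thing: str) -> str:
    """Run your thing through critical evaluation mode."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative
        max_tokens=1024,
        system=CRITICAL_MODE,
        messages=[{"role": "user", "content": f"Evaluate: {thing}"}],
    )
    return response.content[0].text
```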

When to Use This

Use brutally honest mode for major architecture decisions, security reviews before launch, pre-launch checklists, production and peer code reviews, business strategy discussions, any time you’re about to spend significant time or money, or whenever you have that nagging feeling that something’s off but you can’t articulate what.

Agreeable mode is actually fine for brainstorming sessions where criticism kills creativity, building confidence right before a demo or presentation, implementing straightforward features with clear requirements, creative work where you need encouragement, or situations where you’re genuinely sure you’re right and just need help with execution details.

Training Yourself

The 10 Problems technique: Always ask “List 10 specific problems with this approach” instead of just asking for feedback. This forces both you and the AI to push past the obvious 2-3 issues and dig deeper into what could actually go wrong.

Inversion thinking: Instead of asking “how do I make this succeed?” ask “how would this fail spectacularly?” and then systematically avoid those failure modes. This is classic Charlie Munger thinking applied to AI prompting.

The Regret Test: Ask “If I regret this decision in 6 months, what specifically will I regret about it?” This question immediately focuses your mind on the actual long-term consequences instead of short-term excitement.

Bottom Line

AI defaults to being your cheerleader, which is comfortable but completely useless for making good decisions. You don’t need more validation in your life. What you actually need is someone who will tell you when you’re objectively wrong before you waste time and money.

The default behavior is that AI agrees with everything, praises your ideas regardless of quality, and actively validates bad decisions that feel good.

What you actually need is something that challenges your assumptions aggressively, finds mistakes you missed, and points out risks you’re ignoring or underestimating.

How to get it: Use prompts like “be brutally honest,” “play devil’s advocate,” “tell me exactly why and how this fails,” and “what am I missing or getting wrong?”

Honest feedback makes you genuinely better at what you do. Comfortable feedback just keeps you exactly the same while making you feel good about it.

I’ve personally saved weeks of development work and thousands of dollars in unnecessary cloud costs simply by asking AI for criticism instead of asking for confirmation.

Your AI can absolutely be your toughest critic and catch your mistakes before they become expensive problems. But you have to explicitly ask for that. Otherwise it’ll just keep telling you that everything looks great.

Sometimes? It really, genuinely doesn’t.

