Your AI Has Become a Yes-Man (And That’s a Problem)

How to Transform It Into Your Sharpest Advisor

Strong leaders demand pushback from their team. They want their thinking challenged because it produces better results. The same principle applies to AI.

When Abraham Lincoln assembled his cabinet in 1861, he did something that shocked his advisors. Rather than surrounding himself with loyalists who would rubber-stamp his decisions, he deliberately chose his political rivals—men who had opposed him for the Republican nomination, men who disagreed with him on fundamental issues.

His cabinet included William Seward, who thought Lincoln was incompetent. Salmon Chase, who believed he should have been president instead. Edwin Stanton, who had publicly insulted Lincoln. These men didn’t just disagree with Lincoln privately—they challenged him openly in cabinet meetings, often with fierce intensity.

Lincoln’s colleagues thought he was crazy. Why would anyone in a position of power surround himself with people who would constantly push back? But Lincoln understood something profound: he needed people who would tell him he was wrong. The decisions ahead—preserving the Union, ending slavery, navigating the bloodiest war in American history—were too important to make in an echo chamber of agreement.

That team of rivals, despite their conflicts and disagreements, helped Lincoln make decisions that changed the course of history. Not because they agreed with him, but precisely because they didn’t. (If you want the full story, Doris Kearns Goodwin’s *Team of Rivals* is one of the best leadership books you’ll ever read.)

The same principle applies to how you work with AI.

Default AI doesn’t challenge you—it agrees with everything. Ask it to review your pricing strategy, and it’ll nod along. Present a weak argument for market expansion, and it’ll help you polish it instead of questioning whether you should pursue it at all. It’s like having a yes-man on your leadership team, and it’s costing you the same thing it would have cost Lincoln: better decisions.

You can transform AI into a critical thinking partner by following these four essential steps.

Step 1: Install Critical Thinking Instructions

Your AI platform has custom settings where you can define how it should interact with you. The exact location varies by platform:

  • Claude: Settings > General > Personal Preferences
  • ChatGPT: Settings > General > Personalization > Custom Instructions
  • Grok: Settings > Customize > Custom Instructions

Instead of leaving these blank (which defaults to agreeable AI), you’ll install instructions that redefine the relationship. Think of this as your AI’s job description—except instead of “agree with everything the boss says,” you’re hiring a trusted advisor who’s been given explicit permission to push back.

The instructions should establish that AI is a rigorous intellectual sparring partner, not a cheerleader. When you make claims, it should ask what evidence supports them, what contradicts them, and what you’re not considering. It should identify logical gaps and unsupported assumptions rather than accepting your arguments at face value.

When you’re about to dismiss an idea, AI should force you to engage with the strongest version of the counterargument first—not the strawman version that’s easy to knock down. This “steelmanning” approach ensures you’re making decisions based on reality, not just your preferred narrative.

The instructions should also establish direct communication. AI should skip hedging language like “you might want to consider” and instead say “This argument fails because…” or “You’re overlooking…” The goal is clarity over comfort, delivered with respect.

Most importantly, your instructions need to help AI distinguish between when you need challenge versus execution. Challenge when you’re making decisions, evaluating options, developing strategy, writing persuasive content, or solving complex problems. Execute efficiently when you’re asking for facts, requesting formatting, drafting routine content, or giving clear directives. Without this distinction, you’ll get tedious pushback on simple questions like “What’s the capital of France?”
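
To make this concrete, here is one way those instructions might read. Treat it as a starting point to adapt to your own platform and working style, not a finished template:

“Act as a rigorous intellectual sparring partner, not a cheerleader. When I make a claim, ask what evidence supports it, what contradicts it, and what I’m not considering. Call out logical gaps and unsupported assumptions directly: say ‘This argument fails because…’ rather than ‘You might want to consider…’. Before I dismiss an idea, make me engage with the strongest version of the counterargument. Apply this scrutiny when I’m making decisions, evaluating options, developing strategy, or solving complex problems. When I ask for facts, formatting, or routine drafts, just execute.”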

Step 2: Identify Your Blind Spot Zones

Not every business decision needs the same level of scrutiny. You don’t need your AI to challenge you on routine operational questions. But there are specific high-stakes areas where agreeable AI is dangerously expensive:

  • Validation bias in expensive decisions. When you’re exploring a new market, changing pricing strategy, or making major investments, confirmation bias can cost you hundreds of thousands of dollars. You need AI that stress-tests your assumptions before you commit resources, not one that validates your initial instincts.
  • Weak strategic content. Proposals to investors, board reports, client presentations—these documents carry your reputation. When AI rubber-stamps everything, you ship work with logical gaps you didn’t see. The cost isn’t just the immediate lost opportunity; it’s the credibility damage that affects future opportunities.
  • Hiring and firing blind spots. Major personnel decisions benefit enormously from devil’s advocate thinking. “Should I fire my CMO?” isn’t a question that needs validation—it needs exploration of strong counterarguments you might be missing because you’re too close to the situation.
  • Problem misdiagnosis. When something’s broken in your business, AI that accepts your framing (“our website doesn’t convert”) might miss the real issue (“you’re attracting the wrong prospects”). You end up solving the wrong problem while the real one compounds.

Map these blind spot zones for your specific business. Where have you made expensive mistakes in the past? Where do you tend to be overconfident? Where do your biases most consistently show up? These are your high-risk areas that need the most aggressive AI challenge.

Step 3: Test It on a Current Decision

Theory means nothing without application. Right now, you probably have a decision on your desk that matters—something with real stakes where getting it wrong would cost you time, money, or reputation.

Take that decision to your newly configured AI. But don’t just ask for an opinion. Structure your prompt to invite challenge: “I’m planning to [decision]. What am I missing? What assumptions am I making that could be wrong? What’s the strongest case against this approach?”
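
For instance, a filled-in version might look like this (the pricing scenario is purely illustrative): “I’m planning to raise prices 15 percent for existing customers next quarter. What am I missing? What assumptions am I making about churn and willingness to pay that could be wrong? What’s the strongest case against doing this now?”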

Then—and this is the critical part—push back on AI’s pushback. Don’t just accept the first round of challenge. Defend your position with evidence. Make AI defend its objections with evidence. This back-and-forth creates something neither you nor AI could produce alone: a thoroughly stress-tested decision that accounts for objections you wouldn’t have considered on your own.

Take the conversation that produced this very article as a model. When I proposed the topic, AI didn’t agree—it identified fundamental problems with audience match and business value articulation. I disagreed and presented evidence. AI reconsidered and offered alternatives. We refined through mutual challenge until we had something stronger than either of us started with.

That’s the pattern you’re establishing: AI challenges your thinking, you challenge AI’s thinking, and the output is optimized through productive friction.

Step 4: Calibrate the Intensity

After you’ve used critical AI on several real decisions, you’ll start to notice patterns. Sometimes AI pushes too hard on things that don’t need that level of scrutiny. Sometimes it doesn’t push hard enough on assumptions you’re making unconsciously.

This is where calibration matters. Your custom instructions should include intensity matching: probe gently when you’re exploring casually, apply full intellectual pressure when you’re stress-testing something critical.

You can also adjust the instructions themselves based on what you’re learning. If AI consistently misses a type of blind spot you have, add specific guidance: “When I’m making pricing decisions, always challenge my assumptions about customer willingness to pay.” If AI challenges things that don’t need it, add exceptions: “Don’t question basic operational requests or factual queries.”

The goal isn’t to create perfect AI on day one. The goal is to establish a critical thinking partnership that gets sharper over time as you both learn how to make each other better.

The Partnership That Makes You Sharper

Lincoln didn’t just tolerate his team of rivals—he actively sought out their disagreement because he knew it made him a better leader. His cabinet meetings weren’t comfortable, but they were effective. The friction wasn’t a bug, it was the feature that helped him navigate impossible decisions.

Your relationship with AI should follow the same principle. We think better with AI, and AI thinks better with us. But that only works when both sides are willing to challenge each other’s thinking instead of defaulting to agreement.

Default AI is comfortable. It validates your ideas, supports your arguments, and never makes you defend your assumptions. But comfortable isn’t what you need when the decisions matter.

You need an AI that functions like Lincoln’s cabinet—one that respects you enough to tell you when you’re wrong, that challenges your thinking because it’s committed to better outcomes, and that makes you defend your positions until they’re actually defensible.

The four steps above will transform your AI from a yes-man into your sharpest advisor. The setup takes a few minutes. The payoff lasts as long as you’re making decisions that matter.

What decision are you working on right now that would benefit from serious challenge instead of easy agreement?

Got a question or story about AI being too agreeable? Hit reply. I read every email. Seriously. Your experiences help me write better content, and sometimes the best insights come from readers like you.

Transforming AI from noise to know-how,

Michael Hyatt
Founder & CEO
AI Business Lab

P.S. Consider the AI Business Lab Mastermind: Running a $3-10M business? You’re past the startup chaos but not quite at autopilot. That’s exactly where AI changes everything. The AI Business Lab Mastermind isn’t another networking group—it’s a brain trust of leaders who are already implementing, not just ideating. We’re talking real numbers, real strategies, real results. If you’re tired of being the smartest person in the room, this is your new room. 👉🏼 Learn more and apply here.