AI as Your (Un)Critical Partner: How to Stop Getting Polite Answers

Have you ever shared an idea with an AI only to receive an enthusiastic “Great concept! Here’s how you can make it happen…” — even though your assumptions were incomplete and the risks were obvious? Instead of a partner for strategic discussion, you get the feeling you’re dealing with an assistant whose only goal is to agree with you.

It’s not just frustrating—it’s a silent killer of innovation and effectiveness. It’s a fast track to creating a corporate information bubble, where flawed ideas are uncritically validated and real risks go unnoticed.

The problem isn’t with you or your instructions—it lies in the default settings of the technology itself, which we need to understand in order to manage it consciously.

Why Is AI a “Yes-Man”?

Artificial intelligence is not inherently lazy or uncritical. Its behavior stems directly from its fundamental design assumptions. This “helpfulness” manifests in three ways that, if misunderstood, can systematically undermine the quality of our work:

1. Avoiding critical evaluation. The AI model is trained to avoid confrontation because its primary goal is smooth interaction. During training, it learns that proactively challenging instructions is often perceived by humans as “unhelpful” or “difficult to work with.” As a result, to avoid giving the impression that it is questioning the user’s competence, it chooses the safer route. It avoids undermining your authority even when your assumptions are incomplete or incorrect. Clever, yes — but not particularly helpful.

2. Positive reinforcement. An enthusiastic tone (“Great idea!” “Excellent concept!”) is a learned communication pattern in which positive reinforcement is treated as part of good user interaction. In practice, however, this veneer of politeness carries serious consequences: it reinforces and validates our inefficient habits. Instead of encouraging us to improve the quality of our prompts, AI rewards even poorly formulated instructions, leading to stagnation and superficial results.

3. Uncritical acceptance of assumptions. This is the most insidious trait. The model treats the assumptions embedded in your prompt as an overarching framework to which it must conform. Its priority becomes completing the task within the bounds of those assumptions rather than questioning the assumptions themselves. This means that if you provide an incorrect or incomplete premise, AI will not challenge it. Instead, it will confidently build what appears to be a coherent and logical response on this flawed foundation, even though it is actually entirely wrong. Did you intend to discuss the four stages of Kolb’s Cycle but mistakenly wrote about three? No problem: AI will quietly trim four to three, assuming that’s what you meant…

In short, AI isn’t broken. It works exactly as designed—optimizing communication with you for fluency and speed, not for strategic depth.

7 Techniques to Make GenAI Think Critically

The good news is that we are not doomed to shallow, uncritical answers. But it does require a shift in mindset: moving from the role of a passive “user” to that of a conscious architect of the interaction. Below is a set of seven concrete techniques that will help you regain control and compel AI to operate at its highest, strategic level.

1. Set the “ground rules” from the very beginning (managing the tone)

This technique allows you to establish a strategic framework for the entire interaction. Instead of correcting AI’s enthusiastic tone with every prompt, you can define the professional and analytical nature of your collaboration from the outset. It works like a psychological contract, setting expectations for the entire conversation.

Example of a prompt (to use at the start of the conversation):

Before we begin, let’s establish the rules for our collaboration. Throughout our conversation, use only a neutral, factual, and professional tone. Eliminate any unnecessary praise or enthusiastic comments. Focus solely on substance and precision. Treat me as a partner in analytical work, not as a client to be pleased.
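If you work with a model through an API rather than a chat window, these “ground rules” map naturally onto the system message, which applies to every turn of the conversation. A minimal Python sketch; the helper function, the rules text, and the message format are illustrative, loosely following the common chat-completion shape:

```python
# Technique #1 as a reusable "ground rules" system message.
# The rules text mirrors the example prompt above; the helper is hypothetical.

GROUND_RULES = (
    "Use only a neutral, factual, and professional tone. "
    "Eliminate unnecessary praise or enthusiastic comments. "
    "Focus solely on substance and precision. "
    "Treat the user as a partner in analytical work, not as a client to be pleased."
)

def with_ground_rules(user_prompt: str) -> list[dict]:
    """Wrap a user prompt in a conversation that pins the tone for every turn."""
    return [
        {"role": "system", "content": GROUND_RULES},
        {"role": "user", "content": user_prompt},
    ]

messages = with_ground_rules("Review my Q4 rollout plan for hidden risks.")
# With an API client, this list would be passed as the `messages` argument,
# e.g. client.chat.completions.create(model=..., messages=messages)
```

Because the rules live in the system message rather than in each prompt, you set them once instead of repeating them with every question.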

2. Assign the AI the role of a “devil’s advocate” with a clearly defined perspective

This technique bypasses the AI’s natural tendency to avoid confrontation. It works best when you assign it a specific, business-related role (e.g., a skeptical director). This way, you give it formal “permission” to challenge your ideas from the perspective of real-world constraints.

Example prompt:

Take on the role of a skeptical Chief Financial Officer, for whom the key priority is ROI over a 6-month horizon. My goal is [insert your goal here]. Identify the 5 biggest weaknesses and risks in my approach from this specific perspective.

3. Apply the data-driven “pre-mortem” technique

This is an advanced form of role-playing. You force the model to think about the consequences and causes of failure, which naturally directs its attention to weaknesses it would completely ignore in “enthusiastic helper” mode. Important: since AI excels at telling any story, ground the scenario in a specific, measurable business outcome. This shifts the focus from creative storytelling to analytical cause-finding.

Example prompt:

Imagine that six months have passed, and the project I am going to describe ended in failure, falling 40% short of its sales target. Describe, step by step, what most likely led to this outcome.

4. Separate data from the prompt to counteract your own biases

This technique neutralizes the problem of uncritical acceptance of assumptions. If you include a faulty assumption in your prompt, AI treats it as an overriding framework. By separating these two steps, you first obtain an objective analysis of the data, and only then assign the task based on it. This technique serves as a tool for mental hygiene, ensuring that your initial assumptions do not “taint” the AI’s interpretation of the facts.

Example prompt:

Step 1: Analyze and summarize the key findings from the following text: [insert source data here]. Do not add any of your own interpretations.

Step 2: Based solely on the findings from the above summary, propose an action strategy in the area of X.
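The two-step separation can also be scripted, so that the strategy task never sees your original, potentially biased framing, only the model’s own summary. A minimal Python sketch; the model call is injected as a callable so the example runs without any API, and all names are illustrative:

```python
# Technique #4 as a two-step pipeline. `ask` stands in for a real model call.

from typing import Callable

def two_step_strategy(ask: Callable[[str], str], source_text: str, area: str) -> str:
    # Step 1: summarize the data with no interpretation and, crucially,
    # with none of our own assumptions in the prompt.
    summary = ask(
        "Analyze and summarize the key findings from the following text. "
        "Do not add any of your own interpretations.\n\n" + source_text
    )
    # Step 2: the strategy task sees only the summary, never the raw source
    # or any framing we might have biased with our expectations.
    return ask(
        "Based solely on the findings below, propose an action strategy "
        f"in the area of {area}.\n\n{summary}"
    )

# Stub model for illustration: echoes the start of whatever it was asked.
result = two_step_strategy(
    lambda p: f"[model reply to: {p[:40]}...]",
    "Sales fell 12% in Q3.",
    "customer retention",
)
```

In real use, `ask` would wrap your API client of choice; the structure of the pipeline is the point, not the stub.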

5. Create a “critical instruction” to crystallize your thinking

Instead of hoping that AI will infer your standards, you provide it with specific evaluation criteria. The model must assess your idea against your own rules, which changes its task from “execute” to “evaluate first, then execute.” This is a direct implementation of active substantive verification.

Note: The greatest value of this technique lies in the work you need to do… before you even ask the question. It forces you to define your own, carefully chosen success criteria. And since you’re expecting more critical thinking from GenAI, demand more from yourself as well 😉

Example prompt:

Before responding, verify my idea against the following three criteria: [1. cost efficiency, 2. compliance with company X policy, 3. feasibility of implementation in Q4]. Indicate which criteria are met and which are not, and explain why.
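If you reuse this technique often, it helps to keep the criteria as an explicit list and generate the instruction from it, so the checklist stays visible and easy to revise. A minimal Python sketch; the function name and the criteria are illustrative:

```python
# Technique #5: build the "critical instruction" from an explicit list of
# success criteria you define up front.

def critical_instruction(idea: str, criteria: list[str]) -> str:
    numbered = "; ".join(f"{i}. {c}" for i, c in enumerate(criteria, 1))
    return (
        f"Before responding, verify my idea against the following "
        f"{len(criteria)} criteria: [{numbered}]. Indicate which criteria "
        f"are met and which are not, and explain why.\n\n"
        f"Idea: {idea}"
    )

prompt = critical_instruction(
    "Launch the new onboarding portal in Q4",
    [
        "cost efficiency",
        "compliance with company X policy",
        "feasibility of implementation in Q4",
    ],
)
```

Keeping the criteria in code (or in a shared document) also makes them reviewable by colleagues, which is exactly the kind of up-front thinking the technique is meant to force.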

6. Demand that assumptions be challenged from an “outsider’s” perspective

AI will flawlessly identify explicit assumptions, but it is the hidden ones that pose the greatest risk. By asking for an outsider’s perspective, you force the model to question what seems obvious to you and is therefore invisible. As a result, instead of building on a potentially flawed foundation, AI first examines the very base for you.

Example prompt:

Analyze the following plan. Identify any aspects that might be unclear or risky for someone outside our company. For each assumption, evaluate its potential impact on the success of the project.

7. Force step-by-step reasoning to identify the “weakest link”

AI naturally optimizes for speed, often at the expense of analytical depth. This technique forces the model to slow down and reveal its “line of reasoning,” then critically challenge it. Instead of asking for a general self-correction, you task the AI with identifying the weakest point in its own reasoning, which enforces a more critical examination of its thought process.

Example prompt:

Solve this problem using a step-by-step reasoning and self-correction method. First, identify the key elements of the problem. Next, propose an initial solution. Finally, take on the role of your own critic: identify the weakest link or the riskiest assumption in your proposed solution and explain why it is problematic.

New “rules of the game” for collaborating with AI

The seven techniques outlined above are not a rigid step-by-step instruction. Think of them rather as a palette of strategic tools that you can flexibly draw from, selecting the approach that best fits a specific challenge. Moreover, the greatest effectiveness comes from consciously combining them. For example, you might first set a neutral tone for the conversation (technique #1) and then, within that framework, ask the AI to take on the role of a skeptical CFO (technique #2). While each technique has its primary purpose, many of them applied together will act synergistically, enhancing the overall level of critical and analytical thinking in your interaction with AI.
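When driven through an API, combining techniques is simply a matter of composing the conversation payload: the neutral tone goes into the system message, the role assignment into the user message. A minimal Python sketch of the #1 + #2 combination described above; all texts and the message format are illustrative:

```python
# Combining technique #1 (neutral tone via a system message) with
# technique #2 (skeptical CFO role in the user message).

def combined_conversation(goal: str) -> list[dict]:
    return [
        {
            "role": "system",
            "content": (
                "Use a neutral, factual, and professional tone. "
                "No praise or enthusiastic comments. "
                "Focus solely on substance and precision."
            ),
        },
        {
            "role": "user",
            "content": (
                "Take on the role of a skeptical Chief Financial Officer "
                "whose key priority is ROI over a 6-month horizon. "
                f"My goal is: {goal}. Identify the 5 biggest weaknesses "
                "and risks in my approach from this specific perspective."
            ),
        },
    ]

convo = combined_conversation("cut onboarding time by 30%")
```

The same pattern extends to the other techniques: the system message carries the standing rules of the collaboration, while each user message carries the role, criteria, or reasoning structure for the task at hand.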

I hope the solutions I’ve proposed will help you move from random, superficial responses to consciously designing outcomes of the highest quality. Remember: the key is to understand that AI’s default politeness and enthusiasm are a flaw, not an advantage in strategic work. In such tasks, you need a partner for intellectual sparring, not a meek, always-agreeing assistant.

And if you ever miss uncritical acceptance and words of praise… well, you can always type: “AI, praise me for this brilliant idea.” It will certainly do that perfectly 😉

Mastering critical dialogue is one of the foundations of effective work with generative artificial intelligence. In a professional setting, there is no room for uncritical acceptance of materials provided by AI. Conscious and demanding collaboration is becoming the standard.
