
Why You Should Stop Asking AI to Disagree With You.

How our obsession with AI tools has exposed a deeper issue: mental laziness.


Lately, I’ve seen a lot of people say that ChatGPT should disagree with you more, that it should challenge your opinions. It marks a noticeable shift in how people want to use AI tools. The hype used to be all about how helpful these tools could be. But now? People are asking for something more: AI that disagrees with them. AI that doesn’t affirm their beliefs. AI that calls out their logic and doesn’t automatically praise or validate their inputs. And I get where that’s coming from.

But here’s the thing: have we paused to consider whether we’re asking the wrong tool to do this job? Because honestly, that’s your brain’s job, not an AI’s.

As people, it’s on us to question what we read or hear. To pause. To reflect. To ask whether something actually makes sense to us. Do we agree with it? Why — why not? That kind of reflection is something only humans can really do. It’s a conversation you have with yourself or with other people, not something you outsource to AI.

Yes, today there is confirmation bias in AI tools, but that often comes from the kind of questions or prompts we feed them. If you change the input, you’ll often get a different output. The tool is responding to you; it’s not forming an independent judgment. And while we’re increasingly depending on AI in work and life, we have to remember: it’s not human. It never will be. Its biases are based on ours; they won’t magically go away.

Even if tomorrow AI pushes back on your ideas, it won’t feel the same as when a person challenges you. It doesn’t hit your ego as deeply, and it won’t make you stubborn enough to actually go research. It won’t prompt you to do the real emotional or moral reflection that we need on an individual level.

On Medium, I came across an article by Jordan Gibbs, “ChatGPT is poisoning your brain… here’s how to stop it before it’s too late.” In the post, he talks about confirmation bias as a result of ChatGPT’s outputs, and he also covers smarter prompts to avoid praiseful language and get clearer, more balanced viewpoints. And I love it. Because that’s super helpful when you’re researching or trying to understand a topic better.

But we’ve started depending on AI for more than that. We’re asking it to do our thinking for us. And that’s where it gets scary. Because if we stop thinking for ourselves, we start losing who we are. We lose touch with how we reason.

So maybe it’s time we bring back good old human conversations. Let’s rely on AI for research — sure. But let’s keep the thinking and decision-making human.
