
What LLMs Can’t Do (Yet) [4 Biggest Myths]

AI looks like magic, but it’s not mind-reading, reasoning, or remembering. Here are 4 dangerous myths about what LLMs can’t do (yet).

LLMs look intelligent, but they don’t reason, remember, or plan like humans. Understanding their limitations is the key to using AI effectively and safely. Today’s newsletter breaks down what LLMs can’t do.

New Cohort of AI Engineer HQ starting on 3rd September 2025, 8:30 PM IST. Register Here (starting soon).

In today’s edition:

  • Signal vs Noise: What LLMs Can’t Do (Yet) [4 Biggest Myths]

  • Build Together: Here’s How I Can Help You

The Elite - AI Leadership Accelerator for the C-suite.

From AI ambition to actionable enterprise strategy. Build your second brain for leading AI products and projects.

1:1 Coaching Program

[Signal vs Noise]

What LLMs Can’t Do (Yet) [4 Biggest Myths]

This week, let’s talk about what AI can’t do.

You ask ChatGPT, Claude, or Gemini a question, and a perfect answer appears in seconds.

It can write an email, summarize a book, or even create a simple piece of code.

It feels like we’re talking to a super-intelligent brain.

That feeling is the noise.

The hype around AI has convinced many that these tools can reason, plan, and remember just like we do.

  • Founders are building companies on it.

  • Businesses are spending millions on it.

But they can't. Not really.

Large Language Models are expert guessers, not thinkers.

Understanding this difference is the single most important skill for anyone using AI today.

Let's break down the 4 biggest myths.

[Myth - 1] LLMs Can Perform True Reasoning

You can give an AI a complex problem, and it often gives you the right answer.

It appears to understand the logic.

But it’s just matching patterns.

LLMs have been trained on vast amounts of text from the internet (FineWeb, for example).

When you ask a question, it predicts the most likely sequence of words for an answer.

It doesn’t truly understand the why.
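To make that concrete, here is a minimal sketch of what actually happens under the hood, assuming the Hugging Face transformers library and the small gpt2 checkpoint are installed. The model just assigns a probability to every possible next token; the "answer" is whichever continuation scores highest.

```python
# A minimal sketch of next-token prediction, assuming the Hugging Face
# transformers library and the small "gpt2" checkpoint are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # a score for every token in the vocabulary

# The "answer" is just the highest-probability continuation of the prompt.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r:>10}  p={p.item():.3f}")
```

There is no model of the world anywhere in that loop; pattern frequency in the training data is doing all the work.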

These tools can fail on simple logic if the problem is worded in a new way.

For high-stakes work like legal analysis or medical advice, relying on this kind of reasoning is dangerous.

LLMs exhibit shallow and inconsistent reasoning, even with chain-of-thought prompting and fine-tuning.

[Myth - 2] LLMs Have Reliable Long-Term Memory

You can have long, ongoing conversations with an AI.

It seems to remember what you talked about earlier.

But its memory is short and unreliable.

Think of it like a notepad, not a brain. It can look back at the recent conversation, but it quickly gets lost.

Most AI tools don’t have persistent memory unless it’s explicitly built in.

Even with a massive context window (its short-term memory), AIs often forget key details you mentioned in the middle of a conversation.

This is called the "lost-in-the-middle" problem [Research Paper].
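As a toy illustration of that notepad, here is a hedged sketch in plain Python of a chat buffer that can only carry what fits in a fixed window. Token counts are faked with word counts here; real systems use a tokenizer.

```python
# A toy sketch of the "notepad": a chat buffer that only keeps what fits in
# a fixed context window. Everything older silently falls out of context.
# Token counts are faked with word counts; real systems use a tokenizer.
MAX_CONTEXT_TOKENS = 50  # hypothetical window size

def build_context(history: list[str]) -> list[str]:
    """Keep only the most recent messages that fit in the window."""
    kept, used = [], 0
    for message in reversed(history):
        cost = len(message.split())
        if used + cost > MAX_CONTEXT_TOKENS:
            break  # older messages are simply dropped, not gracefully "forgotten"
        kept.append(message)
        used += cost
    return list(reversed(kept))

history = [f"turn {i}: important detail number {i}" for i in range(30)]
print(build_context(history))  # only the last handful of turns survives
```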

They also can't form long-term memories of you or your projects without special, clunky workarounds.

You can't trust an AI to manage a long-term project or remember your preferences consistently.

It will forget instructions and make compounding errors over time.

[Myth - 3] LLMs Excel at Planning and Multi-Step Tasks

You see demos of AI agents that can book a flight, do research, and organize the results into a presentation, all by themselves.

In the real world, these multi-step tasks fail constantly.

When you ask multiple AIs to work together, they get confused.

They disobey instructions, lose track of the goal, or get stuck in loops.

Even a single agent frequently fails at complex, multi-step tasks.

For now, AI can handle simple, single steps, but complex projects still need a human in charge.
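If you do hand an agent a multi-step task, put guardrails around it. Below is a hedged sketch of that idea in plain Python; call_llm and execute are hypothetical stand-ins, not any real agent framework.

```python
# A hedged sketch of a guarded agent loop: a hard step limit so it can't
# run forever, and a human checkpoint before anything irreversible.
# `call_llm` and `execute` are hypothetical stubs, not a real agent API.
def call_llm(state: str) -> str:
    """Stand-in for a model call that proposes the next action."""
    return "DONE"

def execute(action: str) -> str:
    """Stand-in for running a tool and returning an observation."""
    return f"observation: ran {action}"

def run_agent(goal: str, max_steps: int = 10) -> None:
    state = f"Goal: {goal}"
    for _ in range(max_steps):          # cap steps: agents get stuck in loops
        action = call_llm(state)
        if action == "DONE":
            return
        if action.startswith("IRREVERSIBLE"):
            if input(f"Approve {action!r}? [y/N] ").lower() != "y":
                return                  # the human stays in charge
        state += "\n" + execute(action)  # small errors here compound per step
    print("Step limit hit: handing back to a human.")

run_agent("book a flight and build a slide deck")
```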

[Myth - 4] LLMs Can Verify Truth & Avoid Hallucinations

You can ask an AI for facts, and it will give you a confident, well-written answer.

It looks like a super-powered search engine.

AI does not have a concept of truth.

It only knows what words are likely to follow other words.

This is why it hallucinates: it makes things up with complete confidence.

Lawyers have been fined by judges for submitting legal briefs written by AI that included citations to completely fake cases.

They trusted the AI was a research tool, but it was just inventing plausible-sounding text.
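The defence is mechanical: never accept a model-supplied citation without checking it against a source of truth you control. A hypothetical sketch, where KNOWN_CASES stands in for a lookup against a real legal database:

```python
# A hypothetical sketch of a post-hoc citation check. `KNOWN_CASES` stands in
# for a lookup against a real, trusted legal database.
KNOWN_CASES = {"Marbury v. Madison", "Brown v. Board of Education"}

def unverified_citations(ai_citations: list[str]) -> list[str]:
    """Return every citation that could NOT be confirmed."""
    return [c for c in ai_citations if c not in KNOWN_CASES]

draft = ["Marbury v. Madison", "Smith v. Imaginary Corp."]  # the second is invented
bad = unverified_citations(draft)
if bad:
    print("Do not file! Unverified citations:", bad)
```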

You can never take an AI's answer as fact without checking it yourself.

It is not an authority.

It’s a text generator.

Takeaway

AI is a powerful tool.

But it’s a tool with hard limits.

It's more like a sophisticated parrot than an all-knowing oracle.

The real advantage isn’t just using AI, but knowing how to use it.

Understand its weaknesses.

Use it for brainstorming and first drafts, not for fact-checking or complex project management.

Don't be fooled by the noise of what AI seems to be.

Focus on the signal of what it is:

a powerful pattern-matcher that needs a smart human to guide it.

Until next time.

Signal vs Noise

This is the series in which I take one AI topic each week and deliver my sharp, original analysis.

If you want my analysis on any topic, reply to this email with your topic.

Want to work together? Here’s How I Can Help You

I use BeeHiiv to send this newsletter.

How satisfied are you with today’s newsletter?

This will help me serve you better.


Starting a new cohort of AI Engineer HQ on September 3rd, 2025 [8:30 PM IST].
