When GPT-5 arrived, the headlines focused on performance benchmarks: bigger context window, improved factual accuracy, and more controllable reasoning. Impressive, yes. But the real shift is less obvious.

If you are still prompting it like GPT-4, you are leaving most of its capability on the table.

GPT-5 is not just faster or “smarter”. It is designed to reason in stages, research natively, and deliver in multiple formats without you bolting on extra tools. That changes how you should talk to it. Prompting is now closer to managing a capable assistant than firing off search queries.

What’s new in GPT-5

The major change is orchestration. GPT-5 introduces dynamic model routing so different parts of your request can be handled by specialised sub-models. You can influence that routing with parameters such as reasoning_effort and verbosity. GPT-5 also consolidates the experience into a single entry point rather than multiple model labels.
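
The routing-influencing parameters can be sketched as a request payload. This is a minimal illustration only: `build_request` is a hypothetical helper, and the field shapes (`reasoning.effort`, `text.verbosity`) are assumptions based on the general form of OpenAI's Responses API, so check your SDK version before relying on them.

```python
# Hypothetical helper that builds a Responses-API-style payload.
# Field names are assumptions for illustration; no network call is made.
def build_request(prompt: str, effort: str = "medium", verbosity: str = "medium") -> dict:
    assert effort in {"minimal", "low", "medium", "high"}
    assert verbosity in {"low", "medium", "high"}
    return {
        "model": "gpt-5",
        "input": prompt,
        "reasoning": {"effort": effort},   # how much internal planning to do
        "text": {"verbosity": verbosity},  # how long/detailed the answer is
    }

payload = build_request("Summarise this report.", effort="high", verbosity="low")
```

The same prompt can then be dialled up or down (quick triage vs. strategic analysis) just by changing those two knobs.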

Expectations vs reality

When people heard GPT-5 was coming, many assumed it would be the next step towards a unified “does it all” model, with one brain handling text, images, audio, and everything else in a single sweep. Instead, GPT-5 went for orchestration. It routes parts of your request to specialised sub-models, then stitches the results together. That is often faster and more accurate, but it is also a pivot away from the “one giant model” trajectory. You could call it a step sideways from AGI, a recognition that no single model is the best at every task.

What makes GPT-5 different?

Deeper, controllable reasoning

GPT-5 can plan before it speaks. Using the reasoning_effort parameter, you can control how much internal work it does before answering, from minimal for quick tasks to high for strategic analysis or complex problem solving. This is silent planning; you will not see every step unless you ask, but it is there.

Context awareness at scale

In the API, GPT-5 supports up to 400K tokens of context. That is enough for dozens of documents, full reports, or lengthy transcripts. It can hold and cross-reference all of it without constant re-uploads.
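
Before sending dozens of documents in one request, it helps to sanity-check that they fit. This sketch uses the rough "about four characters per token" heuristic for English prose; it is an estimate, not a tokenizer, and the 400K figure is the API context size mentioned above.

```python
# Rough feasibility check for a long-context request.
# ~4 chars/token is a common heuristic for English text, not an exact count.
CONTEXT_LIMIT = 400_000   # GPT-5 API context size in tokens
CHARS_PER_TOKEN = 4       # heuristic, varies by language and content

def fits_in_context(documents: list[str], reserve_for_output: int = 8_000) -> bool:
    """Estimate whether the documents plus output budget fit in one request."""
    estimated_tokens = sum(len(d) for d in documents) // CHARS_PER_TOKEN
    return estimated_tokens + reserve_for_output <= CONTEXT_LIMIT

docs = ["word " * 50_000, "word " * 30_000]  # ~400K characters of material
print(fits_in_context(docs))
```

For anything near the limit, run the real tokenizer instead of the heuristic.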

Built-in research tools

Web search is native. No plugins needed. Ask for recent, credible sources, and GPT-5 can fetch them, compare their insights, and highlight contradictions.

Self-reflection support

You can prompt GPT-5 to critique itself before delivering an answer: flag weak assumptions, tighten logic, and reduce the risk of hallucination.

Multi-format workflows

In one request, GPT-5 can output an executive summary, a decision matrix, and chart code you can render. This is perfect for reports and presentations.

Dynamic model routing

Internally, GPT-5 can decide which specialised sub-model should handle your task. You can guide that behaviour with reasoning_effort and verbosity settings to trade off speed, depth, and detail.

Quick before-and-after example

GPT-4 style

“Summarise this report.”

GPT-5 optimised

“Plan your approach first. Then give me an executive summary, a 2×2 decision matrix, and a chart-ready table. Use high reasoning effort. Run a brief self-reflection and flag weak assumptions.”

The difference in depth and quality is immediate.

5 prompting techniques to maximise GPT-5

  1. Trigger reasoning mode. If you want GPT-5 to plan before answering, tell it explicitly: "Think in stages before answering. Draft a plan, review it internally, then respond."
  2. Leverage long context. Feed it the full set of materials; this is ideal for research, legal analysis, or multi-document synthesis. In the API, use the previous_response_id field to persist reasoning across turns without re-sending everything. "Here's 40 pages of material. Analyse all of it and integrate it as context."
  3. Activate built-in research. Cut out the middleman and let it handle source gathering: "Search for five recent, credible sources. Compare insights. Flag contradictions."
  4. Make it self-reflect. Ask it to challenge itself: "Run an internal self-reflection pass. Challenge your own logic. Flag weak assumptions."
  5. Demand multi-format outputs. Be specific about the formats you want: "Give me an exec summary, a decision matrix, and a chart-ready table."
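
The previous_response_id pattern from technique 2 can be sketched as a pair of payload builders. Again, this builds request dicts only; field names follow the general shape of the Responses API, and the response id shown is a placeholder, not a real identifier.

```python
# Sketch of multi-turn payloads that carry reasoning state forward via
# previous_response_id instead of re-sending all 40 pages each turn.
# Payloads only; no API call is made, and the id below is a placeholder.
def first_turn(corpus: str) -> dict:
    return {
        "model": "gpt-5",
        "input": f"Analyse all of the following material:\n{corpus}",
        "reasoning": {"effort": "high"},
    }

def follow_up(question: str, previous_response_id: str) -> dict:
    return {
        "model": "gpt-5",
        "input": question,
        "previous_response_id": previous_response_id,  # links to the prior turn
        "reasoning": {"effort": "high"},
    }

req1 = first_turn("...40 pages of material...")
req2 = follow_up("Which findings contradict each other?", "resp_abc123")
```

Each follow-up stays small because the heavy context lives server-side with the earlier response.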

Advanced workflow management

Agentic eagerness

Treat “eagerness” as how far GPT-5 will go beyond your direct instructions:

“Set eagerness low for this quick analysis.”
“Use high eagerness, explore all relevant angles.”

  • Low eagerness: fast answers, minimal tool calls, no unnecessary exploration.
  • High eagerness: deep research, multiple angles explored, persistence over longer workflows.
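
One way to make those two modes repeatable is to map each eagerness level to a fixed block of instructions. Note that "eagerness" here is prompt-level guidance, not an API parameter; the wording below is illustrative.

```python
# Illustrative mapping from an "eagerness" level to explicit instructions.
# "Eagerness" is a prompting convention in this article, not an API field.
EAGERNESS = {
    "low": ("Answer directly. Make at most one tool call. "
            "Do not explore beyond what was asked."),
    "high": ("Explore all relevant angles. Use research tools freely. "
             "Persist until the task is fully resolved."),
}

def with_eagerness(task: str, level: str) -> str:
    """Prefix a task with the instruction block for the chosen level."""
    return f"{EAGERNESS[level]}\n\nTask: {task}"

prompt = with_eagerness("Compare these two vendor proposals.", "low")
```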

Tool preambles and planning

For agent-style workflows, ask it to explain its plan before doing anything:

“Before taking action, explain your step-by-step plan.”

This improves transparency and catches misunderstandings early.

Common pitfalls to avoid

Contradictory instructions

GPT-5 will waste reasoning cycles trying to reconcile them.

Being vague

Be explicit and scoped about what you want.

Unstructured prompts

For complex requests, use a clearly structured format (bullet points, numbered steps, labelled sections). Advanced users might also consider XML-style tags to delimit sections.
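
An XML-style prompt can be assembled programmatically so every request keeps the same labelled sections. The tag names below are arbitrary, and the example values are invented; the point is simply that each part of the request is unambiguous.

```python
# One way to give a complex request XML-style structure.
# Tag names are arbitrary conventions, not anything the API requires.
def structured_prompt(role: str, context: str, tasks: list[str], fmt: str) -> str:
    """Assemble a prompt with clearly labelled, XML-delimited sections."""
    task_lines = "\n".join(f"  {i + 1}. {t}" for i, t in enumerate(tasks))
    return (
        f"<role>{role}</role>\n"
        f"<context>{context}</context>\n"
        f"<tasks>\n{task_lines}\n</tasks>\n"
        f"<output_format>{fmt}</output_format>"
    )

p = structured_prompt(
    "Senior market analyst",
    "Q3 sales report attached.",
    ["Summarise key trends", "Flag weak assumptions"],
    "Executive summary plus a 2x2 decision matrix",
)
```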

Good vs bad: average user and enterprise

For average users:

  • Good: one unified model surface reduces confusion and “which model do I pick” paralysis
  • Bad: retiring older model labels can disrupt habits and any lightweight workflows that relied on specific versions

For enterprises:

  • Good: predictable token pricing at GPT-5 tiers is easier to plan for, and the router can improve throughput on mixed workloads
  • Bad: removal of legacy models can break dependencies and require code or policy updates
  • Neutral (but watch closely): dynamic routing is powerful, but it needs monitoring to ensure it meets compliance and auditability requirements

The mindset shift

With GPT-4, prompting was often about “just enough clarity” to get a coherent answer. With GPT-5, prompting becomes strategic. You are not just getting a response, you are managing an active, reasoning agent.

You decide how deep it should go, how sceptical it should be of itself, how wide its research net should spread, and how it should package the result.

Prompting is no longer a throwaway skill. It is part of the workflow design.

Conclusion: experiment, iterate, optimise

GPT-5 is smarter, more steerable, more introspective, and far more versatile.

If you prompt it like GPT-4, you will still get decent results, but you will be leaving the real performance on the table. The gains now come from knowing when to trigger reasoning mode, when to use long-context capabilities, when to make it self-reflect, and how to manage its autonomy.

It is also fair to say that some changes are great for everyday users, while others add friction for teams with existing dependencies. That is the reality of a platform shift: simpler on the surface, trickier under the hood.

Treat GPT-5 like a capable collaborator, one that can think, plan, research, and deliver in multiple formats. Your job is to set the scope, calibrate the depth, and make its strengths work for you.

Experiment, iterate, optimise. The leap with GPT-5 is not in what it knows, but in how you can make it think.