Think outside the Bot - did Copilot just become smarter?
- Sebastian Sieber
- Mar 18
- 3 min read
In the recent version of Microsoft Copilot Studio, makers will find a new option in the settings of their agents: Use deep reasoning models.
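You trigger it from within your instructions by telling the agent to reason over a topic. As a rough illustration, such an instruction could look like this (the wording is a hypothetical example, not an official syntax):

```
Reason over the customer's complaint history and the current ticket,
then propose the most appropriate resolution.
```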

Once enabled and triggered this way, the Activity map tracking looks like the following:

And the time until we receive an answer has increased noticeably.
But why is this the case? And what does it mean when an LLM starts to "reason" about my prompt?
First, let's turn back time a few weeks. DeepSeek R1 - a new star on the horizon of LLMs and GPTs - was born. It seemed to be much smarter and to provide more intelligent answers than the other models. How was this possible?
The answer is simple and complex at the same time - reasoning.
Shortly after the release of the R1 model, OpenAI felt the need to enable a reasoning option for all their pricing tiers as well.

And now also Microsoft Copilot.
What is Reasoning?
Let's keep it simple here - until now, the ChatGPTs, Copilots, and all the other LLMs used pattern recognition and basic automation, such as facial recognition or text summarization. This provides more or less useful results.
Now, with reasoning enabled, the LLM is able to "think" about what the user wants.
It’s about leveraging logical processes to understand the meaning, draw conclusions, and apply rules based on the context of the prompt.
Instead of just identifying what is, or isn't, in a photo, for example, a reasoning-capable AI can also figure out the "why" behind a scenario.

ChatGPT also makes that "thinking" visible to its end users. In my example, I wanted to know how I can better organize my emails.
I still received an answer that might help me, of course. But on top of it, I also received insights into something that looks like an internal monologue of the LLM - the reasoning.
Other typical approaches to AI reasoning involve the following (minimal sketches of each follow after the list):
Rule-Based Systems: Using predetermined rules to draw conclusions. If X is true, then Y must be true.
Probabilistic Reasoning: Applying likelihoods to different outcomes, a bit like how doctors weigh symptoms to reach a diagnosis.
Knowledge Graphs: Storing vast amounts of connected data in a structured way that emphasizes relationships and context.
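To make the rule-based approach tangible, here is a minimal sketch in Python - a toy forward-chaining loop over invented facts and rules, not how any of the products mentioned here implement it:

```python
# Toy rule-based reasoning (forward chaining): rules say
# "if all premises hold, conclude X". Facts and rules are invented.
facts = {"customer_is_premium", "order_delayed"}

rules = [
    ({"customer_is_premium", "order_delayed"}, "offer_voucher"),
    ({"offer_voucher"}, "notify_support_team"),
]

# Keep applying rules until no new conclusions appear.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)
# Contains the derived facts 'offer_voucher' and 'notify_support_team'
```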
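The probabilistic approach can be sketched with Bayes' rule, sticking to the diagnosis analogy. All probabilities below are made up for illustration:

```python
# Toy probabilistic reasoning: how likely is the disease
# given an observed symptom? All numbers are illustrative.
p_disease = 0.01                  # prior: 1% of patients have the disease
p_symptom_given_disease = 0.90    # sensitivity
p_symptom_given_healthy = 0.05    # false-positive rate

# P(symptom) via the law of total probability
p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_healthy * (1 - p_disease))

# P(disease | symptom) via Bayes' rule
p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom
print(f"{p_disease_given_symptom:.1%}")  # roughly 15.4%
```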
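And a knowledge graph, at its simplest, is a set of (subject, relation, object) triples that can be traversed to answer questions across relationships - again a minimal sketch with invented facts:

```python
# Toy knowledge graph: facts stored as (subject, relation, object)
# triples, queried by following relationships.
triples = [
    ("Copilot Studio", "is_part_of", "Power Platform"),
    ("Power Platform", "is_made_by", "Microsoft"),
    ("Copilot Studio", "uses", "deep reasoning models"),
]

def related(subject, relation):
    """Return all objects linked to `subject` via `relation`."""
    return [o for s, r, o in triples if s == subject and r == relation]

# Follow two hops: which company is behind the platform Copilot Studio belongs to?
for platform in related("Copilot Studio", "is_part_of"):
    print(platform, "->", related(platform, "is_made_by"))
# Power Platform -> ['Microsoft']
```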
Benefits of Reasoning
To summarize: reasoning is great. It overall improves the answers the user receives from the AI.
Let's break this down to understand the differences from models without reasoning:
Faster Problem Solving: As the AI understands the context better and faster, the user receives a fitting solution to their problem and initial ask more quickly.
Improved Decision Making: AI reasoning enhances decision-making processes by evaluating multiple factors and predicting outcomes with higher accuracy.
Mimicking Human-like Behavior: One of the most fascinating aspects of AI reasoning is its ability to emulate human intelligence. Unlike traditional AI systems that rely on pre-programmed rules and pattern recognition, reasoning-based AI can process information, draw conclusions, and adapt strategies based on new inputs.
Enhancing Human-AI Collaboration: With reasoning capabilities, AI systems can explain their decisions transparently and adjust dynamically to user needs. This fosters a higher degree of trust and allows humans to work alongside AI as partners.
Especially with the last point it will be interesting to see whether our newly learned skill of prompting is already outdated again.
Win-Win-Win?
With all its benefits, is reasoning the holy grail for our AI agents and bots? Unfortunately, no - such innovative features always have some challenges too:
Contextual Understanding: AI systems often struggle to interpret nuanced meanings, such as sarcasm, humor, or cultural references, because these rely on shared human experiences. This limitation can lead to misinterpretation of user intentions and irrelevant or incorrect responses.
Data Quality: AI reasoning is only as good as its data. Incomplete, conflicting or low-quality data can lead to inadequate conclusions.
Resource Intensity: Developing AI reasoning systems can be computationally expensive, requiring powerful hardware and extensive training time.
Ethical Issues: From bias in data to the possibility of automating decisions that should remain a human, moral responsibility, careful oversight is essential to ensure good solutions.
Did Copilot now become smarter?
In short: yes. At least he (?) now understands better what you're possibly expecting. Especially with minimalistic prompt approaches, users will be able to achieve better results in less time.
It is definitely a step in the direction of better collaboration with AI.