Gemini-generated image showing the title of the blog post, Tackling Content Filtered Errors in Copilot Agents, with a robot picking up an agent as though rearchitecting the Copilot Agent.

Tackling ContentFiltered Errors in Copilot Agents – Rethinking Copilot Agent Architecture


Introduction

So I’ve been using Copilot Agents more and more every day, both personally and in my work life, to help with my workflows.

I have been spending time looking at how I can embed AI into my daily routines.

In particular, I have been looking at how I can use agents to make me more productive and efficient. One area I spend a lot of time on is keeping up to date with what’s going on in the world, and that is something I’ve been using agents for: horizon scanning!

So, horizon scanning is the process of looking into trends. For me, that means the latest general news, business news and tech news, but I also use it to keep up to date with AI and technology. Of course, I am constantly trying to keep up to date with Microsoft 365. I also want to keep an eye on white papers and research from outlets such as Google, Microsoft, OpenAI, Gartner, McKinsey, etc.

Since GPT-5 launched, with its stronger research and reasoning capabilities, I’ve been spending more time using these models with my agents, as I get better results.

Now, one of the challenges recently has been that when I build these agents using Copilot Studio, I am looking to get content sent to me in the morning. Copilot Studio has triggers which can be executed for all sorts of reasons, and I have been using the daily scheduling trigger, which fires off every morning. This workflow calls a Copilot Agent and gets a result. Unfortunately, I have been getting errors when those agents run. These are Content Filtered errors or exceptions, which occur when Microsoft’s Responsible AI detects an issue and kicks in because it thinks there is an attack occurring against the AI.
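
To make the failure mode concrete, here is a minimal sketch of that morning workflow in Python. The function names, payload fields and error shape are all illustrative assumptions, not the real Copilot Studio or Agent Flow API — the point is simply that the scheduled call either returns content or surfaces a content-filter block.

```python
# Hypothetical sketch of the daily-trigger workflow. "call_agent" stands in
# for whatever mechanism invokes the Copilot Agent; the "ContentFiltered"
# error field is an assumption made for illustration.

def run_daily_briefing(call_agent):
    """Call the agent and surface content-filter blocks explicitly."""
    response = call_agent("Get me the latest news")
    if response.get("error") == "ContentFiltered":
        # Responsible AI decided the prompt looked like an attack
        raise RuntimeError("Blocked by the content filter")
    return response["text"]

# Simulated agent behaviours, purely for illustration:
def healthy_agent(prompt):
    return {"text": "Here is this morning's briefing..."}

def filtered_agent(prompt):
    return {"error": "ContentFiltered"}
```

With the healthy agent the briefing text comes back; with the filtered one, the workflow fails with no content at all, which is exactly what I was seeing each morning.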

Here is an overview of the Agent Flow

Being an MVP, I am fortunate to have access to the Copilot Studio Product Team. So I reached out and explained the issue I was seeing. They reviewed one of my agents and said the issue was in the way I was asking the Copilot Agent to execute. From the Responsible AI perspective, it looks like an attack on the system: the prompt being run is trying to manipulate the output, so it looks like I am trying to manipulate the AI into doing something it wasn’t instructed to do. Therefore, it’s being picked up as an attack, and I need to stop doing that.

So, this got me thinking: I need to rethink how I architect these agents. Copilot Studio, as you are probably aware, has the concept of topics. Topics allow you to have an agent which can support multiple capabilities within one agent. For each topic, you describe how the topic should be detected and used. This is used by Copilot Studio’s orchestration engine to understand which topic to trigger.

This allows the building of an agent that supports multiple capabilities, each with their individual workflows or sub-processes.

In my example, I had an agent with two topics: one for getting the latest news, and another for finding the latest research and white papers. These topics were executed by a scheduled trigger which runs an Agent Flow. The Agent Flow calls the Copilot Agent with a prompt stating whether it’s the latest news or the latest research that I want. It was this prompt that was triggering the ContentFiltered error and meant that I was not getting any information back.

So this got me rethinking my approach, and I have now split the agent into two agents: one for getting the latest news and the other for getting the latest research.

All the details of what the agent should do are in the Agent instructions, and I simply call the Copilot Agent with the prompt, “Please execute your instructions”, and away it goes.
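
The difference between the two approaches can be sketched like this. The agent IDs, payload shape and old prompt wording are hypothetical, invented for illustration; only the neutral prompt text comes from my actual setup.

```python
# Hypothetical before/after sketch of how the agent is invoked.

def old_style_prompt(topic: str) -> str:
    # One agent, with the calling prompt steering which topic runs and
    # shaping its output -- the pattern the content filter read as
    # manipulation.
    return f"Run the '{topic}' topic and return only the formatted result"

def new_style_request(agent_id: str) -> dict:
    # One agent per capability; the prompt is neutral, and all behaviour
    # lives in that agent's own instructions.
    return {"agent": agent_id, "prompt": "Please execute your instructions"}
```

The key design shift is that the calling prompt no longer carries any instructions at all; each dedicated agent already knows what to do.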

Now, since these changes have been made, the Agents have been working reliably for the past few days.

Conclusion

So, when you are thinking about the architecture of your agents, think about how they are going to be executed. Consider having multiple agents rather than using topics when external systems or processes will be calling an agent, rather than users interacting with it directly through the Copilot Studio agent.

So rather than having one agent with, say, 5 topics, you would have 5 agents, one for each topic. If you wanted to be able to access the agents from one place, then you could look at building a main agent that calls the other five agents, with each of those agents representing a topic.

This is where my thinking is going these days when architecting these solutions. There are certain challenges and considerations when building architectures with child agents, so it might be that they are not needed, but it depends on how the users need to interact with your “main” agent.

Build Better Agent Experiences for your Customers with Copilot Studio and Topic Variables


Introduction

At iThink 365, we have been building AI Agents using both Microsoft Copilot Studio and Azure AI Foundry. The Microsoft Copilot Studio product is constantly evolving and improving. When chatting with people, I often find they are not aware of some really useful features that help you build better and more intuitive agents.

In this post, I wanted to share a couple of tips on how you can improve the conversation flow of Agents built in Microsoft Copilot Studio, making them more intuitive and easier for you and your customers to use.

Input Variables

Copilot Studio supports input variables which are scoped at the topic level. This incredible feature allows the Copilot Studio LLM to discover and fill the input variables based on how you describe the information that should populate each variable.

This capability is really powerful and can take a lot of the heavy lifting out of detecting, transforming and capturing information for each topic.

These input variables are configured for a topic using the Details tab. Within the details tab you have three tabs: Topic details, Input and Output. Use the Input tab to configure the inputs to the topic.

These variables allow you to capture key parameters and information that you need for the topic to function properly.

Let’s go through an example. In this example, we are processing a user’s leave request, handled by a leave or holiday request topic. For this topic, you might have a start date and an end date for the leave request, plus a comment or reason. You create an input variable for each input, such as:

  • Leave start date
  • Leave end date
  • Holiday comments

The topic will then have each variable automatically populated from the user’s input. Let’s take the following input from the user:

“I would like to go on holiday with my family from the 1st August to 14th August.”

If we configure the topic input variables correctly, then this user prompt will equate to the following:

  • Leave start date => 1st August 2025
  • Leave end date => 14th August 2025
  • Holiday comments => Taking a holiday with family.
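
Viewed as data, here is a small Python sketch of that slot-filled result. It is purely illustrative: the variable names mirror the example above, and the ISO date format is an assumption about how you might configure the date variables.

```python
# The user's natural-language request:
user_message = ("I would like to go on holiday with my family "
                "from the 1st August to 14th August.")

# What the topic's input variables hold after the LLM fills the slots
# (variable names and date format are illustrative assumptions):
filled_inputs = {
    "leave_start_date": "2025-08-01",   # "1st August" resolved to a date
    "leave_end_date": "2025-08-14",     # "14th August" resolved to a date
    "holiday_comments": "Taking a holiday with family.",
}
```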

These input variables are powerful, and they help simplify topics. Rather than a topic asking the user a series of questions for more information, the LLM and its knowledge fill out the input variables, or fill the slots. This means that the conversation with the agent is much more natural, and a request can be put together in natural language as a sentence rather than a series of one-word prompts for each part of the request.

The image below shows an example without input variables. You can see all the questions the user would be asked for each of the dates of the leave request; this is not going to give a smooth or conversational feel to the agent.

The input variables can be configured so that you can give feedback if a variable cannot be filled. An example is shown below.

Using this approach helps you guide your user with what information to provide to get the topic to work correctly.

Now that we have talked about input variables, let’s talk about how we can use output variables to improve the responses that Copilot Agents provide to users.

Output Variables

When we first started building agents in Copilot Studio, one of the challenges we had was

“How do you help the Agent to respond with the right information?”

You can use activities such as the message activity to output responses back to the user.

Well, fortunately, Copilot Studio has “Output” variables which can be used to capture the key information that the topic should include. These are used by the Agent and LLM to craft a suitable conversational response to the user.

How do we use and configure the output variable?

The output variables are configured in the same place as the input variables: click on the Details tab and choose Output. Here, you can create multiple variables for the topic output and describe to the Copilot Studio Agent what each variable contains. This helps the Agent come up with a suitable response for the user once the topic completes.

Once created, the output variable can be set during the processing of the topic. In our leave request example, we might fill the output variable with the details of the request: the start date, end date, leave request comment, manager details and confirmation that it has been submitted for approval.
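
As a rough sketch, the output variable set at the end of the leave request topic might hold something like the following. All field names and values here are made up for illustration; the LLM uses this data, plus the description you give the variable, to phrase a natural confirmation message.

```python
# Hypothetical contents of the topic's output variable after processing:
leave_request_result = {
    "start_date": "2025-08-01",
    "end_date": "2025-08-14",
    "comment": "Taking a holiday with family.",
    "manager": "<manager name>",        # placeholder, not a real value
    "status": "Submitted for approval",
}
```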

Using the Output variables gives a great way to control the information that should be given to the LLM so that it can use it to respond back to the user.

A key benefit of using Output variables is that you are not showing raw data back to the user, which could confuse them as to what is and isn’t important.

To help, let’s take the following example: an agent that helps us create a marketing campaign. This agent allows us to create a campaign over two weeks which builds up a story. The agent uses AI to generate the idea, and we can ask it for post ideas. Here, the agent returns a chunk of data. I used to output this via a message activity so that the agent had some content with which to respond to the user in a more natural way. However, this means a load of data comes back that is not nicely formatted and, as mentioned above, is confusing to the end user.

However, using the Output variables to capture that information, the output from the agent now looks like this.

There is no rubbish being output to the user; instead, in the second example, we only see the result that the Agent displays, nicely formatted by the Agent’s LLM.

This is a much better experience for the user and leads to a conversational flow from the agent to the user, which feels nicer and more natural to interact with.

Conclusion

In this article, we explained how we can make use of key Copilot Studio Topic features, which allow us to use the power of Generative AI and LLMs to do the heavy lifting and detection of inputs. This helps us improve how our agents function, making them feel more natural and enhancing the conversational style of the agent when processing user requests and responding to them.