
Posts by Simon Doy

I am an avid SharePoint enthusiast who works as an Independent SharePoint Consultant based in Leeds, United Kingdom. I am one of the organisers of the Yorkshire SharePoint User Group in the United Kingdom. I have been designing and building SharePoint solutions since 2006.

Build Better Agent Experiences for your Customers with Copilot Studio and Topic Variables


Introduction

At iThink 365, we have been building AI Agents using both Microsoft Copilot Studio and Azure AI Foundry. The Microsoft Copilot Studio product is constantly evolving and improving. When chatting with people, I often find that they are not aware of some really useful features that help you build better and more intuitive agents.

In this post, I wanted to share a couple of tips on how you can improve the conversation flow of Agents built in Microsoft Copilot Studio, making them more intuitive and easier for you and your customers to use.

Input Variables

Copilot Studio topics can have input variables which are scoped at the topic level. This incredible feature allows the Copilot Studio LLM to discover and fill the input variables based on how you describe the information that should populate each variable.

This capability is really powerful and can take on a lot of the heavy lifting of detecting, transforming and capturing information for each topic.

These input variables are configured for a topic using the Details tab. Within the Details tab you have three tabs: Topic details, Input and Output. Use the Input tab to configure the inputs to the topic.

These variables allow you to capture key parameters and information that you need for the topic to function properly.

Let’s go through an example. In this example, we are processing a user’s leave request, which we handle by creating a leave (or holiday) request topic. The topic needs the start date and end date of the leave request, plus a comment or reason for the leave. We create an input variable for each of these:

  • Leave start date
  • Leave end date
  • Holiday comments

The topic will then have each variable automatically populated from the user’s input. Let’s take the following input from the user:

“I would like to go on holiday with my family from the 1st August to 14th August.”

If we configure the topic input variables correctly, then this user prompt will equate to the following:

  • Leave start date => 1st August 2025
  • Leave end date => 14th August 2025
  • Holiday comments => Taking a holiday with family.

These input variables are powerful, and they help simplify topics. Rather than building a topic as a set of questions that each ask the user for more information, you let the LLM and its knowledge fill out the input variables, or "fill the slots". This means that the conversation with the agent is much more natural: a request can be put together in natural language as a sentence rather than a series of one-word prompts for each part of the request.

The image below shows an example if you were not using the input variables. You can see all the questions that the user would be asked for each of the dates of the leave request; this is not going to give a smooth or conversational feel to the agent.

The input variables can be configured so that you can give feedback if a variable cannot be filled. An example is shown below.

Using this approach helps you guide your user with what information to provide to get the topic to work correctly.
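To make the slot-filling idea concrete, here is a rough mental model in plain C#. This is purely illustrative: the record and helper below are hypothetical and are not part of Copilot Studio, which handles all of this for you through the topic's Input configuration.

  // Purely conceptual sketch: Copilot Studio does this for you via the topic's
  // input variables; these types are hypothetical, not part of any product API.
  public sealed record LeaveRequestInputs(
      DateOnly? LeaveStartDate,   // filled from "from the 1st August"
      DateOnly? LeaveEndDate,     // filled from "to 14th August"
      string?   HolidayComments); // filled from "holiday with my family"

  public static class LeaveSlotChecks
  {
      // Mirrors the "feedback if a variable cannot be filled" configuration:
      // if the LLM could not extract a value, re-prompt for just that slot.
      public static string? PromptForMissingSlot(LeaveRequestInputs inputs) =>
          inputs.LeaveStartDate is null ? "What date should your leave start?"
        : inputs.LeaveEndDate   is null ? "What date should your leave end?"
        : null; // all slots filled, the topic can proceed
  }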

Now that we have talked about input variables, let’s talk about how we can use output variables to improve the responses that Copilot Agents provide to users.

Output Variables

When we first started building agents in Copilot Studio, one of the challenges we had was

“How do you help the Agent to respond with the right information?”

You can use activities such as the message activity to output responses back to the user.

Well, fortunately Copilot Studio has “Output” variables, which can be used to capture the key information that the topic should include in its response. The Agent and its LLM use this information to compose a suitable, conversational response to the user.

How do we use and configure the output variable?

The output variables are configured in the same place as the input variables: click on the Details tab and choose Output. Here, you can create multiple variables for the topic output and describe to the Copilot Studio Agent what each variable contains. This helps the Agent come up with a suitable response for the user when the topic completes.

Once created, an output variable can be set during the processing of the topic. In our leave request example, we might fill the output variable with the details of the leave request: the start date, the end date, the leave request comment, the manager’s details and the fact that it has been submitted for approval.

Using output variables gives you a great way to control the information handed to the LLM so that it can use it to respond back to the user.

One of the key benefits of using output variables is that you are not showing raw data back to the user, which could confuse them as to what is important and what is not.
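Again as a purely conceptual sketch (these types are hypothetical, not Copilot Studio's API): an output variable behaves like a typed summary that the Agent's LLM turns into prose, instead of raw data being dumped at the user.

  // Conceptual only: the output variable carries structured facts, and the
  // Agent's LLM, guided by your description of each field, phrases the reply.
  public sealed record LeaveRequestOutcome(
      DateOnly StartDate,   // 1st August 2025
      DateOnly EndDate,     // 14th August 2025
      string   Comment,     // "Taking a holiday with family."
      string   ManagerName, // whose approval is needed
      string   Status);     // e.g. "Submitted for approval"

  // The user never sees this record; they see something like:
  // "Your leave from 1 to 14 August has been sent to your manager for approval."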

To illustrate, let’s take the following example: an agent that helps us create a marketing campaign. This agent allows us to create a campaign over two weeks which builds up a story. The agent uses AI to generate the idea, and we can ask it for the post ideas. Here, the agent returns a chunk of data. I used to output this via a message activity so that the agent had some content with which to respond back to the user in a more natural way. However, this means that you see a load of data coming back that is not nicely formatted and, as mentioned above, is confusing to the end user.

However, using the Output variables to capture that information, the output from the agent now looks like this.

There is no rubbish being output to the user. Instead, in the second example, we only see the result that the Agent displays, and this has been nicely formatted by the Agent’s LLM.

This is a much better experience for the user and leads to a conversational flow from the agent to the user, which feels nicer and more natural to interact with.

Conclusion

In this article, we explained how we can make use of key Copilot Studio Topic features, which allow us to use the power of Generative AI and LLMs to do the heavy lifting and detection of inputs. This helps us improve how our agents function, making them feel more natural and enhancing the conversational style of the agent when processing user requests and responding to them.

My Adventures in building and understanding MCP with Microsoft 365 Copilot


So, I have been following the Model Context Protocol (MCP) world for a while now. I first heard about MCP just as we were going out to MVP Summit in March 2025.

Already, the Microsoft Copilot Extensibility team were on the case, with people like Fabian Williams experimenting with MCP. I have been following this space, reading articles, and finally, over the summer, I have had some time to roll up my sleeves and look at how I would build an MCP Server, primarily with the aim of making it available to Microsoft 365 Copilot via Microsoft Copilot Studio and the Microsoft 365 Copilot extensibility world.

This article will be part of a blog series that describes the trials and tribulations of building an MCP Server.

The MCP Server I wanted to build was for a small demo that I wanted to create, bringing together multi-agents and MCP. The goal was to create a solution that allows a marketing person to create a Marketing Campaign which describes a story for an ideal client, and then allows the creation of social media content on LinkedIn.

The idea was that we would have four Agents:

  • Marketing Campaign Agent
  • Social Media Content Creator Agent
  • LinkedIn Posting Agent
  • Marketing Content Quality Assurance Agent

The plan was to make these agents available through Microsoft 365 Copilot and build them using Microsoft Copilot Studio. Multi-Agent support was launched at Microsoft Build 2025 in May and was made available to us in June 2025.

My first step was to sit down and start doing some investigation. I needed to answer questions such as:

  • How do we host MCP Servers?
  • How do we secure them?
  • How do we build them, deploy them, debug them?

Research

Like all good developers / solution architects / vibe coders… I needed to get stuck in, even though we all know we should research things first. Well, I ignored that for about an hour, and then I thought I had better understand how to build things before going any further.

So, I did a bit of research and found a great set of articles by Oleksii Nikiforov on building MCP Servers hosted within Aspire; here is the link to his posts.

From these posts I learnt a bit more about Aspire (which I had heard a lot about but never tried) and MCP Inspector (which I had not heard of but quickly got to grips with).

The tutorials that Oleksii has put together are great and I quickly had an MCP Server running through Aspire which I could connect to with MCP Inspector.

Microsoft product groups are busy writing a number of different frameworks for building MCP Servers, and the one that has a lot of momentum behind it is the MCP .NET SDK: https://github.com/modelcontextprotocol/csharp-sdk

The other framework that caught my attention is the Microsoft Azure Functions MCP server framework; there is a .NET sample, hosted on GitHub and documented on Microsoft Learn: https://learn.microsoft.com/en-us/samples/azure-samples/remote-mcp-functions-dotnet/remote-mcp-functions-dotnet/

I must admit I really like the idea of MCP Servers with Azure Functions. There are some great videos on how to build MCP Servers with Azure Functions, and we will delve into them a little later.

However, from the research that I did, it seemed that most people were building MCP Servers using containers, so I thought I would start there with the .NET SDK, using Oleksii’s approach.
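To give a flavour of what that looks like, here is a minimal stdio-based server sketched with the MCP .NET SDK. It is based on the SDK's README at the time of writing; the SDK is still in preview, so the APIs may have moved, and the CampaignTools tool is just a hypothetical example for the marketing demo.

  using Microsoft.Extensions.DependencyInjection;
  using Microsoft.Extensions.Hosting;
  using ModelContextProtocol.Server;
  using System.ComponentModel;

  var builder = Host.CreateApplicationBuilder(args);
  builder.Services
      .AddMcpServer()                 // register the MCP server
      .WithStdioServerTransport()     // STDIO transport (see the transports section below)
      .WithToolsFromAssembly();       // discover [McpServerTool] methods in this assembly
  await builder.Build().RunAsync();

  // A hypothetical tool for the marketing campaign demo.
  [McpServerToolType]
  public static class CampaignTools
  {
      [McpServerTool, Description("Suggests social media post ideas for a campaign theme.")]
      public static string SuggestPostIdeas([Description("The campaign theme")] string theme)
          => $"Post ideas for the '{theme}' campaign: a teaser, a case study, a behind-the-scenes story.";
  }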

There was quite a bit to learn, which I will talk about next, and then in the next blog post I’ll delve into building out the MCP Server with the different approaches.

The final bit of research that I did was to read the MCP specification. I will be honest: I read it and got a bit more of an idea, but those RFC-style documents are hard work.

However, the MCP website is much nicer and easier to understand, so here is a link to the MCP Specification: https://modelcontextprotocol.io/specification/2025-03-26/basic

Microsoft 365 Copilot was quite good at giving me an overview of the protocol.

Overview of the MCP Protocol

MCP is built on JSON-RPC, using UTF-8 encoded messages for communication between clients and servers. It supports multiple transport mechanisms, allowing flexibility depending on deployment needs.

To understand the relationship between the different components, have a read of the lifecycle process for the Model Context Protocol: https://modelcontextprotocol.io/specification/2025-06-18/basic/lifecycle
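To give a feel for what travels over the wire, this is roughly what the initialize request looks like: the first JSON-RPC message a client sends during the lifecycle handshake (the id, client name and version here are illustrative):

  {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
      "protocolVersion": "2025-03-26",
      "capabilities": {},
      "clientInfo": { "name": "example-client", "version": "1.0.0" }
    }
  }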

MCP and Authentication

MCP authentication has been evolving: an area which was missing at the initial launch of MCP is now defined. I suspect that this will continue to change and evolve with feedback.

I found the following post by Den really useful for understanding auth and the direction it is heading in. Of course, his posts are going to be great: Den is one of the core maintainers of MCP and has some great articles and insights into the design decisions.

OAuth In The MCP C# SDK: Simple, Secure, Standard · Den Delimarsky

https://den.dev/blog/mcp-csharp-sdk-authorization/

MCP Inspector

First, let’s talk about some tools, and we should start with the MCP Inspector (https://github.com/modelcontextprotocol/inspector). This seems like the go-to tool when testing out MCP Servers. I am sure there are more out there, and I will be doing some research into those tools as well.

The MCP Inspector allows you to inspect your MCP Server, which is great, and it supports authentication via OAuth2 or a bearer token.

Additionally, it supports the main MCP Server transports, which we will talk about shortly.

The solution that Oleksii has put together embeds a version of MCP Inspector and makes it easy to use. However, I found that this was an older version, so I got into the habit of using the following command to run the latest version of MCP Inspector from the command line:

npx @modelcontextprotocol/inspector dotnet run

I’ll be honest, I do not remember using npx (Node Package Execute) before, but it has been around for a while. It is an amazing tool that is part of the npm CLI (npm being the Node Package Manager), and it enables Node.js packages to be executed directly from the npm registry.

The other advantage of using npx to run MCP Inspector is that you can see what the MCP Inspector is up to more easily as it outputs logs to the command line.

MCP Transport Types

One of the first things that I needed to get my head around was the different MCP Transport types. These different communication protocols are used to enable MCP in different scenarios.

Let’s talk about these next.

STDIO Transport

This is the most lightweight and direct transport method.

  • How it works: The client launches the MCP server as a subprocess.
  • Communication:
    • Messages are sent via stdin and received via stdout.
    • Only valid JSON-RPC messages are allowed—no embedded newlines.
    • Logging (if any) is done via stderr.
  • Use case: Ideal for local development or tightly coupled systems where simplicity and low overhead are key

STDIO Transport allows a local MCP Client to instantiate and run a local MCP Server and talk to it through standard input and output. This is great for local MCP Clients like Visual Studio Code with GitHub Copilot, Claude, etc.
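For example, a local stdio server can be registered with Visual Studio Code through a .vscode/mcp.json file along these lines (the server name and project path are hypothetical, and the exact schema may vary between client versions):

  {
    "servers": {
      "campaign-demo": {
        "type": "stdio",
        "command": "dotnet",
        "args": ["run", "--project", "./CampaignMcpServer"]
      }
    }
  }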


SSE (Server-Sent Events)

This was the original streaming mechanism used in earlier versions of MCP.

  • How it worked:
    • Clients would initiate an HTTP connection and receive a stream of server messages via SSE.
    • It allowed for real-time updates without polling.
  • Limitations:
    • SSE is unidirectional (server-to-client only).
    • It lacked flexibility for more complex bidirectional communication.
  • Status: Deprecated in favour of Streamable HTTP as of protocol version 2025-03-26

This is currently the transport of choice for MCP Servers built on Azure Functions, which caused me problems and made me rethink that approach. I know that the Azure Functions team will be working on resolving this issue.


Streamable HTTP (Current Standard)

This is the modern, flexible transport replacing SSE.

  • How it works:
    • The server runs independently and handles multiple clients.
    • Clients send JSON-RPC messages via HTTP POST requests.
    • The server can respond using either standard HTTP responses or SSE for streaming.
  • Security Considerations:
    • Servers must validate the Origin header to prevent DNS rebinding attacks.
    • Local servers should bind to localhost only.
    • Authentication is strongly recommended.
  • Use case: Best for scalable, production-grade deployments where streaming and multi-client support are needed

This is the current flavour of the week and if you are building MCP Servers that are going to run over a network then this is the approach you should be taking.
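With the MCP .NET SDK, switching the earlier stdio sketch over to Streamable HTTP is mostly a hosting change via the ModelContextProtocol.AspNetCore package. Again, this is a sketch based on the preview SDK, so treat the exact method names as subject to change:

  using Microsoft.AspNetCore.Builder;
  using Microsoft.Extensions.DependencyInjection;

  var builder = WebApplication.CreateBuilder(args);
  builder.Services
      .AddMcpServer()
      .WithHttpTransport()        // Streamable HTTP instead of STDIO
      .WithToolsFromAssembly();

  var app = builder.Build();

  // Per the security considerations above: bind local test servers to
  // localhost only, and put authentication in front of anything networked.
  app.Urls.Add("http://localhost:3001");

  app.MapMcp();                   // exposes the MCP endpoint
  app.Run();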

MCP Client

We are nearly at the end of this blog post, and I have not really talked about the MCP architecture; to be honest, there are some great resources out there that do this. However, we need to talk about the main parts of an MCP ecosystem. The MCP Client is the consumer of MCP Servers. The MCP Inspector is an example of an MCP Client: it can connect to an MCP Server and discover its resources, its tools and how to authenticate with it.

I can see that more and more tools will have MCP Clients built in to allow them to consume MCP Servers and use their capabilities.

For more information on the MCP Client, read https://modelcontextprotocol.io/specification/2025-06-18/client/roots
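The MCP .NET SDK includes a client side too. As a sketch (preview APIs, and a hypothetical server path), this is roughly how a client launches a stdio server and discovers its tools:

  using ModelContextProtocol.Client;

  // Launch a local server over stdio and list the tools it advertises.
  var transport = new StdioClientTransport(new StdioClientTransportOptions
  {
      Name = "Campaign Demo",                                  // display name
      Command = "dotnet",
      Arguments = ["run", "--project", "./CampaignMcpServer"], // hypothetical path
  });

  var client = await McpClientFactory.CreateAsync(transport);

  foreach (var tool in await client.ListToolsAsync())
      Console.WriteLine($"{tool.Name}: {tool.Description}");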

MCP Server

The MCP Server is the part of the MCP architecture which exposes resources, tools and prompts via the MCP primitives. MCP Servers operate as independent components and should be built with a focused set of capabilities.

I am really fascinated to see how the protocol evolves to handle the challenges of different authentication approaches and types, but all of this is advertised and described by the MCP Server itself.

Fundamentally, though, MCP Clients learn what is available to them by discovering the resources and tools when they interrogate the MCP Server.

Conclusion

In this blog post I set the scene for what I have been up to with my adventures in the Model Context Protocol space. I have tried to document my journey and the resources that I have discovered. I talk about some of the components and tools and link to resources that I hope you find useful.

In the next blog post I am going to talk about my experiences with building MCP Servers with the MCP .NET SDK and delve into different hosting models and the challenges with them as you look to build secure and encrypted MCP Servers.

Please connect with me on LinkedIn and Bluesky; I would love to hear how you are getting on with building MCP resources.