TL;DR: I built an Azure-powered “Next Best Action” copilot that reads service desk tickets, searches a SharePoint-backed KB via Azure AI Search, and posts an HTML “do this next” note back to the ticket – without my Copilot Agent ever talking to SharePoint directly.
The goal was simple: Build a bot that could provide a “Next best action” for each new service desk query a user logged, based on an internal knowledgebase that lives on SharePoint.
One teeny tiny problem – most ticketing systems were never designed to talk directly to enterprise knowledge platforms like SharePoint, especially when they don’t support modern authentication from outbound webhooks or automations.
In my case, our helpdesk ticketing system could not authenticate against Microsoft 365 or custom APIs, but all of our IT knowledgebase articles lived in SharePoint. The goal was to build a Copilot agent that could utilise these articles stored on SharePoint, and:
- Accept ticket data from the ticketing system (unauthenticated)
- Search our SharePoint knowledgebase semantically for relevant information
- Return a “next best action” recommendation for each new ticket logged
To solve this, I built an Azure-based retrieval pipeline using:
- Azure Logic Apps (for SharePoint → Blob sync and summarisation), utilising GPT-4o for chat completion.
- Azure Blob Storage (canonical KB store)
- Azure AI Search
- Azure OpenAI embeddings using the text-embedding-ada model
- Copilot Studio agent with 3 Topics to handle the incoming JSON from the ticketing system, the Azure AI Search and generative AI steps, and writing the “next best action” note back to the ticket.
This post walks you through the entire solution, with full JSON so you can deploy your own version.
1. High-Level Architecture
Key ideas:
- SharePoint content is mirrored into Blob Storage via a Logic App
- Azure AI Search indexes the Blob content with a custom skillset.
- Azure OpenAI agent is used for both document summarisation and embeddings.
- Copilot Studio Agent talks to Azure AI Search and uses the results to produce the next best action output that is posted to the ticket as a note.
Pre-Requisites
- Azure subscription with permissions to:
  - Create Logic Apps (Consumption / Standard)
  - Create Storage accounts and Blob containers
  - Create Azure AI Search service
  - Create Azure OpenAI / Azure AI Foundry project
- Access to:
  - SharePoint site & KB library
  - Ticketing system API credentials / automation
  - Copilot Studio
2. Logic App: Sync SharePoint Files to Azure Blob
First, we need a scheduled process that copies any updated SharePoint KB files into an Azure Storage account. This enables Azure AI Search to index the content without direct SharePoint access.
Highlights:
- Runs every 30 minutes.
- Queries SharePoint for files modified in the last 30 minutes (or your chosen window).
- Copies each file into a container called copilotdocuments in the storage account.
- Sends a summary report to Microsoft Teams (optional but handy).
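If you want to sanity-check that time window outside the Logic App, the OData filter it builds can be reproduced in a few lines of Python (a quick sketch for testing queries against the SharePoint API, not part of the pipeline):

```python
from datetime import datetime, timedelta, timezone

def modified_since_filter(window_minutes: int = 30) -> str:
    """Build the OData filter the Logic App uses to find recently modified
    files - equivalent to: Modified ge datetime'@{addMinutes(utcNow(), -30)}'"""
    cutoff = datetime.now(timezone.utc) - timedelta(minutes=window_minutes)
    # SharePoint's OData filter expects an ISO-8601 UTC timestamp
    return f"Modified ge datetime'{cutoff.strftime('%Y-%m-%dT%H:%M:%SZ')}'"
```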
Steps:
- Create your Azure Storage account, then create a Blob container within it. I’ve called mine copilotdocuments

- Build your Logic App to look at your SharePoint library for changes, and copy any file that has been modified to your folder on your Azure storage account blob.
Logic App: (SharePoint to Blob File Copy)
SharePoint to Blob Logic App Code (Consumption):
{
"$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
"contentVersion": "1.0.0.0",
"triggers": {
"Every_30_Minutes": {
"type": "Recurrence",
"description": "1 second delay added to allow for file changes.",
"recurrence": {
"frequency": "Minute",
"interval": 30,
"timeZone": "GMT Standard Time",
"startTime": "2025-11-18T00:00:01Z"
}
}
},
"actions": {
"Initialize_CopiedFiles": {
"type": "InitializeVariable",
"inputs": {
"variables": [
{
"name": "CopiedFiles",
"type": "Array",
"value": []
}
]
},
"runAfter": {}
},
"Get_files_modified_last_30_mins": {
"type": "ApiConnection",
"runAfter": {
"Initialize_CopiedFiles": ["Succeeded"]
},
"inputs": {
"host": {
"connection": {
"name": "@parameters('$connections')['sharepointonline']['connectionId']"
}
},
"method": "get",
"path": "/datasets/@{encodeURIComponent(encodeURIComponent('https://<YOUR_TENANT>.sharepoint.com/sites/<YOUR_SITE>'))}/tables/@{encodeURIComponent(encodeURIComponent('<YOUR_DOCUMENT_LIBRARY_GUID>'))}/items",
"queries": {
"folderPath": "/<YOUR_KB_FOLDER>",
"filter": "Modified ge datetime'@{addMinutes(utcNow(), -30)}'"
}
},
"runtimeConfiguration": {
"paginationPolicy": {
"minimumItemCount": 5000
}
}
},
"For_each_updated_file": {
"type": "Foreach",
"runAfter": {
"Get_files_modified_last_30_mins": ["Succeeded"]
},
"foreach": "@body('Get_files_modified_last_30_mins')?['value']",
"actions": {
"If_item_is_file": {
"type": "If",
"expression": {
"and": [
{
"equals": [
"@items('For_each_updated_file')?['{IsFolder}']",
false
]
},
{
"greaterOrEquals": [
"@items('For_each_updated_file')?['Modified']",
"@addMinutes(utcNow(), -30)"
]
}
]
},
"actions": {
"Get_file_content": {
"type": "ApiConnection",
"inputs": {
"host": {
"connection": {
"name": "@parameters('$connections')['sharepointonline']['connectionId']"
}
},
"method": "get",
"path": "/datasets/@{encodeURIComponent(encodeURIComponent('https://<YOUR_TENANT>.sharepoint.com/sites/<YOUR_SITE>'))}/files/@{encodeURIComponent(items('For_each_updated_file')?['{Identifier}'])}/content",
"queries": {
"inferContentType": true
}
}
},
"Create_blob": {
"type": "ApiConnection",
"runAfter": {
"Get_file_content": ["Succeeded"]
},
"inputs": {
"host": {
"connection": {
"name": "@parameters('$connections')['azureblob']['connectionId']"
}
},
"method": "post",
"path": "/v2/datasets/@{encodeURIComponent(encodeURIComponent('<YOUR_STORAGE_ACCOUNT_NAME>'))}/files",
"headers": {
"ReadFileMetadataFromServer": true
},
"body": "@body('Get_file_content')",
"queries": {
"folderPath": "/copilotdocuments/@{items('For_each_updated_file')?['{Path}']}",
"name": "@items('For_each_updated_file')?['{FilenameWithExtension}']",
"queryParametersSingleEncoded": true
}
},
"runtimeConfiguration": {
"contentTransfer": {
"transferMode": "Chunked"
}
}
},
"Append_to_CopiedFiles": {
"type": "AppendToArrayVariable",
"runAfter": {
"Create_blob": ["Succeeded"]
},
"inputs": {
"name": "CopiedFiles",
"value": "@items('For_each_updated_file')?['{FullPath}']"
}
}
},
"else": {
"actions": {}
}
}
}
},
"If_any_files_copied": {
"type": "If",
"runAfter": {
"For_each_updated_file": ["Succeeded"]
},
"expression": {
"greater": [
"@length(variables('CopiedFiles'))",
0
]
},
"actions": {
"Post_message_in_Teams": {
"type": "ApiConnection",
"inputs": {
"host": {
"connection": {
"name": "@parameters('$connections')['teams']['connectionId']"
}
},
"method": "post",
"path": "/beta/teams/conversation/message/poster/Flow bot/location/@{encodeURIComponent('Chat with Flow bot')}",
"body": {
"recipient": "<YOUR_EMAIL_OR_CHANNEL>",
"messageBody": "@{concat('<p><strong>Next Best Action SharePoint Logic App file copy report</strong></p><p>', formatDateTime(convertFromUtc(utcNow(), 'GMT Standard Time'), 'dd MMMM yyyy - HH:mm'), ' - Files copied to blob: ', string(length(variables('CopiedFiles'))), '<ul><li>', join(variables('CopiedFiles'), '</li><li>'), '</li></ul>')}"
}
}
}
},
"else": {
"actions": {}
}
}
},
"outputs": {},
"parameters": {
"$connections": {
"type": "Object",
"defaultValue": {}
}
}
}
Replace the placeholders like <YOUR_TENANT>, <YOUR_STORAGE_ACCOUNT_NAME>, etc. with your own values.
3. Logic App: Document Summarisation via Azure OpenAI
To improve search quality and reduce noise, I summarise each document into 2–3 sentences using GPT-4o running in Azure AI Foundry. Of course, you could use any chat completion model you wish.
Azure AI Search calls this Logic App as a Web API skill:
e.g.:
{
"values": [
{
"recordId": "1",
"data": {
"text": "Document text here..."
}
}
]
}
Requirements:
- An Azure OpenAI resource with a chat model (e.g. gpt-4o).
- An HTTP-triggered Logic App.
Steps:
- Create a new Foundry project, and deploy a GPT-4o (or GPT-4o mini) model.
- Build your Logic App and call your GPT-4o model via HTTP REST calls to summarise each file.
Logic App: (File Summarisation via GPT Model in Foundry)
Logic App for Document Summarisation:
{
"$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2023-01-31-preview/workflowDefinition.json#",
"contentVersion": "1.0.0.0",
"triggers": {
"When_an_HTTP_request_is_received": {
"type": "Request",
"kind": "Http",
"inputs": {
"schema": {}
}
}
},
"actions": {
"Initialize_Results": {
"type": "InitializeVariable",
"runAfter": {},
"inputs": {
"variables": [
{
"name": "results",
"type": "Array",
"value": []
}
]
}
},
"For_Each_Record": {
"type": "Foreach",
"runAfter": {
"Initialize_Results": ["Succeeded"]
},
"foreach": "@triggerBody()['values']",
"actions": {
"Log_RecordId": {
"type": "Compose",
"runAfter": {},
"inputs": "@{items('For_Each_Record')?['recordId']}"
},
"Log_Text": {
"type": "Compose",
"runAfter": {
"Log_RecordId": ["Succeeded"]
},
"inputs": "@{coalesce(items('For_Each_Record')?['data']?['text'], '')}"
},
"Check_Text_Length": {
"type": "If",
"runAfter": {
"Log_Text": ["Succeeded"]
},
"expression": "@greater(length(coalesce(items('For_Each_Record')?['data']?['text'], '')), 0)",
"actions": {
"Call_OpenAI": {
"type": "Http",
"inputs": {
"uri": "https://<YOUR_OPENAI_RESOURCE>.openai.azure.com/openai/deployments/<YOUR_CHAT_DEPLOYMENT>/chat/completions?api-version=2024-XX-XX",
"method": "POST",
"headers": {
"Content-Type": "application/json",
"api-key": "<YOUR_OPENAI_API_KEY>"
},
"timeout": "PT25S",
"body": {
"messages": [
{
"role": "system",
"content": "Summarize the following document in 2–3 sentences. Exclude signatures, footer boilerplate, and contact blocks."
},
{
"role": "user",
"content": "@{coalesce(items('For_Each_Record')?['data']?['text'], '')}"
}
],
"temperature": 0.3,
"max_tokens": 500
}
}
},
"Compose_Summary_Result": {
"type": "Compose",
"runAfter": {
"Call_OpenAI": ["Succeeded"]
},
"inputs": {
"recordId": "@{items('For_Each_Record')?['recordId']}",
"data": {
"summary": "@{coalesce(body('Call_OpenAI')?['choices']?[0]?['message']?['content'], '')}",
"finish_reason": "@{coalesce(body('Call_OpenAI')?['choices']?[0]?['finish_reason'], 'error')}",
"token_usage": "@{coalesce(body('Call_OpenAI')?['usage']?['total_tokens'], 0)}"
}
}
},
"Append_Summary": {
"type": "AppendToArrayVariable",
"runAfter": {
"Compose_Summary_Result": ["Succeeded"]
},
"inputs": {
"name": "results",
"value": "@outputs('Compose_Summary_Result')"
}
}
},
"else": {
"actions": {
"Compose_EmptyText_Result": {
"type": "Compose",
"inputs": {
"recordId": "@{items('For_Each_Record')?['recordId']}",
"data": {
"summary": "No meaningful content found in document.",
"finish_reason": "skipped",
"token_usage": 0
}
}
},
"Append_EmptyText": {
"type": "AppendToArrayVariable",
"runAfter": {
"Compose_EmptyText_Result": ["Succeeded"]
},
"inputs": {
"name": "results",
"value": "@outputs('Compose_EmptyText_Result')"
}
}
}
}
}
}
},
"Return_Batch_Response": {
"type": "Response",
"runAfter": {
"For_Each_Record": ["Succeeded"]
},
"inputs": {
"statusCode": 200,
"body": {
"values": "@variables('results')"
}
}
}
},
"parameters": {}
}
Azure AI Search expects this Web API skill to follow the standard Cognitive Search Web API contract (a values array in, a values array out).
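If you’d rather prototype the skill outside Logic Apps first, the same values-in/values-out contract is easy to sketch in Python. This is a minimal illustration only – the `summarise` callable is a placeholder for whatever chat completion call you use:

```python
def run_skill_batch(payload: dict, summarise) -> dict:
    """Apply a summarisation callable to each record in a Web API skill
    request ({"values": [...]}) and return the matching response envelope."""
    results = []
    for record in payload.get("values", []):
        text = (record.get("data") or {}).get("text") or ""
        if text.strip():
            data = {"summary": summarise(text), "finish_reason": "stop"}
        else:
            # Mirror the Logic App's empty-text branch
            data = {"summary": "No meaningful content found in document.",
                    "finish_reason": "skipped"}
        results.append({"recordId": record.get("recordId"), "data": data})
    return {"values": results}
```

The key property to preserve is that every `recordId` in the request appears exactly once in the response, or the indexer will report the document as failed.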
4. Azure AI Search Skillset: Enrichment Pipeline
Now we stitch together key phrase extraction, entity recognition, conditional routing, and the summarisation Web API into a single skillset.
Key elements:
- KeyPhraseExtractionSkill for keyPhrases
- SplitSkill to clean and normalise content
- ConditionalSkill to avoid unnecessary summarisation calls
- WebApiSkill to call the Logic App above
- EntityRecognitionSkill V3 for people/locations/organisations/emails/URLs
Azure AI Search skillset JSON:
{
"name": "nba-enrichment-skillset",
"description": "Enrich documents with key phrases, entities, and summaries using cleaned sentence chunks",
"skills": [
{
"@odata.type": "#Microsoft.Skills.Text.KeyPhraseExtractionSkill",
"name": "keyPhraseSkill",
"description": "Extract key phrases from content",
"context": "/document",
"defaultLanguageCode": "en",
"inputs": [
{
"name": "text",
"source": "/document/content"
}
],
"outputs": [
{
"name": "keyPhrases",
"targetName": "keyPhrases"
}
]
},
{
"@odata.type": "#Microsoft.Skills.Text.SplitSkill",
"name": "textCleaner",
"description": "Split and clean noisy content for better entity recognition",
"context": "/document",
"defaultLanguageCode": "en",
"textSplitMode": "sentences",
"maximumPageLength": 5000,
"inputs": [
{
"name": "text",
"source": "/document/content"
}
],
"outputs": [
{
"name": "textItems",
"targetName": "cleanedContent"
}
]
},
{
"@odata.type": "#Microsoft.Skills.Util.ConditionalSkill",
"name": "summarizationInputRouter",
"description": "Route content to summarization only if not empty",
"context": "/document",
"inputs": [
{
"name": "condition",
"source": "/document/content"
},
{
"name": "whenTrue",
"source": "/document/content"
},
{
"name": "whenFalse",
"source": "/document/fallbackSummary"
}
],
"outputs": [
{
"name": "output",
"targetName": "summarizationInput"
}
]
},
{
"@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
"name": "summarizationSkill",
"description": "Generate a summary of the document",
"context": "/document",
"uri": "https://<YOUR_SUMMARISATION_LOGIC_APP_URL>",
"httpMethod": "POST",
"timeout": "PT30S",
"batchSize": 1000,
"inputs": [
{
"name": "text",
"source": "/document/summarizationInput"
}
],
"outputs": [
{
"name": "summary",
"targetName": "summary"
},
{
"name": "finish_reason",
"targetName": "finish_reason"
},
{
"name": "token_usage",
"targetName": "token_usage"
}
],
"httpHeaders": {}
},
{
"@odata.type": "#Microsoft.Skills.Text.V3.EntityRecognitionSkill",
"name": "entitySkillV3",
"description": "Extract named entities using V3 skill from cleaned content",
"context": "/document",
"categories": [
"Person",
"Organization",
"Location",
"Email",
"URL"
],
"defaultLanguageCode": "en",
"inputs": [
{
"name": "text",
"source": "/document/cleanedContent"
}
],
"outputs": [
{
"name": "persons",
"targetName": "recognizedPersons"
},
{
"name": "locations",
"targetName": "recognizedLocations"
},
{
"name": "organizations",
"targetName": "recognizedOrganizations"
},
{
"name": "emails",
"targetName": "recognizedEmails"
},
{
"name": "urls",
"targetName": "recognizedURLs"
}
]
}
],
"cognitiveServices": {
"@odata.type": "#Microsoft.Azure.Search.AIServicesByKey",
"key": "<YOUR_COGSERVICES_KEY>",
"subdomainUrl": "https://<YOUR_COGSERVICES_SUBDOMAIN>.cognitiveservices.azure.com/"
}
}
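You deploy this by PUT-ing the JSON to the Search service’s skillsets endpoint. A small Python helper for building that request (a sketch – the `api-version` value is an assumption you should confirm against your service):

```python
def skillset_request(service: str, skillset_name: str, api_key: str,
                     api_version: str = "2024-07-01") -> tuple:
    """Build the URL and headers for a PUT to the Azure AI Search
    skillsets endpoint; send the skillset JSON as the body with
    any HTTP client."""
    url = (f"https://{service}.search.windows.net"
           f"/skillsets/{skillset_name}?api-version={api_version}")
    headers = {"Content-Type": "application/json", "api-key": api_key}
    return url, headers
```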
5. Azure AI Search Indexer: Wiring Blob + Skillset → Index
Next, you need an indexer that reads from your Blob data source, runs the skillset above, and writes to your Azure AI Search index (for me, index-nba).
Azure AI Search indexer JSON:
{
"name": "indexer-nba",
"dataSourceName": "<YOUR-AZURE-DATASOURCE>",
"skillsetName": "nba-enrichment-skillset",
"targetIndexName": "index-nba",
"disabled": false,
"schedule": {
"interval": "PT30M",
"startTime": "2025-11-18T00:40:00Z"
},
"fieldMappings": [],
"outputFieldMappings": [
{ "sourceFieldName": "/document/keyPhrases", "targetFieldName": "keyPhrases" },
{ "sourceFieldName": "/document/persons", "targetFieldName": "recognizedPersons" },
{ "sourceFieldName": "/document/locations", "targetFieldName": "recognizedLocations" },
{ "sourceFieldName": "/document/organizations", "targetFieldName": "recognizedOrganizations" },
{ "sourceFieldName": "/document/emails", "targetFieldName": "recognizedEmails" },
{ "sourceFieldName": "/document/urls", "targetFieldName": "recognizedURLs" },
{ "sourceFieldName": "/document/teams", "targetFieldName": "recognizedTeams" },
{ "sourceFieldName": "/document/processes", "targetFieldName": "recognizedProcesses" },
{ "sourceFieldName": "/document/finish_reason", "targetFieldName": "finish_reason" },
{ "sourceFieldName": "/document/token_usage", "targetFieldName": "token_usage" },
{ "sourceFieldName": "/document/summary", "targetFieldName": "summary" }
]
}
Make sure:
- Your Blob data source is configured and named in dataSourceName.
- The target index exists (index-nba in my case).
- All target fields exist in the index schema.
6. Azure AI Search Index & Embeddings
Here’s my Azure AI Search Index showing the fields tab
For the index itself, I configured:
- A vector field embedding with dimension 768.
- Semantic configuration using content, keyPhrases, summary, and organisation entities.
- HNSW vector search with cosine similarity (m=4, efConstruction=400, efSearch=500).
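As a quick refresher on what the index is doing under the hood: HNSW is the approximate-nearest-neighbour graph, and cosine similarity is the distance metric it ranks by. The metric itself is just:

```python
import math

def cosine_similarity(a, b) -> float:
    """Cosine similarity as used by the index's vector search:
    1.0 for vectors pointing the same way, 0.0 for orthogonal ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0
```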
For embeddings, I deploy an Azure OpenAI embedding model:
- Model name: text-embedding-ada
- Deployment name: text-embedding-ada
- Used by Azure AI Search as the vectorizer

Your Search service can either:
- Use a built-in vectorizer referencing this deployment, or
- Let your ETL pipeline generate embeddings and write them into the embedding field.
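If you go the ETL route, each embedding is fetched with a single POST to the Azure OpenAI embeddings endpoint. A hedged sketch of the request shape (resource and deployment names are placeholders, and confirm the `api-version` for your tenant):

```python
def embedding_request(resource: str, deployment: str, text: str,
                      api_version: str = "2024-02-01") -> tuple:
    """Build the URL and JSON body for an Azure OpenAI embeddings call.
    POST the body with an api-key header; the response contains the
    vector at data[0].embedding, which you write to the index's
    embedding field."""
    url = (f"https://{resource}.openai.azure.com/openai/deployments/"
           f"{deployment}/embeddings?api-version={api_version}")
    return url, {"input": text}
```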
7. Copilot Studio Agent: Topics 1–3
The retrieval pipeline is now in place. The Copilot Studio agent itself is implemented using three Topics:
Topic 1 – Parse Inbound Ticket
- Triggered by our ticketing system via Power Automate / Logic App / API.
- Accepts JSON describing the ticket, including subject and description.
- Normalises into a ProblemDescription variable (e.g. “User cannot connect to VPN from home, error 809.”).
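Conceptually, that normalisation step boils down to something like this (illustrative Python, not part of the agent – the payload shape matches the Ticket_Values record parsed in the YAML below):

```python
def normalise_ticket(payload: dict) -> str:
    """Collapse the inbound ticket JSON into a single
    ProblemDescription string, as Topic 1 does."""
    values = payload.get("Ticket_Values", {})
    parts = [values.get("Subject", ""), values.get("Description", "")]
    # Join the non-empty parts into one searchable description
    return " - ".join(p.strip() for p in parts if p and p.strip())
```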
Here’s Topic 1’s Copilot Studio Agent:
Topic 1 – YAML:
kind: AdaptiveDialog
beginDialog:
kind: OnActivity
id: main
priority: 0
type: Message
actions:
- kind: SetVariable
id: setIncomingJson
displayName: Set Incoming JSON
variable: Global.InboundPayload
value: =System.Activity.ChannelData
- kind: ParseValue
id: parseInboundJson
displayName: Parse Inbound JSON
variable: Global.ParsedPayload
valueType:
kind: Record
properties:
Ticket_Values:
type:
kind: Record
properties:
Subject: String
Description: String
TicketID: String
value: =Global.InboundPayload
- kind: SetVariable
id: extractTicketId
displayName: Extract Ticket ID
variable: Global.TicketID
value: =Global.ParsedPayload.Ticket_Values.TicketID
- kind: SendActivity
id: sendTicketId
activity: "{\"Ticket ID is: \" & Global.TicketID}"
- kind: SetVariable
id: extractDescription
displayName: Extract Ticket Description
variable: Global.TicketDescription
value: =Global.ParsedPayload.Ticket_Values.Description
- kind: SendActivity
id: sendDescription
activity: "{\"Ticket Description is: \" & Global.TicketDescription}"
- kind: BeginDialog
id: startSearchDialog
dialog: generic.topic.GroundedSearch
inputType: {}
outputType: {}
Topic 2 – Azure AI Search Retrieval
- Takes ProblemDescription as input.
- Calls Azure AI Search (index-nba) using hybrid search (full text + semantic + vector).
- Returns top N documents including summary, key phrases, and metadata (file name, path).
In practice this is done with a Power Automate cloud flow or Azure Function that wraps the Azure AI Search REST API and is called from the topic using an action.
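As a sketch of what that wrapper sends, here’s the kind of hybrid query body it would POST to the index’s /docs/search endpoint. The field and configuration names match the ones used in this post; treat the exact shape as something to verify against the Azure AI Search REST docs for your api-version:

```python
def hybrid_search_body(query: str, embedding: list, top: int = 10) -> dict:
    """Build a hybrid (keyword + semantic + vector) query body for
    POST https://<service>.search.windows.net/indexes/index-nba/docs/search"""
    return {
        "search": query,                       # full-text part
        "queryType": "semantic",
        "semanticConfiguration": "Generic-Semantic",
        "top": top,
        "select": "content,title,summary,keyPhrases,recognizedURLs",
        "vectorQueries": [{                    # vector part
            "kind": "vector",
            "vector": embedding,
            "fields": "embedding",
            "k": top,
        }],
    }
```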
Topic 2 Copilot Studio Agent:
Topic 2 – YAML:
kind: AdaptiveDialog
modelDescription: Example implementation of an AI Search workflow.
beginDialog:
kind: OnRedirect
id: main
priority: 2
actions:
- kind: SearchAndSummarizeContent
id: summarizeForSearch
displayName: Summarise ticket for search query
variable: Global.GeneratedSummary
userInput: Using the ticket details, provide a summary under 50 characters.
additionalInstructions: |
Ticket subject: {Global.ParsedJSON.Ticket_Values.Subject}
Ticket description: {Global.ParsedJSON.Ticket_Values.Description}
responseCaptureType: FullResponse
- kind: CreateSearchQuery
id: createSearchQuery
userInput: =Global.GeneratedSummary.Text.Content
result: Global.SearchQuery
- kind: InvokeConnectorAction
id: invokeSearch
input:
binding:
indexName: generic-index
searchText: =Global.SearchQuery.SearchQuery
selectFields:
- Value: content
- Value: recognizedURLs
- Value: title
- Value: keyPhrases
- Value: recognizedProcesses
semanticConfiguration: Generic-Semantic
top: 10
vectorizedSearchFields:
- Value: embedding
output:
kind: SingleVariableOutputBinding
variable: Topic.SearchResults
connectionReference: shared.search.connector
connectionProperties:
name: shared.search.connector
mode: Maker
operationId: SemanticHybridSearch
- kind: ParseValue
id: parseSearchResults
displayName: Parse Results
variable: Topic.TempSearchResult
valueType:
kind: Table
properties:
content: String
keyPhrases: String
recognizedProcesses: String
recognizedURLs: String
reRankerScore: Float
score: Float
title: String
value: =Topic.SearchResults
- kind: SearchAndSummarizeContent
id: generateFinalResponse
displayName: Generate Bot Response
autoSend: true
variable: Global.FinalBotResponseFull
userInput: =Global.GeneratedSummary.Text.Content
additionalInstructions: |
You are an IT support assistant. Provide a concise, accurate answer
based strictly on the ticket context and the search results.
### Ticket Context
- Subject: {Global.ParsedJSON.Ticket_Values.Subject}
- Description: {Global.ParsedJSON.Ticket_Values.Description}
### Requirements
- Base your answer ONLY on verified information.
- Do NOT invent details.
- Keep it concise for an IT support analyst.
- If search results are not useful, state that clearly.
Keep the response under 5 sentences unless necessary.
responseCaptureType: FullResponse
- kind: BeginDialog
id: transferToNextTopic
displayName: Transfer to next topic
input:
binding:
FinalBotResponse: =Global.FinalBotResponseFull.Text.Content
dialog: generic.topic.CaptureBotResponse
output: {}
inputType: {}
outputType: {}
Topic 3 – Generate the Next Best Action
- Accepts the ticket text and the list of retrieved KB articles.
- Uses a carefully designed system prompt to instruct the model to:
- Explain the issue in plain language.
- Recommend an ordered set of actions.
- Reference specific KB article titles (no made-up content).
- Returns a formatted note that is written back into the ticketing system (e.g. as a ticket note).
At no point does our ticketing system talk to SharePoint directly, and the agent is fully grounded on the enriched content in Azure AI Search.
Topic 3 Copilot Studio Agent:
Topic 3 – YAML:
kind: AdaptiveDialog
modelDescription: Capture and transform bot-generated responses
beginDialog:
kind: OnRedirect
id: main
actions:
- kind: SetVariable
id: saveResponse
displayName: Pull response from previous topic
variable: Global.FinalBotResponse
value: =Topic.FinalBotResponse
- kind: SendActivity
id: debugMessage
displayName: Debug message
activity: |
(DEBUG) Stored answer:
{Global.FinalBotResponse}
- kind: SearchAndSummarizeContent
id: buildNextBestAction
displayName: Generate Final Ticket Response
autoSend: false
variable: Global.WrittenNote
userInput: Generate a next best action for a helpdesk analyst for this ticket.
additionalInstructions: |
Below is the AI-generated answer from the previous step:
{Global.FinalBotResponse}
### Your Task
Convert the above answer into a clear **Next Best Action (NBA)** for an experienced IT support analyst.
### Requirements
- Output must be clean, well-formatted **HTML**.
- Present the steps as a numbered list
- Assume the audience is a trained IT support professional.
- Do NOT mention any internal tools, ticket systems, or procedural boilerplate.
- Only use steps that follow from the previous answer or verified search content.
- If no meaningful actions exist, output a short explanation only.
- If no knowledge is available, output a single sentence with no steps or references.
### Output Format
- A short summary paragraph.
- Followed by a numbered HTML list.
- Followed by any relevant reference links.
responseCaptureType: FullResponse
- kind: SendActivity
id: showWrittenNote
activity: "Written Note: {Global.WrittenNote.Text.Content}"
- kind: SetVariable
id: checkIfUseful
displayName: Check if output is useful
variable: Topic.FinalWrittenNote
value: |
=If(
IsBlank(Global.WrittenNote.Text.Content),
"No useful information was found within the available knowledge.
",
Global.WrittenNote.Text.Content
)
- kind: SetVariable
id: setCitedReference
displayName: Set Cited Reference URL
variable: Topic.CitedReferences
value: |
=If(
IsEmpty(Global.WrittenNote.Text.CitationSources),
Blank(),
First(Global.WrittenNote.Text.CitationSources)
)
- kind: SetVariable
id: replaceBlobUrl
displayName: Replace Source URL with SharePoint URL
variable: Topic.FinalReferenceURL
value: |
=If(
IsEmpty(Global.WrittenNote.Text.CitationSources),
"No reference documentation available",
Substitute(
First(Global.WrittenNote.Text.CitationSources).Url,
".blob.core.windows.net/copilotdocuments",
".sharepoint.com/sites/knowledge"
)
)
- kind: SendActivity
id: sendReferenceUrl
activity: "Final Reference URL: {Topic.FinalReferenceURL}"
- kind: SetVariable
id: setFinalNote
displayName: Set Note Text
variable: Global.TicketNote
value: |
="AI Generated Next Best Action. Please Verify." & Topic.FinalWrittenNote & "References: " & Topic.FinalReferenceURL & "Feedback appreciated."
- kind: SendActivity
id: writingNote
activity: Writing note to ticket.
- kind: BeginDialog
id: writeNoteToTicket
displayName: Write note to ticket
dialog: generic.action.WriteNote
output:
binding:
conversation: Topic.conversation
- kind: EndConversation
id: endConv
inputType:
properties:
FinalBotResponse:
displayName: FinalBotResponse
type: String
outputType: {}
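The replaceBlobUrl step above is worth calling out: because the KB is mirrored into Blob Storage, citation URLs point at the blob copy, so the topic rewrites them back to the SharePoint originals before posting the note. It’s a simple string substitution; in Python terms it’s just:

```python
def blob_to_sharepoint_url(url: str) -> str:
    """Rewrite a Blob citation URL back to its SharePoint original,
    mirroring the Substitute() call in the replaceBlobUrl step."""
    return url.replace(".blob.core.windows.net/copilotdocuments",
                       ".sharepoint.com/sites/knowledge")
```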
8. Handling Errors
Things can (and do) go wrong, of course. You’ll notice within my YAML code above that I’ve got some basic error handling to deal with blank outputs:
If(
IsBlank(Global.WrittenNote.Text.Content),
"No useful information was found within the available knowledge.",
Global.WrittenNote.Text.Content
)
9. Final Output
Once a ticket has been logged in the ticketing system, an HTTP request is sent to my Topic 1 Copilot Studio agent, which parses the incoming JSON from the ticketing system and summarises it before passing it to Topic 2.
Topic 2’s agent utilises Azure AI Search to ground the user’s ticket information against our internal SharePoint knowledgebase articles that have been copied across to an Azure Blob Storage account. It then passes these results to Topic 3.
Topic 3 then utilises Generative AI to create the next best action based on the search output, and writes this back to the ticket as a note.
Original Ticket:

Output approximately 30 seconds later:

Pretty nifty, hey!
10. Security Considerations
Of course, as part of your deployment there are a few things to consider when it comes to securing the solution:
- Blob containers and Search service are private and accessed via managed identities where possible.
- Logic Apps and Skillsets do not expose keys or internal URLs to the ticketing system.
- The ticketing system only talks to Copilot Studio endpoints, never to Azure resources directly.
Want to harden it even further? Consider these additional steps:
- Use private endpoints / VNet integration for:
  - Azure AI Search
  - Storage accounts
  - OpenAI / Foundry endpoint (where applicable)
- Make sure sensitive KB content is classified:
  e.g. “Don’t dump HR documents or PII into the KB that this copilot searches unless you’ve implemented role-based access controls.”
11. Summary
By combining Logic Apps, Azure Blob Storage, Azure AI Search, Azure OpenAI, and Copilot Studio, it is possible to build a powerful “Next Best Action” copilot that:
- Works even when your ticketing system cannot authenticate outbound calls.
- Safely uses SharePoint as the source of truth for your knowledgebase.
- Provides summarised, context-aware, and grounded recommendations to analysts.
You can use the JSON snippets in this post as a starting point and adapt the field names and configuration for your own environment.