Tuesday, February 24, 2026

Artificial Intelligence in the enterprise has evolved beyond chatbots that simply answer questions. Modern AI is expected to understand intent, apply business rules, interact with enterprise systems, and execute actions securely.

This is exactly where Copilot Studio and Power Automate come together.

This article walks through how a user’s natural language request is transformed into a governed, auditable enterprise action.

Step 1: User Interaction — The Conversational Entry Point

• Natural language interaction
Users interact with Copilot through Microsoft Teams, Microsoft 365 Copilot, or a custom Copilot. They express requirements in plain language without knowing backend systems or workflows.

• No execution at this stage
At this point, no automation, API call, or system action is triggered. Copilot only listens and captures the user’s request.

• Clear separation of intent and execution
This separation keeps the experience human-friendly while decoupling the user interface from backend logic.

Step 2: Intent Recognition and Topic Matching in Copilot Studio

• Topic-based intent detection
Copilot Studio maps the user’s message to predefined topics using trigger phrases and conditions. This helps identify what business scenario the user is requesting.

• Context-aware understanding
Copilot considers conversation context, not just a single sentence. This allows it to handle follow-up questions and multi-turn conversations.

• Foundation for automation
By identifying intent early, Copilot ensures the request is routed to the correct automation path.
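
The topic-matching step above can be sketched as a trigger-phrase lookup. This is a deliberately simplified, hypothetical sketch: the topic names and phrases are invented, and Copilot Studio's real matching is model-assisted language understanding, not a literal string search.

```python
# Hypothetical trigger-phrase table; real Copilot Studio topics are matched
# with language understanding, not a plain keyword lookup like this one.
TOPICS = {
    "project_site_provisioning": ["create a project site", "new project site"],
    "employee_onboarding": ["onboard a new employee", "new joiner"],
    "access_management": ["give me access", "request access"],
}

def match_topic(message: str):
    """Return the first topic whose trigger phrase appears in the message."""
    text = message.lower()
    for topic, phrases in TOPICS.items():
        if any(phrase in text for phrase in phrases):
            return topic
    return None  # unmatched; Copilot would fall back to a generic response
```

The key point the sketch preserves is that matching produces a named business scenario, which is what routes the conversation to the right automation path.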

Step 3: Entity Extraction and Clarification

• Structured data extraction
Copilot Studio extracts entities such as project name, owner, team members, or dates from the conversation.

• Intelligent validation
If information is missing or invalid, Copilot asks follow-up questions instead of failing silently.

• Reduced execution errors
This validation ensures clean inputs, which significantly reduces flow failures later.
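
The validation loop above amounts to checking extracted entities against a required schema and asking for whatever is missing. The schema and entity names below are invented for illustration; a real topic defines its own inputs in Copilot Studio.

```python
# Hypothetical schema for a "project provisioning" request; the required
# entity names are invented for illustration.
REQUIRED_ENTITIES = {"project_name", "owner", "members"}

def missing_entities(extracted: dict) -> list:
    """Return the entities still needed before any flow can be invoked."""
    return sorted(name for name in REQUIRED_ENTITIES if not extracted.get(name))

def clarification_question(entity: str) -> str:
    # Copilot would phrase this conversationally; a plain template suffices here.
    return f"Could you tell me the {entity.replace('_', ' ')}?"
```

Only when `missing_entities` returns an empty list does execution proceed, which is what keeps bad inputs from ever reaching a flow.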

Step 4: Invoking Power Automate as an Action

• Flow invocation from Copilot Studio
Once intent and inputs are finalized, Copilot Studio triggers a Power Automate flow as an action.

• Parameter-based handoff
All extracted entities are passed as structured parameters, ensuring consistency between conversation and execution.

• Clean architectural separation
Copilot handles conversation logic, while Power Automate handles execution logic.
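
The parameter-based handoff can be pictured as serializing the finalized conversation outcome into a structured payload. The field names here are illustrative only, not the actual contract between Copilot Studio and a flow action.

```python
import json

def build_flow_payload(topic: str, entities: dict) -> str:
    """Serialize the finalized conversation outcome into the structured
    parameters handed to a flow. The field names are illustrative only."""
    payload = {
        "topic": topic,
        "inputs": entities,  # only validated, structured values cross this boundary
    }
    return json.dumps(payload)
```

Everything conversational stays on the Copilot side of this boundary; the flow only ever sees clean, typed inputs.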

Step 5: Power Automate Executes Enterprise Business Logic

• Orchestration of enterprise systems
Power Automate interacts with SharePoint, Teams, Dataverse, Outlook, and external systems.

• Complex logic handling
Flows can include conditions, approvals, loops, retries, and exception handling.

• Reliable and auditable execution
Each step is logged, monitored, and can be retried if needed.
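
The retry behavior mentioned above follows a standard pattern: re-run a failing step with exponential backoff, then surface the failure once retries are exhausted. This is a generic sketch of that pattern, not Power Automate's actual implementation.

```python
import time

def run_with_retries(step, attempts: int = 3, base_delay: float = 1.0):
    """Re-run a failing step with exponential backoff, mirroring the kind
    of retry policy Power Automate applies to flow actions."""
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == attempts:
                raise  # retries exhausted; the flow run is marked failed
            time.sleep(base_delay * 2 ** (attempt - 1))
```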

Step 6: Security, Identity, and Governance Enforcement

• Identity-aware execution
Flows run under user context or service principals, ensuring correct authorization.

• Policy-driven controls
DLP policies, connector restrictions, and environment boundaries are enforced automatically.

• Enterprise compliance
This ensures no unauthorized access and prevents data leakage across systems.

Step 7: Returning Results to Copilot Studio

• Execution feedback from flows
Power Automate returns success or failure status along with output values like URLs or IDs.

• Intelligent response handling
Copilot Studio parses results and decides the next conversational step.

• Graceful failure handling
Errors can trigger retries, friendly error messages, or escalation workflows.
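
Result handling can be sketched as mapping the flow's structured output onto the next conversational move. The status and output field names below are invented for illustration; each solution defines its own result contract.

```python
def next_conversation_step(flow_result: dict) -> str:
    """Map a flow's structured output to the next conversational move.
    The field names here are illustrative, not a documented contract."""
    if flow_result.get("status") == "success":
        return f"All done! Here is your site: {flow_result.get('site_url', '')}"
    if flow_result.get("retryable"):
        return "That didn't work the first time; let me retry."
    return "I couldn't complete that request, so I've escalated it to support."
```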

Step 8: Human-Friendly Response to the User

• Conversational response generation
Copilot converts technical results into a clear, human-readable message.

• Abstraction of complexity
Users never see flow steps, connectors, or APIs—only outcomes.

• Improved adoption
This simplicity increases user trust and reduces training needs.

Step 9: Monitoring, Auditing, and Continuous Improvement

• Built-in monitoring
Power Automate provides detailed run history and execution logs.

• Advanced diagnostics
Combined with Dataverse auditing and Application Insights, failures can be analyzed deeply.

• Continuous optimization
Admins can tune performance, improve reliability, and enforce governance at scale.

Here are some real-world examples:

Example 1: SharePoint – Project Site Provisioning

• User request through Copilot
A business user asks Copilot: “Create a project site for Project Apollo and add the delivery team.”
The user does not need to know SharePoint templates, permissions, or provisioning steps.

• Copilot Studio handles intent and inputs
Copilot Studio identifies the “Project Site Provisioning” topic and extracts project name, owners, and members.
If details are missing, Copilot asks follow-up questions before proceeding.

• Power Automate executes provisioning
The flow creates a SharePoint site, applies a predefined template, and provisions a Microsoft Team.
Permissions are applied automatically based on governance rules.

• User receives confirmation
Copilot responds with the site URL and confirmation that the team has been added.
From the user’s perspective, the entire process feels instant and conversational.

Example 2: HR – Employee Onboarding Automation

• User request through Copilot
An HR manager asks: “Onboard a new employee joining next Monday.”
No forms or multiple systems are involved from the user’s side.

• Copilot Studio captures onboarding details
Copilot extracts employee name, role, department, manager, and start date.
Missing information is collected through a guided conversation.

• Power Automate runs onboarding workflow
The flow creates user accounts, assigns licenses, sets up email and Teams access, and updates HR records.
Approvals can be triggered automatically where required.

• Copilot delivers a clear status update
Copilot confirms that onboarding tasks are completed or in progress.
HR teams get consistency and speed without manual follow-ups.

Example 3: IT – Service Request and Access Management

• User request through Copilot
An employee asks: “Give me access to the finance SharePoint site.”
The user avoids ticket forms and complex IT portals.

• Copilot Studio identifies access request intent
Copilot Studio maps the request to an “Access Management” topic and extracts the target system.
Policy checks are applied to determine if approval is required.

• Power Automate enforces IT governance
The flow validates eligibility, triggers manager approval, and grants access if approved.
All actions are logged for compliance and auditing.

• Copilot closes the loop
The user receives confirmation once access is granted or pending approval.
IT teams maintain control while reducing manual effort.

Tuesday, February 17, 2026

Inside Copilot: Model Selection in Enterprise AI

One of the most common questions about Microsoft Copilot is deceptively simple:

"Which AI model is Copilot using?"

The real answer is more interesting, and far more powerful:

Copilot doesn’t rely on a single AI model. It orchestrates multiple models and capabilities behind the scenes.

This article explains my understanding of how Copilot decides which model to use, when that decision is made, and why users are never asked to choose the model themselves.

Copilot Is an Orchestrator, Not a Model

Copilot itself is not an AI model like GPT or Claude. It is an AI orchestration layer embedded across Microsoft products such as Microsoft 365, GitHub, Dynamics, and Power Platform.

Its responsibility is to:

  • Understand user intent

  • Gather enterprise context

  • Apply security and compliance controls

  • Select the appropriate reasoning capability

  • Deliver results inside the application workflow

In short, Copilot acts as a control plane that decides how and where intelligence is applied.

Model Selection Is Not User-Driven

In consumer AI tools, users may explicitly choose models. In Copilot, model choice is intentionally hidden from users.

This is by design.

Enterprise users care about:

  • Accuracy

  • Security

  • Consistency

  • Business outcomes

Enterprise IT teams care about:

  • Compliance

  • Governance

  • Cost control

  • Predictable behavior

Allowing users to select models would break these guarantees. Instead, Copilot automatically makes the decision using a structured evaluation process.

1. User Intent

The first step in Copilot’s decision-making process is understanding user intent.

When a prompt is submitted, Copilot does not immediately forward it to an AI model. Instead, it classifies the request into intent categories such as:

  • Content creation (emails, documents, summaries)

  • Analytical reasoning (comparisons, recommendations)

  • Code-related tasks (generation, refactoring, review)

  • Data interaction (queries, aggregation, explanation)

  • Workflow or action-oriented tasks (tool invocation, automation)

This classification determines what type of reasoning is required, not just how long or complex the prompt appears.

For example:

  • Drafting text prioritizes language fluency and tone

  • Analytical tasks require multi-step reasoning

  • Coding tasks require structured, deterministic outputs

Only after intent is clearly identified does Copilot determine which reasoning capability is best suited.

2. Context Source and Grounding

Copilot is designed to be deeply context-aware, especially in enterprise environments.

Before choosing a model, Copilot evaluates:

  • Where the answer must come from

  • Which enterprise data sources are involved

  • How tightly the response must be grounded in factual data

Common grounding sources include:

  • Microsoft Graph (emails, meetings, files, chats)

  • GitHub repositories and pull requests

  • Dataverse and business systems

  • External connectors and APIs

Tasks that require strict grounding, such as summarizing internal documents or reviewing contracts, are treated differently from open-ended brainstorming tasks.

The stronger the grounding requirement, the more Copilot prioritizes:

  • Large context window handling

  • Accuracy and traceability

  • Reduced hallucination risk

  • Policy enforcement

Grounding is therefore a major factor influencing how Copilot routes requests internally.

3. Complexity and Reasoning Depth

Not all prompts require the same level of reasoning.

Copilot evaluates:

  • The number of reasoning steps involved

  • Whether steps depend on each other

  • Whether intermediate conclusions need validation

  • Whether the task is exploratory or deterministic

Examples:

  • "Rewrite this sentence" - low complexity

  • "Compare two strategies and recommend one" - medium complexity

  • "Analyze data trends and explain trade-offs" - high complexity

For higher-complexity scenarios, Copilot may:

  • Select models optimized for multi-step reasoning

  • Break tasks into smaller internal steps

  • Apply internal checks before returning a final response

This ensures Copilot uses the minimum required intelligence while maintaining accuracy, performance, and cost efficiency.
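
The "minimum required intelligence" idea can be illustrated with a toy router. To be clear, Microsoft does not publish Copilot's routing rules, and the cue list and model tier names below are entirely invented; the sketch only shows the shape of complexity-based routing.

```python
# Purely hypothetical router: Microsoft does not publish Copilot's routing
# rules, and the model tier names below are invented.
ANALYTICAL_CUES = ("compare", "analyze", "recommend", "trade-off")

def estimate_complexity(prompt: str) -> str:
    cues = sum(cue in prompt.lower() for cue in ANALYTICAL_CUES)
    if cues == 0:
        return "low"
    return "medium" if cues == 1 else "high"

def route(prompt: str) -> str:
    # Cheapest capable tier first: the minimum required intelligence.
    tiers = {"low": "fast-drafting-tier",
             "medium": "general-reasoning-tier",
             "high": "deep-reasoning-tier"}
    return tiers[estimate_complexity(prompt)]
```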

4. Enterprise Security and Compliance

Before any request reaches an AI model, Copilot applies enterprise-grade governance controls.

These include:

  • Data loss prevention (DLP) policies

  • Sensitivity label enforcement

  • Tenant and identity boundaries

  • Prompt sanitization

  • Logging, auditing, and monitoring hooks

In some cases, compliance requirements may restrict:

  • Which models can be used

  • Where inference can occur

  • How responses are post-processed

These controls operate outside the AI model itself, but they directly influence whether and how a model is selected.

This governance layer is one of the key reasons Copilot cannot expose model selection to end users.

5. Availability, Performance, and Cost Optimization

Copilot operates at massive enterprise scale, which introduces real-world operational constraints.

It continuously evaluates:

  • Model availability

  • Regional capacity

  • Latency requirements

  • Throughput limits

  • Cost efficiency

If a preferred model is temporarily unavailable or under load, Copilot can dynamically:

  • Route requests to alternative models

  • Adjust execution paths

  • Optimize for response time or cost

From the user’s perspective, this process is invisible—but it is essential for delivering a consistent, reliable experience.
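
The fallback behavior described above boils down to walking an ordered preference list against current availability. The model names and the availability map here are illustrative; real capacity signals would come from the platform, not a static dictionary.

```python
def pick_model(preferred: str, fallbacks: list, availability: dict) -> str:
    """Choose the preferred model when it has capacity, otherwise walk an
    ordered fallback list. Names and the availability map are illustrative."""
    for model in [preferred, *fallbacks]:
        if availability.get(model, False):
            return model
    raise RuntimeError("no reasoning capacity currently available")
```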


GPT, Claude, and Multi-Model Reasoning

While Microsoft does not publicly expose internal routing rules, the architectural pattern is clear.

Different models excel at different strengths:

  • Some are optimized for structured reasoning and tool usage

  • Others excel at long-context summarization and policy-aware language handling

Copilot may:

  • Use a primary model for generation

  • Invoke secondary models for validation or refinement

  • Apply multiple reasoning stages within a single request

All of this happens without user intervention.

Why Copilot Hides Model Choice from Users

This is a deliberate enterprise design decision. Exposing model choice would:

  • Complicate governance

  • Introduce inconsistent outputs

  • Increase operational risk

  • Undermine compliance guarantees

Instead, Copilot focuses on delivering predictable, secure, and outcome-driven intelligence.

Copilot decides which AI model to use based on intent, context, complexity, security, and performance—without exposing that decision to users.

That orchestration layer, not the model itself, is what makes Copilot enterprise-ready.

Monday, February 16, 2026

Enterprise Architecture and Reasoning in Practice: How Claude and GPT Coexist Inside Copilot

To explain my understanding of how Claude and GPT work together inside Microsoft Copilot, it helps to look at the system the way an enterprise architect would: not as an AI chat experience, but as a layered intelligence platform designed for scale, governance, and long-term evolution.

In a real organization, Copilot sits at the very top of the stack, embedded directly into daily tools like Teams, Word, Excel, Outlook, and custom-built agents created in Copilot Studio. From the employee’s point of view, there is only one assistant. They never choose a model explicitly, and they never interact with Claude or GPT directly. Everything feels unified and seamless.

Behind that simplicity is a powerful orchestration layer. When a user asks Copilot a question, Copilot first interprets intent and determines what enterprise context is required. It may pull information from Microsoft Graph, SharePoint, OneDrive, emails, meetings, calendars, or connected business systems. This grounding step is critical: it ensures that responses are rooted in organizational reality rather than generic internet knowledge.

Before any request reaches an AI model, Copilot enforces identity, access control, and compliance policies. Microsoft Entra ID permissions are applied, sensitivity labels are respected, and tenant governance rules determine what data can be used. This is a key distinction between enterprise AI and consumer AI: the model never gets unrestricted access to data. It only sees what Copilot explicitly allows.

Once the request is grounded and secured, Copilot performs prompt shaping. This step refines the input, removes unnecessary details, and structures the prompt so that the chosen model can reason effectively. Only after this orchestration does Copilot select a reasoning engine — either a GPT model or Claude.

At this point, Claude and GPT behave as interchangeable but specialized reasoning components. They do not connect to Microsoft systems directly. They do not retain tenant memory. They simply process the prompt they are given and return an answer. Copilot then validates the response, applies responsible AI checks, formats it for the user’s context, and presents it inside the Microsoft experience.

This design allows Microsoft to offer model flexibility without sacrificing enterprise control. Intelligence becomes modular, while governance remains centralized.
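
The orchestration sequence described above can be sketched as a staged pipeline. Every function name below is hypothetical (there is no public Copilot API shaped like this); the sketch only fixes the ordering of the stages: ground, enforce policy, shape, route, validate.

```python
# All names below are invented; this only illustrates stage ordering.
def ground(prompt: str, ctx: dict) -> str:
    return f"[tenant:{ctx.get('tenant', '?')}] {prompt}"  # attach enterprise context

def enforce_policies(prompt: str, ctx: dict) -> str:
    if not ctx.get("authorized", False):
        raise PermissionError("blocked by tenant policy")  # model never sees the data
    return prompt

def shape_prompt(prompt: str) -> str:
    return prompt.strip()  # refine and structure before any model sees it

def select_engine(prompt: str) -> str:
    # Long, document-heavy prompts lean toward a long-context engine.
    return "long-context-engine" if len(prompt) > 200 else "fast-engine"

def copilot_pipeline(prompt: str, ctx: dict) -> str:
    shaped = shape_prompt(enforce_policies(ground(prompt, ctx), ctx))
    answer = f"{select_engine(shaped)} answered: {shaped}"
    return answer  # responsible-AI validation and formatting would run here
```

Note that policy enforcement sits before model selection: governance decides what the model may see at all, which is the central point of the paragraphs above.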

How Claude and GPT Differ Inside the Same Architecture

Although Claude and GPT operate within the same Copilot-controlled pipeline, their reasoning styles are noticeably different, and that difference is exactly why Microsoft chose to support both.

GPT models tend to feel fast, adaptive, and conversational. Inside Copilot, GPT excels at everyday productivity tasks: drafting emails, summarizing meetings, generating explanations, and helping users iterate quickly. Its strength lies in fluency and versatility. When users want speed, creativity, or conversational flow, GPT often feels like the natural fit.

Claude, by contrast, brings a more deliberate and structured reasoning approach. When Copilot routes a task to Claude, the output often feels calmer, more analytical, and more cautious. Claude is particularly strong at handling long documents, maintaining logical consistency across large contexts, and synthesizing complex or policy-heavy material. This makes it well suited for compliance analysis, HR policies, legal documents, architectural reasoning, and research-style tasks.

Another subtle but important difference lies in how ambiguity is handled. GPT often attempts to provide a helpful answer even when the input is loosely defined, sometimes filling gaps creatively. Claude is more inclined to acknowledge uncertainty, state assumptions explicitly, and avoid overconfident conclusions. In regulated enterprise environments, this behavior is often desirable rather than limiting.

What matters most is that Copilot shields users from these complexities. Employees simply ask questions and get answers. Architects and administrators, however, gain the ability to intentionally route workloads to the model whose reasoning style best matches the task. Copilot becomes the intelligence broker, deciding not just how to answer, but which kind of thinking is appropriate.


Why This Architecture Changes Enterprise AI Strategy

The coexistence of Claude and GPT inside Copilot represents a shift away from model-centric thinking. Enterprises no longer need to standardize on a single AI model and hope it performs well across every scenario. Instead, they can design AI solutions where different models are used intentionally, transparently, and safely.

Copilot becomes the stable foundation — the AI operating layer — while models evolve underneath it. As new models emerge or existing ones improve, organizations can adopt them without redesigning their entire AI strategy. Governance, security, compliance, and user experience remain consistent, even as intelligence becomes more powerful.

This is the same architectural principle that made cloud platforms successful, now applied to AI.

Claude and GPT inside Copilot are not competitors. They are complementary forms of intelligence, coordinated by a platform designed for enterprise realities. This approach signals a future where AI is no longer about choosing the “best” model, but about building systems that can think differently when needed, without losing control.

Thursday, August 23, 2018

SharePoint 2013 Site collection Launch page – Access Denied error


Recently I ran into an issue while logging in to one of our host-named web applications. All of a sudden, when I tried to open the site, it showed an access denied error. I removed and re-added myself as Site Collection Administrator, but got the same error. I also added myself as Super Reader and Super User in the web application policy, but the issue persisted.

After checking for some more time, I found that I was able to access all the pages and libraries by entering the URL directly (e.g., /SitePages/Home.aspx, /Pages/Default.aspx). I verified whether this was an issue with the DNS entry, but I could navigate to the site; it still threw the access denied error. I also checked the authentication providers for any issue with claims authentication, but nothing worked.

After some time I found this post on the issue: the authentication settings had become corrupted. Enabling and then disabling anonymous access for the web application resolved the issue.

Here are the steps to fix the issue:

Open SharePoint 2013 Central Administration site, click on Manage web applications.



Select the web application that has the issue and click on Authentication Providers from the ribbon bar.


Open the authentication provider zone in use; typically this will be Default.


Check the Enable anonymous access option and then save the authentication settings. Saving can take some time, so after clicking Save, wait for the Authentication Settings window to close.





After the Authentication Settings window closes, we should be able to reach the home page without any Access Denied errors.

Since we don't want to leave anonymous access enabled, open the authentication provider zone once more, un-check Enable anonymous access, and save the authentication settings. We should still be able to access the root site of the web application, and the issue is fixed.

Thursday, July 12, 2018

SharePoint 2010 – Unable to edit cell in Datasheet view – “This cell is read only”


When I try to edit a SharePoint 2010 list in Datasheet view (the Excel-like grid), I see an error saying “The Selected Cells are Read-only”, as shown in the image below.


  • Verified the user's permissions to the list – the user has Full Control
  • Is the item checked out by anyone? – No (this is not a document library)
  • Do any attachments have read-only flags? – No
  • Is the current column a default system column (like Created, Modified, etc.)? – No
After checking all these details, I found a Microsoft Support article about the fix. We had Content Approval enabled on the list, and that was blocking Datasheet view editing. Turning off Content Approval fixes the error.

Here are the steps to turn off content approval:
  • In the List select Settings
  • Select List Settings
  • Select Versioning Settings
  • In the Content Approval section select No for "Require content approval for submitted items"

Now we are able to edit the items in Datasheet view.
Hope this helps.