How Microsoft Copilot Can Help You Improve Compliance

Common concerns around Microsoft Copilot include the risk of oversharing, exposing security weaknesses in your content, privacy and IP infringement issues, and the potential for inaccurate or hallucinatory content. 

Yet, Copilot—and the technology that comes with it—may actually be the very tool you need to enhance compliance. 

Let’s explore some of these risks and how adopting Copilot can help. 

Oversharing and Security Weaknesses 

Oversharing is a common generative AI concern. It refers to situations where a Copilot response might give users access to content they shouldn't see.

Some of this concern stems from misconceptions about how Copilot works. 

Clarification #1: Copilot is designed to respect security permissions on content and will not include content in a response if the user does not have access to it.

Clarification #2: Copilot is not trained on your organization's content.

The generative AI model that Copilot uses has no awareness of your content and does not “train” or “learn” from it as you use it. Instead, Copilot uses a technique called RAG—Retrieval-Augmented Generation—to incorporate your organization’s content into its responses. This process occurs every time you make a Copilot request and is often referred to as “grounding the response.” 

How does grounding work? Depending on where you initiate your Copilot question (e.g., Outlook, Word, Excel, or the Microsoft 365 app), Microsoft searches Microsoft Graph for relevant content to include with your prompt. Because Graph results are scoped to you and your permissions, Copilot primarily pulls information you already have access to and work with regularly.
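To make this concrete, here is a minimal Python sketch of the general retrieve-augment-generate pattern. It illustrates the pattern only, not Microsoft's implementation: search_user_content and generate are hypothetical stand-ins for a permission-trimmed search (such as Microsoft Graph) and the model call.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    snippet: str

def search_user_content(query: str, on_behalf_of: str) -> list[Doc]:
    # Hypothetical stand-in for a permission-trimmed search (e.g., Microsoft Graph).
    # It returns only documents the requesting user already has access to.
    return [Doc(snippet="Q3 budget summary shared with you last week.")]

def generate(prompt: str) -> str:
    # Hypothetical stand-in for the model call. The grounded prompt is used for
    # this one response only; it is not stored or used to train the model.
    return f"(model response grounded in: {prompt[:60]}...)"

def answer_with_rag(user_prompt: str, user_id: str) -> str:
    # 1. Retrieve: search only content this user can already see.
    documents = search_user_content(query=user_prompt, on_behalf_of=user_id)
    # 2. Augment ("ground"): add the retrieved snippets to the prompt.
    context = "\n\n".join(doc.snippet for doc in documents)
    grounded_prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {user_prompt}"
    )
    # 3. Generate: the model answers from this single grounded prompt.
    return generate(grounded_prompt)

print(answer_with_rag("Summarize our Q3 budget", user_id="someone@contoso.com"))
```

The key point the sketch makes is that grounding happens per request: retrieved content flows into one prompt and one response, never into the model's training data.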

The bottom line: Copilot’s security model and RAG process make it highly unlikely that Copilot will return content you aren’t already aware of. However, if content is not properly secured, there is a possibility it will appear in a response—not because of Copilot’s design, but due to broader security issues. 

This brings us to the other half of the concern: poor security practices. 

Poor security isn’t just a Copilot issue. Employees can also find content through SharePoint search, attempt to guess content URL paths, or access documents that were accidentally shared with them. 

Where Copilot can help is through SharePoint Advanced Management (SAM), which is included with a Copilot license. Previously, SAM was only available through a separate SharePoint administrator-oriented license. Now, any company with at least one Copilot license can use SAM. 

SAM is an AI-enabled reporting and governance tool that helps you manage security risks by: 

  • Automatically discovering and managing sprawling SharePoint sites and content 

  • Using AI to identify content that contains sensitive information 

  • Most importantly: Allowing you to exclude specific content from Copilot results, regardless of security settings 

For organizations still concerned about oversharing, Microsoft provides an oversharing blueprint for Copilot rollouts, enabling a gradual implementation to ensure private, confidential, or improperly secured content does not appear in Copilot answers. 

Privacy and IP Protection 

The second major concern is privacy and intellectual property protection. 

Generative AI became widely known when ChatGPT launched in November 2022. Like phone apps and cloud-based applications before it, AI has quickly become a shadow IT concern in many organizations. If employees use non-enterprise generative AI services, your data may be at risk. 

Copilot is designed to protect your content:

  • Microsoft’s architecture does not use your content to train the LLM. 

  • RAG means your content is used only for a single response: it is passed securely to the model as context and never stored.

  • OpenAI's ChatGPT can offer similar protections, but only in its Enterprise subscriptions.

Even if only a few employees actively use generative AI, deploying Copilot improves security compared to shadow AI use. 

Another risk is accidental inclusion of copyrighted, trademarked, or other IP-protected material in AI-generated content. When you purchase Copilot, Microsoft indemnifies you against this risk through its Customer Copyright Commitment. 

This indemnity—an extension of Microsoft’s other contractual protections—covers any content created by a paid version of Copilot. If a Copilot-generated response leads to legal action, Microsoft will cover your legal fees and any party-to-party costs owed to the litigant. No free generative AI service provides this indemnity.

Alongside this protection, Microsoft also maintains that it does not claim IP ownership of content created with Copilot, another risk consideration with publicly available generative models.

Inaccurate or Hallucinatory Content 

The final concern is the potential for Copilot to produce inaccurate or hallucinatory responses. 

This is a valid concern, as it relates to how large language models (LLMs) function. Organizations should address this risk in their Copilot adoption plans through guidelines, training, and regular review of Copilot-generated content. 

How Microsoft Addresses This Issue: 

  1. User Warnings: Copilot responses include a disclaimer that the information may not be accurate and should be verified. 

  2. Improved Model Performance: Copilot uses GPT-4o, which is significantly better at avoiding hallucinations than earlier models.

  3. Prompt Engineering: Users can reduce inaccuracies (see the example after this list) by:

    • Instructing the model not to return unverifiable content

    • Asking the model to include content references in its responses

  4. Copilot Studio: This tool allows organizations to:

    • Customize Copilot experiences

    • Limit content sources

    • Exclude the LLM’s external knowledge

    • Define pre-written prompt logic to increase accuracy
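As a rough illustration of the prompt-engineering point above, the sketch below wraps a question with instructions that discourage unverifiable claims and ask for references. The wording and the ask_copilot helper are hypothetical examples, not an official Copilot API.

```python
def build_grounded_prompt(question: str) -> str:
    # Wrap the question with instructions that discourage unverifiable answers
    # and request references (illustrative wording only).
    return (
        "Answer the question below. If you cannot verify a fact from the "
        "documents available to you, say so instead of guessing. "
        "Cite the source document for each claim you make.\n\n"
        f"Question: {question}"
    )

def ask_copilot(prompt: str) -> str:
    # Hypothetical placeholder for submitting the prompt to Copilot.
    return f"(response to: {prompt[:50]}...)"

print(ask_copilot(build_grounded_prompt("What were last quarter's top support issues?")))
```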

For critical use cases, combining these strategies can help organizations achieve a very high level of accuracy.

Additionally, Copilot reporting tools provide insights into how users interact with the tool, allowing administrators to review prompts and responses for quality control. 

Final Thoughts 

While generative AI introduces new risks and challenges, Microsoft’s Copilot architecture, customer indemnity, and SharePoint Advanced Management (SAM) help organizations manage and mitigate these risks thoughtfully.

By leveraging Copilot strategically, you not only address security concerns but can also improve compliance in ways that would be difficult to achieve otherwise.

Need AI or compliance support? Let the Gravity Union team help you out with our AI services. Reach out with any questions!

Brian Edwards

Brian Edwards is the Director of Artificial Intelligence at Gravity Union, where he drives innovation in delivery and operations while enhancing customer success with AI. With over 25 years of consulting experience, he pioneered a collaboration practice in SharePoint in 2001 and has served on multiple Microsoft client advisory boards. Passionate about exploring new technology frontiers, he thrives on bringing education and insights to future adopters—keeping both his audiences’ minds and his own ADHD brain engaged. 
