Mitigating AI Risks: Governance Best Practices for Microsoft 365 Copilot

NOTE: This article is based on information from a session about AI risk and governance recommendations by Gravity Union team members Jas Shukla and Jeff Dunbar. To see them present this session, check out the webinar recording on the Gravity Union YouTube channel!

The integration of AI into business processes has brought significant advancements, but it also introduces a range of risks that organizations must address. According to research by Microsoft:

“97% of organizations had concerns about implementing an AI strategy despite initial excitement, due to lack of controls to detect and mitigate risks and leak of intellectual property through AI tools.”

This statistic highlights the need for governance practices to safeguard against the potential risks of AI.

In this article, we will explore organizational risks to be aware of when using generative AI, along with our AI experts’ governance recommendations to mitigate these risks during implementation and everyday use.

For the full AI governance best practices checklist, check our brochure below.

Understand AI Risks

Data Privacy and Security

Before implementing AI, stakeholders should ask: will AI expose our confidential data externally and/or internally? There may be legal implications and risks to your organization. It's crucial to consider whether AI breaches privacy rights when collecting information from users without consent.

Copyright or Intellectual Property Risks

Large Language Models (LLMs) are trained on vast amounts of existing text and images. Because the provenance of that training data is often unclear, generated output can pose risks related to copyright and intellectual property.

Inaccurate or Biased Information

LLMs are trained on data that may not be up to date, accurate, or free from bias. Employees might make incorrect business decisions based on inaccurate or incomplete AI-generated information, which can lead to liability issues. It's important to ask: who is responsible if AI makes a mistake?

While using AI in your organization can pose the above risks, it’s unlikely you can ‘block’ AI across your organization. Many employees are already “bringing AI to work” by using generative AI in their day-to-day work. And as we have shared in our Copilot Guide, there is a lot of organizational value to be gained from properly utilizing AI. Instead of avoiding AI, get ahead of these risks by starting your governance for AI now.

AI Governance Recommendations

1. Build AI into Your Strategies

Proactively build AI into your organizational strategies, understand and define how AI will play a role in your organization, and measure its value. Develop AI Usage Policies that include acceptable technologies and applications (e.g., ChatGPT vs. Copilot vs. internal applications vs. other third-party tools), and procedures to follow when using GenAI, such as validating information and refining feedback.

2. Develop Roles & Responsibilities

Implement processes, controls, and accountability structures. Identify who is responsible for AI within your organization and consider whether you need a risk officer who understands AI. Evaluate if you have or need AI experts.

3. Start Staff Education (ASAP)

Focus on AI strengths, bias awareness, and upskilling your team as soon as possible. Educate staff on policies and procedures for safe and responsible AI use, train them to validate AI outputs, and help them understand how LLMs work. Upskill employees to use AI tools effectively and write effective prompts, and hire for advanced AI skills if necessary.

4. Design Your Content Source

Remember that AI is not a magic wand: it won’t fix poor content, bad information architecture, ROT (Redundant, Obsolete, Trivial/Transitory content), multiple copies of the same data, or weak security and access controls. Establish a solid foundation with good information architecture, strong security, content controls, lifecycle management, and metadata in SharePoint.
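To make the ROT cleanup step concrete, here is a minimal sketch of how you might flag candidate ROT content in a file share before an AI rollout. This is a hypothetical illustration, not a Gravity Union or Microsoft tool: it treats files with identical content hashes as redundant copies and files untouched for a configurable period as obsolete candidates.

```python
import hashlib
import time
from pathlib import Path

# Assumption for this sketch: content untouched for 3 years is an "obsolete" candidate.
STALE_AFTER_DAYS = 365 * 3


def find_rot(root: str):
    """Flag candidate ROT content under `root`: exact duplicates and stale files."""
    seen_hashes = {}  # content hash -> first path seen with that content
    duplicates, stale = [], []
    cutoff = time.time() - STALE_AFTER_DAYS * 86400
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in seen_hashes:
            duplicates.append((path, seen_hashes[digest]))  # redundant copy
        else:
            seen_hashes[digest] = path
        if path.stat().st_mtime < cutoff:
            stale.append(path)  # untouched for longer than the threshold
    return duplicates, stale
```

A real cleanup would route these candidates through human review rather than deleting them automatically, since "trivial" is a judgment call that a hash or timestamp can't make.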

5. Properly Secure Your Content

Implement robust security measures around important content. Use Sensitivity Labels and DLP policies to secure content. Leverage SharePoint Advanced Management (SAM) for additional controls such as site access restrictions, advanced reporting capabilities, conditional access policies for sites, and site lifecycle policies for inactive sites.
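Conceptually, a DLP policy works by matching content against sensitive-information patterns and acting on hits. The sketch below is only an illustration of that idea, not how Microsoft Purview implements it (Purview's sensitive information types add checksums, keywords, and confidence levels on top of pattern matching):

```python
import re

# Illustrative patterns only. Real DLP engines combine patterns with
# checksum validation and supporting evidence to reduce false positives.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def scan_for_sensitive(text: str) -> list[str]:
    """Return the names of sensitive-info types detected in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]
```

In a real deployment you would define these rules in the Purview compliance portal rather than in code, and pair them with Sensitivity Labels so that Copilot respects the same boundaries as your users.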

6. Manage the Lifecycle of Your Content

Automate content lifecycle management as much as possible. Implement a Records Management solution to dispose of content over time. Use Microsoft Purview Information Protection or third-party tools such as Collabspace to properly manage content lifecycle. Consider Copilot Prompt Retention and adjust your chat policies accordingly. Utilize Microsoft Purview’s AI Hub to monitor and report AI activity, identify gaps, and apply optimal data handling and storing policies.
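The core of any records disposition process is simple: each item carries a retention label, and once its retention period elapses it becomes eligible for disposal. Here is a minimal sketch of that logic with a hypothetical retention schedule (the labels and periods below are invented for illustration; your actual schedule lives in your records management tool):

```python
from datetime import date, timedelta

# Hypothetical retention schedule: label -> retention period in days.
RETENTION_SCHEDULE = {
    "contract": 7 * 365,
    "meeting_notes": 2 * 365,
    "copilot_prompt_log": 180,  # assumption: short retention for chat/prompt history
}


def disposition_due(label: str, created: date, today: date) -> bool:
    """True once the item's retention period has elapsed and it can be disposed."""
    period = RETENTION_SCHEDULE.get(label)
    if period is None:
        return False  # unlabeled content is held pending review, not auto-disposed
    return today >= created + timedelta(days=period)
```

Treating Copilot prompt history as just another labeled content type, as in the sketch, is the key idea: AI-generated and AI-consumed content should flow through the same lifecycle policies as everything else.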

Summary

AI introduces risks like data privacy, intellectual property issues, and inaccurate information, but it is essential to integrate AI into your organizational strategies. Build strong governance practices, secure your content, and manage its lifecycle effectively. Train your staff to use AI responsibly, and don’t be afraid to ask for additional support if you need it. The Gravity Union team offers AI services for readiness assessment, implementation, training, and more. Explore our Copilot Accelerator offerings to enhance your AI capabilities.

To ensure your organization is following these governance practices, check out our PDF checklist below:

Check out the webinar that inspired this article on our YouTube channel and contact us if you have any questions. We’re rooting for you to start with strong governance for a safe and successful organizational AI journey.

Nadia Lepak

Nadia has 6+ years of content marketing experience for B2B compliance-focused organizations with a focus on SaaS solutions, M365, and information governance.
