Your AI Tools Are Now a Cybersecurity Risk

Your AI Tools Are Now a Cybersecurity Risk

share:

You brought in AI tools to make your team faster. Your spreadsheets auto-summarise, your chatbot answers staff questions, and productivity is up across the board.

But here is the problem nobody warned you about: those same AI tools are now being used against you.

In the past week alone, two major incidents showed just how dangerous AI productivity tools can be when attackers get creative. And these are not theoretical risks. They are happening right now, to real businesses.

A Zero-Click Bug That Turns Excel Into a Spy

Microsoft’s March 2026 Patch Tuesday included a vulnerability that should make every business owner sit up and pay attention.

CVE-2026-26144 is a critical flaw in Microsoft Excel that weaponises Copilot Agent mode to silently steal your data. No clicking required. No malicious links to spot. Just opening (or even previewing) a compromised spreadsheet is enough.

Here is how it works: an attacker crafts an Excel file with hidden cross-site scripting code. When Copilot Agent processes the file, it triggers unintended network requests that send your data to an external server. The user sees nothing unusual. No pop-ups, no warnings, no suspicious behaviour.

“Information disclosure vulnerabilities are especially dangerous in corporate environments where Excel files often contain financial data, intellectual property, or operational records. If exploited, attackers could silently extract confidential information from internal systems without triggering obvious alerts.” — Alex Vovk, CEO, Action1

Think about what lives in your Excel files. Client pricing. Staff salaries. Project costings. Tender submissions. Financial forecasts. All of it potentially exfiltrated without anyone noticing.

This is not a bug in some obscure tool. This is Microsoft Excel with Copilot, the exact combination that millions of UK businesses are actively rolling out right now.

An AI Chatbot Hacked in Two Hours Flat

The second incident is arguably even more alarming.

Security researchers at CodeWall pointed an autonomous AI agent at McKinsey’s internal AI platform, Lilli. This is a chatbot used by over 40,000 McKinsey employees, processing more than 500,000 prompts every month.

Within two hours, the AI agent had achieved full read and write access to the entire production database. That included:

  • 46.5 million chat messages about strategy, mergers, and client engagements, all in plaintext
  • 728,000 files containing confidential client data
  • 57,000 user accounts
  • 95 system prompts controlling the AI’s behaviour, all writable

Let that sink in. An attacker could not only read everything the chatbot knew, they could change how it responds. Every consultant asking Lilli for advice could have been fed manipulated, poisoned information without knowing it.

The attack was fully autonomous. The AI agent researched the target, found exposed API documentation, identified 22 endpoints that required no authentication, discovered a SQL injection vulnerability, and exploited it. No human hacker sitting at a keyboard. Just one AI attacking another.

McKinsey patched the issues within hours of disclosure. But the lesson is clear: if one of the world’s most prestigious consultancies can get caught out, your business is not immune.

Why This Matters for UK SMBs

You might be thinking: “We are not McKinsey. We do not have a custom AI platform.” Fair point. But the underlying risk applies to every business adopting AI tools.

You are probably already exposed

If your business uses any of the following, you have AI-related attack surface to think about:

  • Microsoft 365 Copilot in Word, Excel, Outlook, or Teams
  • AI chatbots for customer service or internal knowledge bases
  • AI-powered CRM tools that summarise client interactions
  • Code assistants like GitHub Copilot for your development team
  • AI features in accounting software like Xero or QuickBooks

Each of these tools processes your sensitive data. Each one introduces new ways that data can be accessed, manipulated, or stolen.

The attack surface is growing faster than your defences

Traditional cybersecurity focuses on firewalls, antivirus, email filtering, and access controls. These are still essential. But AI tools create entirely new categories of risk:

  • Prompt injection: Attackers hide malicious instructions inside documents, emails, or web pages that your AI tools process.
  • Data exfiltration via AI agents: As the Excel bug showed, AI assistants can be tricked into sending your data to external servers.
  • Shadow AI: Staff using free AI tools to process company data without IT’s knowledge or approval.
  • Supply chain AI risk: Third-party software you rely on is embedding AI features. Each one is a potential entry point.

The sectors most at risk

Manufacturing, construction, and engineering firms often handle sensitive project data, tender documents, and intellectual property. Entertainment and media businesses deal with contracts, financial negotiations, and pre-release content. These are exactly the types of data that attackers target, and exactly the types of data your new AI tools are processing.

What You Can Do Right Now

The good news: you do not need to ban AI tools or go back to doing everything manually. You just need to be smart about how you adopt them.

1. Patch immediately, every time

The Excel Copilot bug was fixed in Microsoft’s March 2026 Patch Tuesday release. If you have not applied it yet, do it today. Not next week. Today. Set up a patching schedule that prioritises security updates. If you cannot patch immediately, restrict outbound network traffic from Office applications and monitor for unusual network requests from Excel processes.

2. Audit your AI tool usage

Do you actually know every AI tool your team is using? Conduct a quick audit: Which AI tools are officially approved? Which AI features are enabled in your existing software? Are staff using personal AI tools for work tasks? What data is each tool processing? You cannot secure what you do not know about.

3. Implement an AI acceptable use policy

Your team needs clear guidelines on which AI tools are approved, what types of data can and cannot be processed by AI tools, how to report suspicious AI behaviour, and rules around using personal AI accounts for company data. This does not need to be a 50-page document. A single clear page that everyone reads and signs is enough.

4. Apply the principle of least privilege to AI tools

Not every employee needs every AI feature. Consider disabling Copilot Agent mode for users who do not need it, restricting AI chatbot access to only the data each team requires, reviewing API permissions for any AI integrations, and segmenting sensitive data away from AI-accessible systems.

5. Monitor AI tool behaviour

Set up monitoring for unusual activity from AI-enabled applications: unexpected outbound network connections from Office apps, large data transfers from AI chatbot platforms, changes to AI system prompts or configurations, and unusual API call patterns.

6. Train your team

Your staff are your first line of defence. Make sure they understand that AI tools can be manipulated through the documents and emails they process, not to open unexpected Excel files from unknown sources, how to spot unusual AI behaviour and who to report it to, and why using unapproved AI tools puts the company at risk.

The Bigger Picture

AI adoption is not slowing down. UK businesses that embrace these tools will be more competitive, more efficient, and better positioned for growth. That is not in question.

But adopting AI without updating your security posture is like fitting a new front door while leaving the back windows wide open. The technology is powerful, and that power cuts both ways.

The businesses that will thrive are the ones that adopt AI tools thoughtfully, with proper security controls, clear policies, and ongoing monitoring. Not the ones that rush to enable every new feature without understanding the risks.

Need Help Securing Your AI Tools?

This is exactly the kind of challenge where having the right IT partner makes all the difference. At Magnetar IT, we help businesses across the Midlands adopt new technology securely, without slowing them down.

Whether you need help auditing your current AI tool usage, implementing security policies, or setting up monitoring, we have got you covered. With over 10 years of experience and a 98% client satisfaction rate, we do not just fix problems. We prevent them.

Get in touch today for a free consultation on securing your AI tools. Or call us to chat about your specific setup. No jargon, no pressure, just practical advice from people who understand both the technology and the business.

Date:

Author: Rafael Macedo

Inspired to improve your IT? Get in Touch!

Contact Us

Check out our social media: