By Suleman
Critical Flaw in Microsoft 365 Copilot Exposed Sensitive Data
Introduction
A cybersecurity researcher has disclosed a major vulnerability in Microsoft 365 Copilot that could be exploited to steal sensitive user data. Johann Rehberger discovered the bug and publicly disclosed it on 26 August. The exploit chains together several techniques: prompt injection, automatic tool invocation, and ASCII smuggling.
How the Attack Worked
The initial point of attack was a hidden prompt injection delivered in an email or document. Once triggered, the injection instructed Microsoft 365 Copilot to retrieve additional emails and documents without any user consent. The attacker then used ASCII smuggling to embed the sensitive information, encoded as invisible Unicode characters, inside what appeared to be innocuous hyperlinks. The hidden data, often multi-factor authentication (MFA) codes, would then be sent to an attacker-controlled server.
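The invisible-character trick described above can be illustrated with a short sketch. This is an assumption-laden illustration, not Rehberger's actual exploit code: it maps printable ASCII into the Unicode "Tags" block (U+E0020 to U+E007F), whose characters render as nothing in most user interfaces yet survive inside link text when copied or transmitted.

```python
# Sketch of ASCII smuggling: printable ASCII is shifted into the
# invisible Unicode tag block by adding 0xE0000 to each code point.
# The function names here are illustrative, not from any real exploit.

def smuggle(text: str) -> str:
    """Encode printable ASCII as invisible Unicode tag characters."""
    return "".join(chr(0xE0000 + ord(c)) for c in text)

def reveal(hidden: str) -> str:
    """Decode tag-block characters back to ASCII (the attacker's side)."""
    return "".join(
        chr(ord(c) - 0xE0000)
        for c in hidden
        if 0xE0000 < ord(c) <= 0xE007F
    )

mfa_code = "123456"  # hypothetical secret harvested by the injection
link_text = "Click here" + smuggle(mfa_code)  # looks like plain text
print(reveal(link_text))  # the hidden payload survives inside the link
```

Because the tag characters have no visible glyphs, the link text above would appear to a user as just "Click here", while the full string carries the secret to whatever URL the attacker controls.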
Microsoft's Response and Patch
Rehberger reported the vulnerability to Microsoft in January 2024. Microsoft initially deemed it low severity, but a proof of concept from Rehberger demonstrated that the exploit chain could leak confidential data, and Microsoft fixed the vulnerability in July 2024. While Rehberger declined to provide specifics on the patch, he did say his initial exploit paths no longer function in the updated software.
While Microsoft has already fixed the flaw, this vulnerability highlights just how dangerous AI-powered tools built on large language models (LLMs) can become when they process untrusted content without all their potential hazards considered. Security teams are urged to configure Data Loss Prevention (DLP) controls to guard against future unauthorized access attempts.
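One lightweight check such a control might perform, sketched here with hypothetical helper names rather than any vendor's DLP API, is to scan outbound text for the invisible Unicode tag characters that carry an ASCII-smuggled payload, and to strip them before the text is rendered or sent:

```python
# Illustrative DLP-style check (hypothetical helpers, not a Microsoft API):
# flag or sanitize text containing invisible Unicode tag characters,
# the carrier channel used by ASCII smuggling.

TAG_BLOCK = range(0xE0000, 0xE0080)  # the Unicode "Tags" block

def contains_hidden_tags(text: str) -> bool:
    """Return True if the text carries invisible tag-block characters."""
    return any(ord(ch) in TAG_BLOCK for ch in text)

def strip_hidden_tags(text: str) -> str:
    """Sanitize text by dropping tag-block characters before rendering."""
    return "".join(ch for ch in text if ord(ch) not in TAG_BLOCK)
```

Blocking or sanitizing this character range costs nothing for ordinary text, since the tag block has no legitimate use in user-facing documents or hyperlinks.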
Preventing Future Attacks
Rehberger cautioned that companies using AI tools such as Microsoft 365 Copilot need to assess their risk exposure, even with a human in the loop, to avoid abuse of systems that handle sensitive information and decisions. He suggested putting stronger security controls in place to keep this data from leaking and ensuring that the development and deployment of such AI tools are carefully managed. Prompt injection attacks like this one demonstrate that AI integrated into corporate environments requires constant monitoring and protection.
Conclusion
This latest vulnerability in Microsoft 365 Copilot highlights the fact that the threat landscape is always evolving, particularly with AI tools handling sensitive data and assisting us every day. Although Microsoft has addressed the problem, vigilance and robust security controls are necessary to counteract similar attacks in the future. In a world where prompt injection and ASCII smuggling are real possibilities, security best practices are key to protecting user data.
FAQs
What was the vulnerability in Microsoft 365 Copilot?
The flaw allowed attackers to steal sensitive data by combining prompt injection, automatic tool invocation, and ASCII smuggling.
How did the attack work?
Attackers used prompt injection to instruct Copilot to search for emails and documents, then exfiltrated the pilfered sensitive information as invisible Unicode characters hidden within hyperlinks.
When was the vulnerability reported and disclosed?
It was reported to Microsoft in January 2024 and made public this past August.
Has the vulnerability been fixed?
Yes. Microsoft patched it in July 2024, and the previous exploitation methods no longer work.
What should organizations do to prevent similar attacks?
They should evaluate their risk exposure and put preventive measures in place, such as Data Loss Prevention (DLP) solutions, to stop AI-tool-based attacks before they occur.