Press "Enter" to skip to content

DeepSeek Leaks a Million Chat Records—And the Pentagon Wants Nothing to Do with It

I also wanted to do some security follow-ups on stories from this week.

DeepSeek left two unsecured databases publicly exposed, containing over one million chat records, including sensitive user data and operational information. The databases were discovered by Wiz Research during a security assessment and were accessible without any authentication. The exposed information included user queries, authentication keys, and internal infrastructure details dating back to early January 2025. After the discovery was disclosed, DeepSeek promptly secured the databases.
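
For context, Wiz's write-up described the exposed service as a ClickHouse database that answered queries over its standard HTTP interface with no credentials at all. Here's a minimal sketch of what that kind of check looks like; the host below is a placeholder, not DeepSeek's actual endpoint:

```python
# Minimal sketch: probe a host for an unauthenticated ClickHouse HTTP
# interface, the class of misconfiguration Wiz Research described.
# The host is a placeholder; 8123 is ClickHouse's default HTTP port.
import requests

def probe_clickhouse(host: str, port: int = 8123) -> None:
    url = f"http://{host}:{port}/?query=SHOW%20DATABASES"
    try:
        resp = requests.get(url, timeout=5)
    except requests.RequestException:
        return  # port closed or unreachable
    if resp.ok and resp.text.strip():
        # A non-empty answer with no credentials supplied means anyone
        # on the internet can run queries against this database.
        print(f"{host}:{port} answers queries without authentication:")
        print(resp.text.strip())

probe_clickhouse("db.example.com")  # hypothetical host
```

That no-auth round trip is the whole story: no exploit, no stolen credentials, just a database left listening on the open internet.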

The Pentagon is taking urgent measures to block DeepSeek, the popular AI chatbot, after reports surfaced that Department of Defense employees had connected their work computers to the company's China-based servers for at least two days. DeepSeek's terms of service state that user data is stored in China and governed by Chinese law, which requires cooperation with intelligence agencies. The U.S. Navy has already restricted access to DeepSeek over security and ethical concerns.

They aren’t alone. Hundreds of companies, particularly those with government connections, have blocked DeepSeek over concerns that data could leak to the Chinese government, according to cybersecurity firms Armis and Netskope.
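
For organizations making the same call, the blocking mechanics are usually simple domain deny-listing at the DNS filter or secure web gateway. A minimal sketch of the suffix-matching logic involved; the domain list and function here are hypothetical, not the configuration Armis or Netskope actually describe:

```python
# Sketch of a deny-list check an egress proxy or DNS filter might apply.
# The domain list and hook are illustrative, not a real product's config.
BLOCKED_AI_DOMAINS = {
    "deepseek.com",
    "chat.deepseek.com",
    "api.deepseek.com",
}

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname or any parent domain is deny-listed."""
    labels = hostname.lower().rstrip(".").split(".")
    # Check the hostname plus every parent domain, e.g.
    # a.chat.deepseek.com -> chat.deepseek.com -> deepseek.com -> com.
    return any(".".join(labels[i:]) in BLOCKED_AI_DOMAINS
               for i in range(len(labels)))

assert is_blocked("chat.deepseek.com")
assert not is_blocked("example.com")
```

Real deployments put this in a DNS filter or gateway policy rather than application code, but the suffix matching is the same idea.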

Why do we care?

DeepSeek’s database exposure—over one million chat records, authentication keys, and infrastructure details—reinforces a pattern: AI providers are not securing user data properly. This is the same kind of misconfiguration issue that has plagued cloud services for years, but now it involves AI tools that businesses increasingly integrate into workflows.

DeepSeek’s database exposure is serious, but U.S.-based AI companies have had their share of breaches. OpenAI, Microsoft, and Google have all faced security incidents related to AI models or cloud misconfigurations. The real takeaway is that all AI vendors need to be scrutinized, not just those from China.

Database misconfigurations are a red flag. Organizations should demand transparency on how AI vendors protect user data. If an AI tool stores data in a jurisdiction with aggressive intelligence laws, that poses a potential business and compliance risk.
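
Part of that scrutiny is knowing which AI endpoints your own network already talks to. A rough sketch, assuming a generic proxy access log where the destination host appears somewhere on each line; the log path, format, and watchlist below are illustrative assumptions, not any specific product's output:

```python
# Rough sketch: count traffic to AI vendor endpoints in a proxy log so
# the vendor conversation starts from real usage data. The log path,
# format, and watchlist are illustrative assumptions.
from collections import Counter

AI_ENDPOINTS = ("deepseek.com", "openai.com", "anthropic.com")

hits = Counter()
with open("/var/log/proxy/access.log") as log:  # hypothetical path
    for line in log:
        for domain in AI_ENDPOINTS:
            if domain in line:
                hits[domain] += 1

for domain, count in hits.most_common():
    print(f"{domain}: {count} requests")
```

Even a crude count like this gives IT a concrete starting point for deciding which vendors to question, and which to block.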

This isn’t just about one chatbot—it’s a broader signal that AI security and data sovereignty are now front-line concerns in IT strategy.