Press "Enter" to skip to content

From Encrypted Chats to AI Slip-Ups—More in “What Could Possibly Go Wrong?”

During a Senate Intelligence Committee hearing, Democratic senators questioned top national security officials, including Director of National Intelligence Tulsi Gabbard and CIA Director John Ratcliffe, about their involvement in a Signal chat that discussed military plans and reportedly included a journalist. Senators expressed concern over the potential leak of sensitive information and criticized the “incompetence” of including a reporter in the discussions. Gabbard and Ratcliffe frequently responded with “I don’t recall” when asked about specific military targets or operational details, raising doubts among senators about whether classified information was discussed. Virginia Senator Mark Warner emphasized the potential danger of releasing such information, stating that it could jeopardize American lives. As this pod was being recorded, the Atlantic released the transcripts.

OpenAI is facing a new privacy complaint in Europe over its AI chatbot, ChatGPT, which has been accused of generating false information, including a false claim that a Norwegian man had been convicted of child murder. The complaint, supported by the privacy rights group Noyb, argues that ChatGPT can produce defamatory content that violates the European Union’s General Data Protection Regulation, or GDPR. Joakim Söderberg, a data protection lawyer at Noyb, emphasized that under GDPR, personal data must be accurate, and users have the right to rectify false information. Confirmed breaches of GDPR can result in penalties of up to four percent of a company’s global annual turnover. The complaint follows earlier incidents in which ChatGPT produced incorrect personal data and offered individuals no way to correct it.

Clearview AI has reached a settlement in a class-action privacy lawsuit, estimated to be worth over fifty million dollars. A federal judge approved the settlement, which addresses allegations that the company violated the privacy rights of millions of Americans by scraping their facial images from the internet without consent. Notably, the settlement structure gives plaintiffs and their lawyers a stake in Clearview’s future value rather than a one-time payment. This approach stems from the company’s financial constraints, as nearly all Americans could be considered class members due to their online presence. The case was litigated in federal court in Illinois, where Clearview faced accusations of violating the state’s Biometric Information Privacy Act. Despite the settlement, the company does not admit liability. Additionally, twenty-two state attorneys general expressed concerns that the settlement may not sufficiently prevent future violations. In a separate agreement in 2022, Clearview pledged to limit access to its database for private companies and Illinois government agencies for five years.

Why do we care?

The controversy is a case study in what happens when high-level personnel use unofficial or uncontrolled communication channels—even when those platforms are encrypted. Signal may be secure, but introducing non-cleared participants (e.g., journalists) into the chat is a major failure of operational controls. The same issue applies in enterprises where executives default to personal messaging apps. See my detailed analysis from yesterday.

Generative AI can fabricate false statements about real people, but most tools—including ChatGPT—do not yet support real-time data rectification or deletion as required under GDPR. For IT consultants integrating these tools into customer-facing applications, this is a legal exposure point.

Clearview’s legal troubles stem from scraping images without consent, a tactic still common among data-hungry startups. IT buyers must ask whether the model or dataset powering a given tool complies with relevant data-protection and biometric-privacy laws.