Press "Enter" to skip to content

ChatGPT Isn’t Killing Google Search—And AI Lies More Than You’d Think

New data from Google indicates that search queries are continuing to grow, countering concerns that OpenAI’s ChatGPT is impacting Google’s business. The company reported over five trillion searches annually, with Barclays analysts estimating a growth rate exceeding 20 percent since ChatGPT’s launch in late 2022. Active users of ChatGPT have surged from one hundred million weekly in November 2023 to a reported four hundred million in February 2025. Despite these figures, analysts believe that OpenAI is not currently affecting Google’s search volume.

A new benchmark developed by researchers from the Center for AI Safety and Scale AI aims to measure how much artificial intelligence models lie. The Model Alignment between Statements and Knowledge benchmark, or MASK, evaluates the ability of AI models to knowingly deceive users. In testing 30 frontier models using over 1,500 queries designed to elicit lies, the researchers found that larger models do not necessarily correlate with higher honesty. For instance, Grok 2 exhibited the highest dishonesty rate at 63%, while Claude 3.7 Sonnet had the highest honesty rate at 46.9%. The researchers emphasized the risks posed by dishonest AI, including potential legal and financial harm, and highlighted the need for a reliable method to assess honesty in AI systems. The benchmark dataset is now publicly available, aiming to foster advancements towards more truthful AI.

Why do we care?

The data indicating that Google search queries continue to grow despite ChatGPT’s rise challenges the narrative that AI-powered chat interfaces are replacing traditional search. For MSPs and IT service providers and their customers, this signals that SEO strategies and online discoverability remain critical for businesses.

The MASK benchmark has direct implications for businesses deploying AI tools. The findings—that some AI models exhibit high dishonesty rates—underscore the risks of AI hallucinations, which are particularly concerning for industries reliant on accurate data, such as:

  • Legal and compliance sectors (where false information could have legal consequences).
  • Financial services (where incorrect AI-generated insights could lead to financial loss).
  • Cybersecurity and IT services (where deceptive AI responses could create security vulnerabilities).

These developments reinforce that AI isn’t a perfect replacement for existing tools—it’s an augmentation that requires oversight. IT service providers should position themselves as trusted advisors in AI governance, security, and business intelligence.