Navigating AI Detection & Policy Implementation

The weekly newsletter of the Business of Tech, giving you new insights into the world of IT service delivery. 

Looking for stories from the podcast? Check out the pod itself on Apple Podcasts, Spotify, or daily in your inbox. Stories are available to everyone for five days, and to Patreon supporters forever.

Was this forwarded to you?  Join the list!

Navigating AI Detection & Policy Implementation


We all know that AI policies are on the horizon. Whether it’s rules within an organization or guidance from the government, sooner or later, we’re going to be involved in building out workplace parameters. The obvious place to start is detection, but what does AI detection even look like right now?

I wanted to know more about this increasingly important facet of AI, so I welcomed Jon Gillham, the founder of detection service Originality.AI, onto a bonus episode of the Business of Tech.

Here’s what he thinks you should know about the why and how of implementing AI detection policies.

Addressing AI’s potential risks

We’ve had plenty of conversations with AI folks about the benefits of the tech, but Gillham’s work is focused on something a bit different: the intersection of AI and its potential physical impact on people. I asked him what inspired him to get into this niche, and he said his background in marketing was the main instigator. It’s one of the sectors where AI-generated writing is exploding, so he understood the value of knowing when content is made by humans versus by AI.

As for how AI can actually hurt people, Gillham cited one concerning example. He was involved in research with The Guardian, where a book recommended on Amazon was not only AI-generated, but also told foragers to eat a small bite of a mushroom if they were unsure whether it was safe.

His main warning? Remember that AI really can hallucinate, and failing to keep humans in the loop with a tight AI policy can expose companies to all sorts of risks.

Mitigating harm through effective detection

So, how can we implement detection to ensure we’re avoiding these kinds of harm for our customers? I asked Gillham for his recommendation, and he explained that each company should figure out its unique risk profile. Some, like marketers, need to be concerned about getting slapped by Google for posting AI content, while others, like law firms, need to factor in reputational harm.

In short:

“Companies need to have an understanding on where AI content is being produced in their organization, and then understand the risk associated with it. If it’s marketing content, if it’s legal content, if it’s contractual content, wherever writers are typing in words, if there’s an incentive for them to use AI, and that produces a risk for the company, they need to be understanding it,” he said.

Originality’s approach

You might be wondering – didn’t OpenAI attempt to offer detection? And wasn’t it ineffective?

I asked Gillham to explain his approach to detecting human vs. AI content, and he compared early detection services to ChatGPT itself. When the chatbot first came out, people overhyped its intelligence far beyond what it was actually capable of. Detection is similar because although it can do a lot, it’s not perfect.

Originality.AI is trained on both human and AI content to learn the difference between the two. They use classifiers to produce probabilities of AI involvement (kind of like a weather forecast, which isn’t always right). On most open data sets they can test against, they’re usually about 99% accurate on AI content, with a 2.5% false positive rate.

For any detection skeptics out there, Gillham has this to say:

“You’re always balancing false positives with accuracy with these classifiers, so they’re never perfect…. For the misunderstanding that AI detectors don’t work – they do, depending on your use case and the accuracy that you require.”
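To make the probability-and-threshold idea concrete, here’s a minimal sketch in Python. The scores and the evaluate helper are invented for illustration (this is not Originality.AI’s model or API); it just shows how a detector’s probability output becomes an AI-or-human call, and how moving the threshold trades detection of AI content against false positives on human writing.

# Minimal sketch (illustrative only): turning a detector's probability scores
# into AI/human calls and measuring the accuracy / false-positive tradeoff.
# The scores below are made up; a real detector would produce the probabilities.

# Each item: (probability the text is AI-generated, true label)
eval_set = [
    (0.98, "ai"), (0.91, "ai"), (0.87, "ai"), (0.72, "ai"),
    (0.04, "human"), (0.11, "human"), (0.35, "human"), (0.62, "human"),
]

def evaluate(scores, threshold):
    """Return (share of AI text detected, share of human text falsely flagged)."""
    ai = [s for s, label in scores if label == "ai"]
    human = [s for s, label in scores if label == "human"]
    detected = sum(s >= threshold for s in ai) / len(ai)
    false_positives = sum(s >= threshold for s in human) / len(human)
    return detected, false_positives

for threshold in (0.5, 0.7, 0.9):
    detected, false_positives = evaluate(eval_set, threshold)
    print(f"threshold {threshold:.1f}: AI caught {detected:.0%}, "
          f"humans falsely flagged {false_positives:.0%}")

Raising the threshold flags fewer human writers by mistake but also catches less AI content; quoted figures like 99% accuracy with a 2.5% false positive rate describe one point on that tradeoff for a particular model and data set.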

Out of curiosity, I wanted to know how an AI detector would interpret Grammarly, which mainly corrects grammar. He said it would depend on the model. One of their models is designed for companies that say no AI, period; that one would detect human-created content that’s been edited by AI. For less strict companies, they also have a more standard model that doesn’t aim to detect light AI editing.

But in general, he confirmed that hybrid content – essentially human-generated writing cleaned up by AI – would typically end up classified as AI. As for what to do with that information, he again said parameters should be determined company by company. At his own company, for example, they let website content writers (who are human) use AI for editing support, but have chosen to list AI as a co-author on the articles:

“A paraphrasing tool would be able to turn any content into unique content. And so that’s why we classify what you just described as AI. It doesn’t mean it’s wrong. It just means that it should be classified as AI. And then our view is transparently communicating when it’s been a hybrid. The same way as if you had a co-author, you would credit your co-author,” he said.

How to use AI detectors effectively

But how exactly does Gillham recommend people use their detection tech? He suggests two things: focusing your parameters on the aspects of your business most impacted by AI’s risks (not just broadly across the whole organization), and assigning policy enforcement to a specific person or team (almost like appointing an editor).

“The organizations that have been the most effective at implementing this have been really clear upfront about what part of the organization they want to apply it to. When they try and apply it across the entire organization and try and feed everything in, it just doesn’t work… so get very clear on a small part of the organization that is the most at risk and apply detection within that part of the organization,” he said.

Then, make sure you know who’s responsible for overseeing AI’s application. Whatever policy gets enforced, make it clear who the editors in charge of policing are, and assign that person or team 100% responsibility for the words being used.
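If you want to picture what that looks like in practice, here’s a hypothetical sketch in Python. Everything in it (the high-risk areas, the threshold, the reviewer assignments, and the detect_ai_probability stand-in) is invented for illustration, not Originality.AI’s API; it simply shows detection scoped to the most at-risk parts of the business, with flagged items routed to a named reviewer.

# Hypothetical sketch: apply detection only to high-risk content areas and
# route anything flagged to the person responsible for that area.
# The areas, threshold, reviewers, and detector stand-in are all invented.

HIGH_RISK_AREAS = {"marketing", "legal"}
THRESHOLD = 0.7
REVIEWERS = {"marketing": "content editor", "legal": "general counsel"}

def detect_ai_probability(text: str) -> float:
    """Stand-in for a real detection call; returns a fixed score here."""
    return 0.85

def review_queue(documents):
    """Yield (reviewer, document) pairs for in-scope documents that get flagged."""
    for doc in documents:
        if doc["area"] not in HIGH_RISK_AREAS:
            continue  # out of scope: don't feed the whole organization in
        if detect_ai_probability(doc["text"]) >= THRESHOLD:
            yield REVIEWERS[doc["area"]], doc

docs = [
    {"area": "marketing", "text": "Draft blog post about our new service..."},
    {"area": "support", "text": "Reply to a customer ticket..."},  # skipped as out of scope
]
for reviewer, doc in review_queue(docs):
    print(f"{reviewer} reviews: {doc['text']}")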

Company policies vs. regulatory requirements

To wrap things up, I asked where he thinks the future of AI policy is heading. In a future where AI detection is the norm, what gets decided by companies, and what gets set by regulators?

When the inevitable regulatory framework comes out, Gillham predicts it’ll mainly focus on the mediums with the most societal harm – photos and videos. With efforts poured into that, he thinks it’s going to come down to companies and private enterprises to decide what they want to police within their organization and how.

——

Are you ready to start implementing these parameters? Jonathan Gillham is the founder of Originality.AI, so you know where to start.

As always, my inbox is open, so feel free to reach out with stories, questions, or whatever else is on your mind.

More from MSP Radio

 

Missed Things? 

How about our latest videos to catch you up? 

The Daily Podcast available as videos

The Future of AI in Data Protection: A Conversation with Alcion CEO Niraj Tolia

The Evolution of Managed Services with Michael George, CEO of Syncro

Embracing Change: Lessons from a graduate’s journey

Responsible Exploit Disclosure: A New Perspective with MacKenzie Brown from Blackpoint Cyber

Engaging with Students for Talent Acquisition: A Guide for Small Businesses with Don Snyder

Driving Business Outcomes with Identity Solutions: Insights from SailPoint and IDMWorks

Want the Daily News?   

All the stories from the daily Business of Tech Podcast are available in the daily digest; stories are available to everyone for the first five days, and to Patreon supporters forever. Catch the audio of the show anytime on Apple Podcasts, Spotify, YouTube, or wherever you find podcasts. Links at businessof.tech

 

Copyright © 2024 MSP Radio, All rights reserved.
