The weekly newsletter of the Business of Tech, giving you new insights into the world of IT service delivery.
Looking for stories from the podcast? Check out the pod itself on Apple Podcasts, Spotify, or daily in your inbox. Stories are available to everyone for five days, and to Patreon supporters forever.
Was this forwarded to you? Join the list!
Applying frameworks to AI
As we all look for innovative ways to package our services and meet the demands of the evolving AI landscape, I’m a firm believer that small business technology consultants will have a lot of success implementing AI frameworks.
Even if you’re not quite ready to share them with clients yet, the time has definitely come to start chewing over the details of these potential offerings. To get a better sense of how to build these out, I welcomed Juliette Powell and Art Kleiner onto a bonus episode of the Business of Tech.
Powell and Kleiner, who taught together at NYU Stern, recently co-authored The AI Dilemma, a new book outlining the many facets of ‘responsible AI.’
Want to know what the experts think about the future of effective AI implementation? From ethical data sourcing to creative friction, here’s what they shared:
Outlining the Seven Principles of Responsible Technology
Powell’s idea for what became The AI Dilemma started with her dissertation. She went back to grad school to learn something new after her mother’s passing, and she found herself focused on the ethics of tech.
When Powell decided to turn her dissertation into a book, she reached out to Kleiner, whom she’d known since his days as editor-in-chief of strategy+business magazine. He immediately saw the value in her idea, and since he was already working with technology book publishers, they teamed up as co-authors quite quickly.
So, what are Powell and Kleiner’s seven principles of responsible technology?
While they were writing the book, the AI Bill of Rights in the US was still just a blueprint. Instead, they saw a lot of value in the Artificial Intelligence Act coming out of the EU, which was much further developed and focused squarely on risk.
So, risk became their first principle. (They considered a ‘do no harm’ value similar to the medical field’s, but figured technologists already care about benefiting people with their work.)
More specifically, the risk principle asks: how can we be intentional about the risks our systems pose to humans in particular?
With that first principle settled, Powell and Kleiner turned to software engineers and people doing technical work with clients to finalize the rest.
Let’s run through some of the ones they shared:
Openness: they believe technologists need to open the closed box, asking questions like, is the system transparent? Can people see how the data was gathered and put together? Can people query the company? What is the company’s policy around revealing trade secrets?
Bias: technologists need to confront and question the bias that exists in any human endeavor, which is now being codified and automated by AI.
Data ownership: when engineers gather data, what liability do they have? Can they identify the source of the data? Whose business created the data? Are there mechanisms in place to track the data’s origins? (See the sketch after this list for what such a mechanism might look like.)
Accountability: anybody involved should be thinking about the outcomes and be held responsible for them.
Organizational structure: what kinds of organizations are best equipped to handle AI? If organizations are too rigid, they won’t be able to roll with the punches and make diverse decisions.
Creative friction: technologists need to bring in people from different disciplines, schools of thought, and cultures to develop technology that works for as many people as possible.
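To make the data ownership questions above concrete, here’s a minimal sketch (in Python) of what a provenance-tracking mechanism might look like. To be clear, this is my own illustration, not something from the book, and every name in it is hypothetical:

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    # Hypothetical record; field names are illustrative, not from the book.
    dataset_name: str      # what the data set is called internally
    source: str            # where the data was gathered (export, vendor, URL)
    owner: str             # whose business created or licensed the data
    consent_basis: str     # e.g. "contract", "opt-in", "public record"
    collected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def passes_ownership_check(self) -> bool:
        # The principle's core test: can you identify the source and the owner?
        return bool(self.source and self.owner)

record = ProvenanceRecord(
    dataset_name="support_tickets_2023",
    source="client helpdesk export",
    owner="Example Client LLC",
    consent_basis="contract",
)
print(record.passes_ownership_check())  # True

Even a record this simple forces the ownership questions to be answered before the data goes anywhere near a model.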
On that last one, they really hammered home the importance of including different perspectives in AI implementation. They cited studies showing that the world’s most productive, highest-earning teams include people from diverse disciplines. Those teams may move slower than less diverse ones, but their outputs are much stronger.
I personally really liked this final principle. I’ve long believed that diverse teams produce better results because they understand their customer base better and challenge each other’s ideas. Why not apply that reasoning to AI too?
The Role of the Technology Partner
I hope we can all agree that anyone implementing AI in a business setting needs to follow all of these principles to do it well. But that’s a lot of responsibility for leadership to handle alone.
I asked Powell and Kleiner whether a technology partner will be needed to make this happen, and they confirmed that folks like us will absolutely be necessary.
They highlighted two areas where we’ll be especially important: analyzing risk (their first principle) and security protocols.
Kleiner brought up an interesting point on that first one: he thinks small businesses often believe they’re good at ‘creative friction’ because they’re used to hammering out arguments in smaller settings. However, that style of decision-making doesn’t carry over to AI, because AI tools are designed to sound convincing, tricking users into believing they’re being forward-thinking. Enter the technology partner, who can keep an eye out for misinformation, hallucinations, and degrading data quality in AI outputs.
As for security protocols, they flagged that technology partners will have to create proactive processes for keeping things safe, especially when using third-party generative AI tools. That ties back to the earlier principles around data transparency.
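As one illustration of what a proactive process like that could look like, here’s a rough Python sketch that scrubs obvious identifiers from a prompt before it ever leaves your environment. The patterns and the send_to_third_party_model function are stand-ins I made up; a real process would cover far more than emails and phone numbers:

import re

# Simple example patterns; a real redaction process would cover much more
# (names, account numbers, API keys, client-specific identifiers).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(prompt: str) -> str:
    prompt = EMAIL.sub("[REDACTED_EMAIL]", prompt)
    return PHONE.sub("[REDACTED_PHONE]", prompt)

def send_to_third_party_model(prompt: str) -> str:
    # Stand-in for whatever vendor API you actually call.
    return f"(model response to: {prompt})"

def safe_generate(prompt: str) -> str:
    # Redact before the prompt ever reaches the third-party tool.
    return send_to_third_party_model(redact(prompt))

print(safe_generate("Summarize the ticket from jane@example.com, callback 555-867-5309."))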
Their input here confirms my own take: the better a business gets at these principles and a responsible AI framework, the greater its competitive advantage, because it will simply execute better.
What Technology Partners Can Do Now
I wanted to know a bit more about the preparation process for working with these principles. What do we and our clients need to do before having the implementation conversation?
The pair brought up ‘psychological safety’ here, which is important for any organization, but even more so when working with AI.
“To have people being able to speak up and talk about the potential negative consequences is really, really important before a project ever gets designed, let alone deployed. I think that having those negative scenarios in mind while you’re coding and while you’re collecting data, creating the model, et cetera, makes all the difference in the world,” said Powell.
Kleiner also brought up self-auditing with questions like:
Where is your data coming from?
What AI systems do you have in place?
How do those interface with your other digital systems?
As for bias, what are the practices and outcomes that you’re not paying attention to?
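Those questions translate naturally into a living inventory you could keep per client. Here’s a hedged Python sketch of what that might look like (the structure and example entries are mine, not Kleiner’s):

# Each field mirrors one of Kleiner's self-audit questions; the example
# system and values are hypothetical.
ai_inventory = [
    {
        "system": "helpdesk ticket triage bot",      # what AI do you have in place?
        "data_sources": ["client helpdesk export"],  # where is the data coming from?
        "interfaces_with": ["PSA", "email"],         # how does it touch other digital systems?
        "bias_blind_spots": [],                      # what practices/outcomes are you ignoring?
    },
]

def unanswered(entry: dict) -> list[str]:
    # Flag any audit question the client still can't answer.
    return [question for question, answer in entry.items() if not answer]

for entry in ai_inventory:
    print(entry["system"], "->", unanswered(entry) or "all questions answered")

An empty field is the tell: it marks an audit question the client can’t answer yet.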
“Ultimately, a lot of this is basic organizational change, but now we’re working with tech people to say, how do you communicate this to people who are not used to talking the way that you do? People who come from different backgrounds making business decisions as opposed to engineering decisions?” said Kleiner.
To wrap things up, they shared four types of logic that technology partners need to think about bringing together:
Corporate logic (profit)
Government logic (protecting citizens, tracking bad actors)
Engineering logic (efficiency)
Social justice logic (building for the human race, not just the techies)
Now that’s one big homework assignment. For more from these two, head to www.KleinerPowell.com and check out their book, The AI Dilemma. They also generously offered that if you can’t afford the book, you can get in touch and they’ll get you a copy.
That’s it for this week! We’ll be back soon with more. In the meantime, as always, I’m available to connect.
More from MSP Radio
Missed Things?
How about our latest videos to catch you up?
The Daily Podcast available as videos
The Future of Distribution: Predictions and Perspectives from Industry Leaders
Exploring Autonomous Project Monitoring and Management with Mike Psenka
Streamlining Business Operations through AI Automation with Uzair Ahmed
Insights into Product Management and Revenue Growth with Jessica Nelson Kohel
Exploring the Role of Transformational Leadership and Small Bets in Tech with Jen Swanson
Want the Daily News?
All the stories from the daily Business of Tech podcast are available in the daily digest; stories are available to everyone for the first five days, and to Patreon supporters forever. Catch the audio of the show anytime on Apple Podcasts, Spotify, YouTube, or wherever you find podcasts. Links at businessof.tech
Copyright © 2024 MSP Radio, All rights reserved.