Generative AI is starting to get legal and regulatory attention. Reuters reports that the US Copyright Office has determined that images in a graphic novel created with the artificial intelligence system Midjourney should not have been granted copyright protection, and the office will not allow copyright registrations for generated content. Without new laws specific to the space, the expectation is that these questions will be settled under existing law.
There’s also a blog analysis by University of North Carolina at Chapel Hill scholar Matt Perault arguing that ChatGPT won’t be protected by Section 230 the way social media platforms are – a prediction, of course, rather than law or rule.
The FTC has weighed in too, with a notable blog post. “Marketers should know that — for FTC enforcement purposes — false or unsubstantiated claims about a product’s efficacy are our bread and butter,” the agency said. From the Washington Post: the agency laid out four potential abuses it plans to track: making exaggerated claims about what a product can do, making unsubstantiated promises about how AI makes a product better and perhaps costlier, failing to foresee and mitigate risks posed by the tool, and making baseless claims about the degree to which a company is actually using AI.
Wall Street banks are pushing back too. Quoting Bloomberg: “Bank of America Corp., Citigroup Inc., Deutsche Bank AG, and Wells Fargo & Co. are among lenders that have recently banned usage of the new tool, with Bank of America telling employees that ChatGPT and OpenAI are prohibited from business use, according to people with knowledge of the matter.”
In a new memo, Walmart has warned employees not to put confidential information into ChatGPT.
With all the buzz around ChatGPT, Bing, and Bard, don’t sleep on Amazon. Politico notes that Amazon Science has released a large language model on GitHub that outperformed GPT-3.5 by 16% on a standard set of science questions and answers. Let’s not forget Meta, which announced its LLaMA model, or The Information, which reports that Elon Musk is exploring forming a new lab to work on an alternative to ChatGPT. Zoom is in the mix as well, with its Zoom IQ virtual agent, translation, captioning, and meeting summary tools, all discussed in its earnings call this week.
And important to those in the IT services space, Microsoft is adding the AI-powered version of Bing right into Windows 11, with full availability expected in the March 2023 monthly security update release. Bing Chat is also changing: users can now choose a response style, such as Precise, Balanced, or Creative.
I mentioned the up-and-coming Prompt Engineer role last week – I spotted pieces in both Axios and the Washington Post that agree with me. There’s also an emerging space of companies supporting the role, including prompt search engines and marketplaces for prompts.
And while it’s common for those in tech to be excited about the technology, here’s the broader public sentiment data, via the Washington Post.
A Monmouth University poll released last week found that only 9 percent of Americans believed that computers with artificial intelligence would do more good than harm to society.
When the same question was asked in a 1987 poll, a higher share of respondents – about one in five – said AI would do more good than harm, Monmouth said.
The Pew Research Center survey asked people different questions but found similar doubts about AI. Just 15 percent of respondents said they were more excited than concerned about the increasing use of AI in daily life.
That skepticism is playing out, too – from Vice. New York State’s Comptroller Tom DiNapoli released a report last week calling out NYC agencies for their lack of ethical and legal guardrails when it comes to using machine learning programs, which include algorithmic modeling, facial recognition, and other software used to monitor members of the public. The report states, “NYC does not have an effective AI governance framework. While agencies are required to report certain types of AI use annually, there are no rules or guidance on the actual use of AI.”
I do want to cite a positive use case today too – sales. Specifically car shopping. Axios reports on how Fiat and Kia are both exploring virtual showrooms, with common questions handled in pre-recorded video answers, and complex questions referred to humans… who can leverage ChatGPT to find the right answer.
Why do we care?
Putting these stories together provides insight into the services opportunity: managing regulatory, legal, and intellectual property risk, as well as the actual use of the tools themselves. How will the use of AI tools affect the intellectual property created? How do you mitigate the risks? And how do you use the tools responsibly and effectively?
The FTC guidance is real and already here, and now copyright guidance says these aren’t protected works. And this isn’t a slam dunk on consumer confidence either – we’re moving through the hype cycle fast, but my belief is the ride will also be inconsistent, highs and lows all intermixed.
This is the space to consider as a services firm – and it’s moving pretty fast already. You can move at whatever speed you want. Standing still is the speed that will cause you to be washed away.