
G7 Nations Agree on Risk-Based AI Regulation while ChatGPT Returns to Italy After Ban

The Group of Seven (G7) nations agreed over the weekend to take a “risk-based” approach to regulating AI. Such regulation should also “preserve an open and enabling environment” for developing AI technologies and be grounded in democratic values. Meanwhile, the EU has reached an early agreement on the copyright provisions of its AI rules: companies deploying generative AI tools, such as ChatGPT, must disclose any copyrighted material used to develop their systems. Under the proposals, AI tools will be classified by perceived risk level: minimal to limited, high, and unacceptable. Areas of concern include biometric surveillance, the spread of misinformation, and discriminatory language.

While high-risk tools will not be banned, those using them must be highly transparent in their operations.

And ChatGPT is back in Italy after a brief ban. OpenAI said it had “addressed or clarified” the issues raised by the Italian Data Protection Authority (the GPDP) in late March. OpenAI has set up a new form EU users can submit to have personal data removed under Europe’s General Data Protection Regulation (GDPR). It also says a new tool will verify users’ ages at signup in Italy, and it has published a help center article outlining how OpenAI and ChatGPT collect personal information, including how to contact its GDPR-mandated data protection officer.

Turning to last week’s earnings calls: Amazon said it would shift spending from its retail business to its Amazon Web Services division, in part to power the kind of artificial intelligence behind chatbots like ChatGPT, according to The Information. Also of note, CEO Andy Jassy addressed the perception that Alexa has fallen behind, saying the company is working on a larger language model, though he gave no timeline.

Speaking of ChatGPT, OpenAI has added the ability to switch off chat history in the chatbot and won’t use those conversations to train its models. Additionally, the Wall Street Journal profiles pilot programs at three health systems testing whether the AI model can help doctors respond to patients.

According to a Forbes Advisor survey of more than 2,000 Americans in April, 77% were concerned AI would cause job loss within the next 12 months, with 44% stating they were “very concerned” and 33% “somewhat concerned.”

Similarly, 75% of respondents are concerned about AI-induced misinformation, even as 65% said they plan to use ChatGPT—an AI language chatbot—instead of traditional search engines like Google when searching online for information.

Why do we care?

Copyright and privacy law will be critical to the use of AI-generated content. How will the data be used? Who owns it? The balance providers are trying to strike right now is helping customers experiment with these tools while still ensuring the business is protected. It’s a balancing act – then again, you’re not paid if it’s easy.