Press "Enter" to skip to content

The Struggle of Startups and Companies with Generative AI: Not the Panacea Once Imagined

Well, I’m not alone in concluding that we’ve moved quickly through the hype cycle. Per reporting in Axios, generative AI, particularly chatbots like ChatGPT, faces criticism and challenges, putting the technology in the “trough of disillusionment.” Issues include embarrassing errors, concerns about intellectual property infringement, cost, environmental impact, and more. Some startups in the field are struggling, and companies find that generative AI is not the panacea they once believed. Here’s a great quote.

Gary Marcus, a scientist who penned a blog post last year titled “What if generative AI turned out to be a dud?” tells Axios that, outside of a few areas such as coding, companies have found generative AI isn’t the panacea they once imagined.

“Almost everybody seemed to come back with a report like, ‘This is super cool, but I can’t actually get it to work reliably enough to roll out to our customers,'” Marcus said.

That said, per data from the Pew Research Center, the use of ChatGPT among Americans is increasing, with 23% of U.S. adults reporting that they have used it. When it comes to information about the 2024 U.S. presidential election, however, the public is skeptical: only 2% express a great deal or quite a bit of trust in what the chatbot tells them. The survey also reveals differences in usage based on age and education, as well as concerns about misinformation from chatbots in the context of elections.

And while I’m on the subject of the company, TechCrunch reports that OpenAI’s GPT Store, the marketplace for custom chatbots powered by OpenAI’s generative AI models, is facing issues with spam and copyright infringement. The store is flooded with GPTs that generate art in the style of popular franchises, bypass AI content detection tools, and promote academic dishonesty. There are also concerns about impersonation and attempts to jailbreak OpenAI’s models. The rapid growth of the GPT Store has come at the expense of quality and adherence to OpenAI’s terms.

The National Telecommunications and Information Administration (NTIA) has released a report calling for independent audits of “high-risk” uses of artificial intelligence (AI). The report recommends improved transparency, independent evaluations, and consequences for imposing new risks. The NTIA believes accountability policies will boost public and marketplace confidence in AI systems. Assistant Secretary of Commerce Alan Davidson emphasized the need for AI accountability and an auditing ecosystem, drawing inspiration from financial auditing practices.

Why do we care?

It’s shocking that technology isn’t the solution to every problem! Shocking, I say! We care because navigating this is the value of an MSP and IT solution provider. What you are looking for is to apply technology to your customer base in a way that makes them more money. It’s that simple, and entirely tricky to do. Which application correctly balances the risks of precisely what was named: errors, intellectual property infringement, cost, environmental impact, and more? I’ll keep reporting on good use cases to give examples.