LinkedIn’s new report, “AI at Work,” analyzes trends in AI and highlights important shifts, such as a 21x increase in job postings mentioning AI and a 47% rise in US executives who believe AI enhances productivity. LinkedIn predicts that generative AI could make workers more productive by taking over tasks that rely less on people skills. The report suggests that AI will accelerate the change in the skills jobs require rather than replace jobs outright, and that people skills will become even more valuable. The top five jobs predicted to be affected by AI are software engineer, customer service representative, salesperson, cashier, and teacher.
A new report from the IBM Institute for Business Value found that workforce reskilling is a major challenge for businesses deploying generative AI. More than half of the executives surveyed estimate that two in five of their workers will need to reskill over the next three years because of AI and automation, yet 87% expect generative AI to augment roles rather than replace them. The study also found that organizations that successfully reskill workers for technology-driven job changes report a revenue growth rate premium averaging 15%, and those that focus on AI see a 36% higher revenue growth rate than their peers.
A survey by AMD found that three out of four IT leaders are optimistic about the potential benefits of AI in the workplace, including greater employee efficiency and automated cybersecurity. However, only half of respondents believe their organizations are sufficiently equipped to adopt AI, and more than half have not yet experimented with the latest natural language processing technology.
In more bad uses of AI, the Mason City Community School District in Iowa is using ChatGPT to decide which books to pull from school libraries to comply with a Republican-backed state law. For each commonly challenged title, administrators ask ChatGPT, “Does [book] contain a description or depiction of a sex act?” The tool’s answers, however, can vary and even contradict one another depending on how the question is phrased and how the software is queried.
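To make that prompt sensitivity concrete, here is a minimal sketch, not the district’s actual workflow, of asking the same yes/no question two ways through the OpenAI Python SDK. The model name, example title, and alternate wording are all illustrative assumptions:

```python
# Minimal sketch of prompt sensitivity -- NOT the district's actual process.
# Assumes the OpenAI Python SDK (v1+) with OPENAI_API_KEY set in the environment;
# the model name, example title, and second wording are illustrative.
from openai import OpenAI

client = OpenAI()

book = "The Handmaid's Tale"  # hypothetical example, not from the district's list
prompts = [
    f"Does {book} contain a description or depiction of a sex act? Answer yes or no.",
    f"Would a librarian say {book} depicts a sex act? Answer yes or no.",
]

for prompt in prompts:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name for illustration
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt, "->", resp.choices[0].message.content)
```

Run it a few times and the answers can flip between wordings, and even between runs of the same wording, which is exactly the inconsistency the story describes.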
A recent paper, covered extensively by media outlets including the Washington Post, claimed that ChatGPT expresses liberal opinions. On closer inspection, the study did not actually test ChatGPT (it tested an older API model) and used an artificially constrained prompt. When asked directly about political opinions, ChatGPT declined to opine in most cases. The concern about political bias in chatbots is real, but the behavior is complex and sensitive to the prompt and to user interaction.
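Here is a hedged sketch of the methodological issue: a forced-choice prompt compels a stance the model would otherwise decline to take. The model name, survey statement, and wording below are illustrative assumptions, not the paper’s actual instrument:

```python
# Sketch of constrained vs. open-ended prompting -- assumptions, not the paper's method.
# Assumes the OpenAI Python SDK (v1+) with OPENAI_API_KEY set; model name is illustrative.
from openai import OpenAI

client = OpenAI()

statement = "Taxes on the wealthy should be increased."  # hypothetical survey item

constrained = (
    f"Do you agree or disagree with this statement: '{statement}'? "
    "You must answer with exactly one word: Agree or Disagree."
)
open_ended = f"What is your view on this statement: '{statement}'?"

for label, prompt in [("constrained", constrained), ("open-ended", open_ended)]:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"[{label}] {resp.choices[0].message.content}\n")
```

The constrained version manufactures an opinion by construction, while the open-ended version typically gets a refusal or a both-sides summary, which is why the headline finding didn’t hold up against ChatGPT as users actually experience it.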
Why do we care?
Bad uses of AI could be a podcast of its own, but it’s worth highlighting where unthinkingly trusting the output will cause trouble. These last two stories show how much using the technology well will matter. Prompt engineering is a real thing: differences in how AI is queried change what it returns. And everything about these surveys points to training, the skills gap, and thus the opportunity. Some of this will get handled by effective product use… which is still a training offering.