Press "Enter" to skip to content

KnowBe4, Solix, RagaAI, Salesforce and Microsoft moves

Let’s run through some AI announcements.

KnowBe4 has launched AIDA, its AI-native platform for strengthening security defenses and protecting users from AI-driven cyberattacks. AIDA automates the selection of security awareness training, delivering personalized learning experiences to each user. Unveiled at KnowBe4’s KB4-CON conference, the technology is expected to significantly improve defenses against a widening range of attacks as the company integrates AI further into its operations.

Solix is developing AI assistants to help customers interact with their active and archived enterprise application data. Founded 22 years ago, the company focuses on enterprise information archiving, ingesting and storing older documents and data from ERP and CRM applications as well as mainframe systems. Its customers are global Fortune 2000 companies in banking, insurance, and pharmaceuticals. The company is profitable and has seen significant revenue growth.

RagaAI has open-sourced its AI testing platform, letting developers road-test their products and services across areas such as language models, natural language processing, and computer vision. The platform aims to prevent catastrophic AI failures by helping developers identify and correct errors before they cause significant problems.
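
As a rough sketch of the idea, and not RagaAI’s actual API, here is what automated pre-deployment testing of a language model can look like. The model call and the checks below are hypothetical stand-ins; the pattern is generate, check, and gate on the result.

    # Generic illustration of pre-deployment AI testing.
    # Hypothetical throughout: fake_model stands in for a real model call,
    # and the checks are simple examples of the kinds of tests a platform runs.

    def fake_model(prompt: str) -> str:
        """Stand-in for a real language model call."""
        return "Paris is the capital of France."

    def check_contains(output: str, required: str) -> bool:
        """Factuality-style check: the output must mention a required term."""
        return required.lower() in output.lower()

    def check_max_length(output: str, limit: int = 200) -> bool:
        """Guardrail-style check: the output must stay within a length budget."""
        return len(output) <= limit

    test_cases = [
        {"prompt": "What is the capital of France?", "required": "Paris"},
        {"prompt": "Name the French capital.", "required": "Paris"},
    ]

    for case in test_cases:
        output = fake_model(case["prompt"])
        results = {
            "contains_required": check_contains(output, case["required"]),
            "within_length": check_max_length(output),
        }
        status = "PASS" if all(results.values()) else "FAIL"
        print(f"{status}: {case['prompt']!r} -> {results}")

Real test suites layer many more checks, for bias, hallucination, and robustness, on this same loop; the value is catching a failing output before it ships rather than after.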

Salesforce has introduced Slack AI, a set of generative AI features designed to boost productivity inside Slack. The lineup includes AI-powered search, channel recaps, and thread summaries, making it easier for users to find and digest the information they need.

Microsoft has introduced new AI capabilities in Teams Toolkit for Visual Studio, which lets developers build, debug, and publish apps for Microsoft Teams. The latest version includes an AI bot template, the Teams Bot test tool, an Adaptive Card previewer, CodeLens support in lifecycle steps, and direct access to documentation.

Microsoft has added new guardrails to its Copilot AI tool after a staff AI engineer raised concerns about the violent images it could generate. Certain prompts and terms, such as “pro-choice,” “pro-life,” and “four-twenty,” are now blocked, and the tool refuses to generate images of teenagers or kids playing assassins with assault rifles. Microsoft says it continuously monitors and strengthens safety filters to prevent misuse of the system.

Why do we care?

Sometimes it’s just about keeping track of the options. That’s this. AI-enhanced security, AI assistants, AI testing tools, AI productivity tools: a bit of everything here.

Increasingly, your role will be filtering and applying the right tools to the right job.