
OSI’s Open Source AI Definition 1.0 Sets New Benchmark for Transparency, Targeting ‘Open in Name Only’ Models

The Open Source Initiative has released version 1.0 of its Open Source AI Definition, a standard that aims to clarify what constitutes open source AI. To qualify, an AI model must provide sufficient information about its design and disclose details about its training data, including the data's provenance and how it was processed. OSI Executive Director Stefano Maffulli emphasized the need for consensus among policymakers and developers, especially as regulators begin to scrutinize the AI space. Measured against the new definition, many AI models labeled as open source, such as those from Meta and Stability AI, do not fully meet the criteria. A study by the Signal Foundation and Carnegie Mellon found that numerous so-called open source models are open in name only, underscoring concerns over access and transparency in AI development. The OSI plans to monitor how the definition is adopted and propose updates as necessary.

Why do we care?

IT leaders and developers integrating AI into products now have a clearer framework for assessing models for genuine openness. Prioritizing models that meet OSI's standard can reduce the reputational risk and legal liability tied to unverified or biased datasets.

MSPs serving clients in sectors sensitive to data provenance—such as finance, healthcare, or government—could benefit from aligning with truly open models to address client concerns about data integrity and regulatory compliance.

For organizations that rely heavily on open source AI, the definition offers a guidepost for filtering out models that lack genuine transparency.