Eliminate AI Model Failure with Free Risk Database

Robust Intelligence has launched a new AI Risk Database that provides open source model validation to help organizations better understand the security, ethical, and operational risks, and overall health, of third-party AI models in a centralized location.

Introduction of AI Risk Database

The AI Risk Database is a free, community-supported resource containing comprehensive test results and corresponding risk scores for over 170,000 models. It is aimed at assessing AI supply chain risk across a growing number of public model repositories and at mitigating model failure, since companies using open source models are exposed to considerable risk.

The Need to Evaluate AI Supply Chain Risk

With the availability of public model repositories such as Hugging Face, PyTorch Hub, TensorFlow Hub, NVIDIA NGC AI software hub, spaCy, Papers with Code, and others, sophisticated models have become widely accessible. However, Robust Intelligence has found that it’s incredibly difficult to assess any given model for security, ethical, and operational risks. Statements about model robustness and performance that are documented in public repos may be unsubstantiated.

Therefore, it is crucial that such models are thoroughly evaluated for risk before use. As with any public software source, tools that collate risk are essential to validate the security and robustness of open-source models.

Initial Findings and the Importance of the AI Risk Database

The AI Risk Database promotes validation of open source models that is independent of any public model repository. Its risk scores are derived from hundreds of automated tests and dynamic analysis of the models, supplemented by model vulnerability reports. Initial analysis revealed vulnerabilities of varying severity in tens of thousands of open-source models.
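To make the idea of an automated robustness test concrete, here is a minimal toy sketch (not Robust Intelligence's actual test suite): a trivial keyword-based classifier stands in for an NLP model, and a typo-style character swap stands in for an adversarial text transformation. A real harness would run many such transformations against a real model and aggregate pass/fail rates into a score.

```python
import random

# Toy keyword-based sentiment classifier standing in for a real NLP model.
def classify(text: str) -> str:
    positive = {"good", "great", "excellent", "love"}
    tokens = set(text.lower().split())
    return "positive" if tokens & positive else "negative"

# A simple adversarial text transformation: swap two adjacent characters
# inside each word longer than three characters (a typo-style perturbation).
def perturb(text: str, seed: int = 0) -> str:
    rng = random.Random(seed)
    words = []
    for w in text.split():
        if len(w) > 3:
            i = rng.randrange(len(w) - 1)
            w = w[:i] + w[i + 1] + w[i] + w[i + 2:]
        words.append(w)
    return " ".join(words)

# Robustness check: does the prediction survive the transformation?
original = "I love this great product"
attacked = perturb(original)
print(classify(original), classify(attacked))
```

Here the prediction flips under a tiny perturbation, which is exactly the kind of failure an automated test suite is designed to surface at scale.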

For instance, the analysis shows that in 50% of public image classifier models tested, 40% of adversarial attacks fail, and for natural language processing (NLP) models, in 50% of public models tested, 14% of adversarial text transformations fail. The analysis also revealed dozens of repositories with PyTorch, TensorFlow, pickle, or YAML resource files that include unsafe or vulnerable dependencies. At minimum, these can expose the user (and by extension their organization) to known vulnerabilities, and in some cases they enable actions including arbitrary code execution.
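The arbitrary code execution risk from pickle-based model files is easy to demonstrate. The sketch below shows the mechanism: Python's pickle format lets an object's `__reduce__` method specify a function to call during unpickling, so merely loading an untrusted file runs attacker-chosen code. A harmless `eval` stands in here for what could be `os.system` in a real attack.

```python
import io
import pickle

# A malicious object whose __reduce__ tells pickle to call an arbitrary
# function on unpickling -- here a harmless eval, but it could be os.system.
class Malicious:
    def __reduce__(self):
        # pickle will execute eval("6 * 7") the moment this payload is loaded
        return (eval, ("6 * 7",))

payload = pickle.dumps(Malicious())

# The "victim" only calls pickle.load on an untrusted model file...
result = pickle.load(io.BytesIO(payload))

# ...and arbitrary code has already run: result is the eval output,
# not a Malicious instance.
print(result)  # 42
```

This is why model formats that store only tensors rather than executable objects are generally considered a safer choice for distributing weights.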

Conclusion

The AI Risk Database provided by Robust Intelligence is an essential resource for organizations that are experimenting with, or deploying, models from public model repositories. It offers a centralized location covering the majority of public model repositories, enabling AI developers to easily evaluate and investigate models before use, and AI researchers to formally report risks they identify. Since it is a community-supported resource, Robust Intelligence encourages contributions or feedback to airdb@robustintelligence.com.

Alex
