CVE Allocation: Why AI Models Should Be Excluded




James Ding
Sep 26, 2025 19:58

Explore why Common Vulnerabilities and Exposures (CVE) should focus on frameworks and applications rather than AI models, according to NVIDIA’s insights.





The Common Vulnerabilities and Exposures (CVE) system, a globally recognized standard for identifying security flaws in software, is under scrutiny concerning its application to AI models. According to NVIDIA, the CVE system should primarily focus on frameworks and applications rather than individual AI models.

Understanding the CVE System

The CVE system, maintained by MITRE and supported by CISA, assigns unique identifiers and descriptions to vulnerabilities, facilitating clear communication among developers, vendors, and security professionals. However, as AI models become integral to enterprise systems, the question arises: should CVEs also cover AI models?

AI Models and Their Unique Challenges

AI models introduce failure modes such as adversarial prompts, poisoned training data, and data leakage. These resemble vulnerabilities, but they rarely meet the CVE definition, which requires a weakness that, when exploited, compromises confidentiality, integrity, or availability. NVIDIA argues that such weaknesses typically reside in the frameworks and applications that serve the models, not in the models themselves.

Categories of Proposed AI Model CVEs

Proposed CVEs for AI models generally fall into three categories:

  1. Application or framework vulnerabilities: Issues within the software that encapsulates or serves the model, such as insecure session handling.
  2. Supply chain issues: Risks like tampered weights or poisoned datasets, better managed by supply chain security tools.
  3. Statistical behaviors of models: Features such as data memorization or bias, which do not constitute vulnerabilities under the CVE framework.

AI Models and CVE Criteria

AI models, due to their probabilistic nature, exhibit behaviors that can be mistaken for vulnerabilities. However, these are often typical inference outcomes exploited in unsafe application contexts. For a CVE to be applicable, a model must fail its intended function in a way that breaches security, which is seldom the case.

The Role of Frameworks and Applications

Vulnerabilities often originate in the surrounding software environment rather than in the model itself. For example, adversarial attacks manipulate inputs to produce misclassifications; the exploitable weakness is the application's failure to detect or reject such queries, not a flaw in the model. Similarly, data leakage stems from overfitting and memorization and calls for system-level mitigations.
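
To make that division of responsibility concrete, here is a minimal, hypothetical Python sketch of application-layer guards around model inference. The stub model, input limit, and blocked patterns are illustrative assumptions, not taken from NVIDIA's post; the point is that these mitigations live in the serving code, which is also where a CVE-worthy flaw would be filed.

```python
# Minimal sketch (hypothetical names): application-layer guards around model
# inference. The checks live in the serving code, not in the model weights.
import re


def load_model():
    # Stand-in for a real model; returns a trivial classifier for illustration.
    return lambda text: "positive" if "good" in text.lower() else "negative"


MAX_INPUT_CHARS = 2_000
BLOCKED_PATTERNS = [re.compile(r"ignore (all|previous) instructions", re.I)]


def guarded_predict(model, user_input: str) -> str:
    # Application-level input validation: reject oversized or suspicious
    # queries before they ever reach the model.
    if len(user_input) > MAX_INPUT_CHARS:
        raise ValueError("input too long")
    if any(p.search(user_input) for p in BLOCKED_PATTERNS):
        raise ValueError("input rejected by policy filter")
    return model(user_input)


if __name__ == "__main__":
    model = load_model()
    print(guarded_predict(model, "This product is good."))
```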

When CVEs Might Apply to AI Models

One exception where CVEs could be relevant is when poisoned training data results in a backdoored model. In such cases, the model itself is compromised during training. However, even these scenarios might be better addressed through supply chain integrity measures.
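
As a rough illustration of such a supply chain control, the sketch below pins a model weight file to a known SHA-256 digest and refuses to proceed on a mismatch. The file name and the demo in the main block are placeholders, not real artifacts.

```python
# Minimal sketch: pin model weights to a known SHA-256 digest before loading.
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        # Stream in chunks so large weight files need not fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_weights(path: Path, expected_sha256: str) -> None:
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise RuntimeError(f"{path} failed integrity check: {actual}")


if __name__ == "__main__":
    # Demonstrate with a throwaway file standing in for downloaded weights.
    weights = Path("demo_weights.bin")
    weights.write_bytes(b"pretend these are model weights")
    verify_weights(weights, sha256_of(weights))  # passes; a mismatch would raise
```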

Conclusion

Ultimately, NVIDIA advocates applying CVEs to frameworks and applications, where they can drive meaningful remediation. AI security is better served by strengthening supply chain assurance, access controls, and monitoring than by labeling every statistical anomaly in a model as a vulnerability.

For further insights, you can visit the original source on NVIDIA’s blog.

Image source: Shutterstock




