Tenable Cloud AI Risk Report 2025 – Talking Points

KEY TAKEAWAYS:

  • Cloud AI workloads aren’t immune to vulnerabilities: Approximately 70% of cloud AI workloads contain at least one unremediated critical vulnerability. In particular, Tenable Research found CVE-2023-38545, a critical curl vulnerability, in 30% of cloud AI workloads.
  • Jenga-style cloud misconfigurations exist in managed AI services: 77% of organizations have the overprivileged default Compute Engine service account configured in Google Vertex AI Notebooks. This puts every service built on top of that default service account at risk.
  • AI training data is susceptible to data poisoning, threatening to skew model results: 14% of organizations using Amazon Bedrock do not explicitly block public access to at least one AI training bucket, and 5% have at least one overly permissive bucket.
  • Amazon SageMaker notebook instances grant root access by default: As a result, 91% of Amazon SageMaker users have at least one notebook instance that, if compromised, could grant an attacker unauthorized root-level access and the ability to modify every file on it.
  • Azure users lead AI adoption in the cloud: 60% of Microsoft Azure users have configured Azure Cognitive Services; 25% of AWS users have configured Amazon SageMaker and 20% have configured Amazon Bedrock; and 20% of GCP users have configured Google Vertex AI Notebooks.

KEY MESSAGES:

  • The security of AI systems is critical for long-term business success.
    • When we talk about AI usage in the cloud, more than sensitive data is on the line. 
    • If a threat actor manipulates the data or AI model, there can be catastrophic long-term consequences, such as compromised data integrity, compromised security of critical systems and degradation of customer trust.
  • Cloud-based AI has its security pitfalls.
    • Cloud AI services and frameworks are often vulnerable, misconfigured, publicly exposed and overly permissive.
    • Attackers seek to compromise AI models, manipulating their input data and the outputs they produce, exposing sensitive information and causing models to behave in undesirable ways.
      • Overprivileged identities, exposed storage buckets, lack of auditing and lack of encryption are just some of the misconfigurations that open the door to such compromise.
      • Training and testing data is an attractive target for misuse and exploitation, as it may contain real information such as intellectual property, personal information (PI), personally identifiable information (PII) or customer data related to the nature of the AI project.
  • As part of a mature exposure management strategy, security stakeholders must understand these AI risks and take proactive steps to secure their AI tools and resources and prevent them from creating risky exposures to their cloud environment.
    • Top Recommendations:
      • Manage the exposure of your AI systems and data. Take a contextual approach to reveal exposure across your cloud infrastructure, identities, data, workloads and AI tools. Monitor all assets and integrate telemetry and security configurations on-prem and in the cloud. Unified visibility and prioritized actions across the attack surface enable managing risk as environments change and AI threats evolve. 
      • Classify all AI components linked to high-business-impact assets as sensitive. Include AI tools and data in your asset inventory, scanning them constantly and understanding the risk if they are exploited. Even test data can be sensitive; if leaked, it carries the same risk as production data.
      • Prevent unauthorized or overprivileged access to cloud-based AI models/data stores. Reduce excessive permissions and tightly manage cloud identities using robust tools for least privilege and security posture. These tools should also detect and flag Jenga-style misconfigurations in your cloud AI environment (a minimal permissions-audit sketch follows these recommendations).
      • Prioritize vulnerability remediation by impact. Understand which CVEs carry the greatest risk severity in your environment. Less sophisticated cloud security solutions can bombard teams with notifications; advanced tools improve remediation efficiency and effectiveness, and reduce alert fatigue.
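
    As one possible illustration of such a permissions audit in an AWS environment, here is a minimal sketch using boto3 and IAM Access Analyzer. It assumes an account-level analyzer already exists and that credentials are configured; all names are illustrative rather than taken from the report:

      # Sketch: surface active external-access findings from an existing
      # account-level IAM Access Analyzer.
      import boto3

      client = boto3.client("accessanalyzer")

      # Assumes an account-scoped analyzer has already been created.
      analyzer_arn = client.list_analyzers(type="ACCOUNT")["analyzers"][0]["arn"]

      paginator = client.get_paginator("list_findings")
      for page in paginator.paginate(
          analyzerArn=analyzer_arn,
          filter={"status": {"eq": ["ACTIVE"]}},  # only unresolved findings
      ):
          for finding in page["findings"]:
              # 'isPublic' flags resources reachable from outside the account.
              print(finding["resourceType"], finding.get("resource"),
                    "public:", finding.get("isPublic", False))
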
  • Vulnerabilities
    • Over two-thirds of the cloud workloads we analyzed with an AI package installed also contain a critical vulnerability, compared with 50% of cloud workloads without AI packages installed.
      • One possible reason for the higher incidence of critical vulnerabilities is that many AI workloads run on Unix-based systems, which rely on many libraries, including open source ones, for which vulnerabilities are frequently reported.
    • In particular, we observed the critical curl vulnerability (CVE-2023-38545) in 30% of the cloud AI workloads we analyzed.
      • The vulnerability, when exploited, can lead to unintended access to a rogue server. 
  • Misconfigurations and Excessive Permissions
    • More than three-quarters (77%) of the organizations we studied have the overprivileged default Compute Engine service account configured in GCP Vertex AI Notebooks.
      • The Jenga Concept, introduced by Tenable Cloud Research, describes cloud providers’ tendency to build one service on top of another, with any single misconfigured service putting all the services built on top of it at risk.
        • Users are largely unaware of the existence of these behind-the-scenes building blocks, as well as any propagated risk.
    • The vast majority (90.5%) of organizations that have configured Amazon SageMaker have the risky default of root access enabled in at least one notebook instance.
      • Granting root access to Amazon SageMaker notebook instances introduces unnecessary risk by providing users with administrator privileges.
      • Failure to properly adhere to the principle of least privilege significantly increases the risk of unauthorized access, enabling attackers to exfiltrate AI models (a minimal audit sketch follows this section).
    • A small but important portion (5%) of the organizations we studied that have configured Amazon Bedrock have at least one overly permissive training bucket.
      • An attacker with access to the environment through a prior breach or public exposure can easily move laterally, compromise training buckets and steal proprietary AI training data. 
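
    Below is a minimal sketch, assuming configured boto3 credentials, of how an audit for the risky SageMaker RootAccess default might look; the commented creation call and its names are illustrative, not a prescribed implementation:

      # Sketch: flag SageMaker notebook instances with root access enabled.
      import boto3

      sm = boto3.client("sagemaker")

      paginator = sm.get_paginator("list_notebook_instances")
      for page in paginator.paginate():
          for nb in page["NotebookInstances"]:
              detail = sm.describe_notebook_instance(
                  NotebookInstanceName=nb["NotebookInstanceName"]
              )
              if detail.get("RootAccess") == "Enabled":
                  print("Root access enabled:", nb["NotebookInstanceName"])

      # For new instances, override the default explicitly, e.g.:
      # sm.create_notebook_instance(
      #     NotebookInstanceName="example-notebook",               # illustrative
      #     InstanceType="ml.t3.medium",
      #     RoleArn="arn:aws:iam::123456789012:role/ExampleRole",  # illustrative
      #     RootAccess="Disabled",
      # )
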
  • Public Exposure
    • The Amazon S3 Block Public Access feature is designed to prevent unauthorized access and accidental data exposure. However, Tenable Research identified instances in which Amazon Bedrock training buckets lacked this protection, a configuration that increases the risk of unintentional excessive exposure.
      • Among the organizations that have configured Amazon Bedrock training buckets, 14.3% have at least one bucket that does not have Amazon S3 Block Public Access enabled.
    • Such oversights can leave sensitive data vulnerable to tampering and leakage, a risk that is even more concerning for AI training data, as data poisoning is highlighted as a top security issue in the OWASP Top 10 threats for machine learning systems (a minimal hardening sketch follows).
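
    Below is a minimal sketch, assuming configured boto3 credentials and an illustrative bucket name, of enabling Amazon S3 Block Public Access on a training bucket and checking its policy status:

      # Sketch: enable all four S3 Block Public Access settings and verify.
      import boto3
      from botocore.exceptions import ClientError

      s3 = boto3.client("s3")
      bucket = "example-bedrock-training-data"  # illustrative name

      s3.put_public_access_block(
          Bucket=bucket,
          PublicAccessBlockConfiguration={
              "BlockPublicAcls": True,
              "IgnorePublicAcls": True,
              "BlockPublicPolicy": True,
              "RestrictPublicBuckets": True,
          },
      )

      try:
          status = s3.get_bucket_policy_status(Bucket=bucket)["PolicyStatus"]
          print("Bucket policy is public:", status["IsPublic"])
      except ClientError:
          # NoSuchBucketPolicy: no bucket policy is attached at all.
          print("No bucket policy attached.")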

ABOUT THE STUDY:

  • Tenable Cloud Research created this report by analyzing the telemetry gathered from 3.6 million workloads in active production across diverse public cloud and enterprise landscapes. Data was collected between December 2022 and November 2024. 
  • This analysis aims to highlight the current state of security risks in cloud AI development tools and frameworks and in AI services offered by the three major cloud providers — Amazon Web Services, Google Cloud Platform and Microsoft Azure.
