
The rapid adoption of artificial intelligence (AI) is intensifying cloud security risks, making it vital for businesses to have strategies to protect AI in the cloud, says an Israeli cybersecurity firm.
'AI is fast becoming a major challenge for many organisations, with its use accelerating significantly,' Charles Kennaway, a major account executive for Southeast Asia at Wiz Inc, an Israeli-American cloud security firm, told the first Thailand-Israel Cybersecurity Workshop, held recently in Bangkok by the Israeli Embassy.
Cloud-based managed AI services, such as Amazon SageMaker, Azure AI, and GCP Vertex AI, are found in more than 70% of cloud environments, reflecting an exceptionally fast adoption rate.
This surge in AI adoption mirrors the early days of cloud computing: usage is racing ahead while governance and security processes lag behind, said Mr Kennaway.
The company's 'State of AI in the Cloud in 2025' report found AI is now central to cloud operations: the share of cloud environments using managed AI services rose from 70% to 74%, while 85% of organisations use either managed or self-hosted AI services or tools.
'New players like DeepSeek have seen explosive growth due to cost-effectiveness and rapid innovation, but this 'AI gold rush' underscores that innovation should not compromise security,' he said.
According to Wiz, the growing use of generative AI (GenAI) is introducing unique cybersecurity threats, including data poisoning, where training data is manipulated to skew AI outputs; model theft; adversarial inputs that mislead AI; and model inversion attacks that extract sensitive training data.
Supply chain vulnerabilities also pose risks, particularly through third-party dependencies.
An emerging concern is the rise of 'vibe coding', where users with little coding experience rely on AI to generate code. While convenient, this can lead to insecure applications, as critical practices like secrets management are often overlooked by non-experts.
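As an illustration of the secrets-management gap described above, the sketch below contrasts the hard-coded credential pattern often seen in AI-generated code with the basic alternative of reading secrets from the environment. The variable name `EXAMPLE_API_KEY` and the helper function are hypothetical, not drawn from the article.

```python
import os

# Hypothetical example of the risk: AI-generated code frequently embeds
# credentials directly in source, e.g.
#     API_KEY = "sk-live-abc123"   # insecure: secret ends up in version control
# A minimal secrets-management habit is to read credentials from the
# environment (or a dedicated secrets manager) and fail fast when missing.

def get_api_key(var_name: str = "EXAMPLE_API_KEY") -> str:
    """Fetch a credential from the environment instead of embedding it in code."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set; provide it via your deployment's secret store"
        )
    return key
```

Reading secrets at runtime keeps them out of repositories and AI chat transcripts, which is precisely the practice non-expert "vibe coders" tend to skip.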
The rise of AI also adds complexity to cloud security, especially in multi-cloud environments, noted the firm.
Meanwhile, decentralised ownership limits visibility, making it harder for security teams to monitor threats.
Sensitive data is highly exposed: publicly accessible data can be breached in less than eight hours, noted Wiz. Teams also face alert overload and burnout, making it difficult to prioritise real risks effectively.
Fighting 'shadow AI'
Mr Kennaway said that to address GenAI security challenges, organisations must adopt a proactive and agile strategy. That starts with eliminating 'shadow AI': gaining visibility into all AI usage, preventing unauthorised tools, educating users, and tracking AI assets through an AI Bill of Materials (AI BOM).
AI BOM is a complete inventory of all the assets in an organisation's AI ecosystem, documenting datasets, models, software and hardware across the entire life cycle of AI systems. These details provide the visibility that organisations need to secure AI systems.
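To make the idea concrete, the sketch below shows what one AI BOM inventory record might look like. The field names and values are illustrative assumptions, not a standard schema from the article or from Wiz.

```python
from dataclasses import dataclass, field, asdict

# Illustrative sketch only: the article describes an AI BOM as an inventory
# of datasets, models, software and hardware across an AI system's life
# cycle. Every field name below is an assumption for demonstration.

@dataclass
class AIBomEntry:
    model_name: str                                   # the deployed AI asset
    model_version: str
    base_model: str                                   # upstream model dependency
    training_datasets: list = field(default_factory=list)
    frameworks: list = field(default_factory=list)    # software dependencies
    hardware: str = ""                                # where the model runs
    owner: str = ""                                   # team accountable for it

entry = AIBomEntry(
    model_name="support-chatbot",        # hypothetical internal asset
    model_version="1.2.0",
    base_model="example-base-llm",
    training_datasets=["support-tickets-2024"],
    frameworks=["pytorch==2.3"],
    hardware="cloud GPU instances",
    owner="platform-security",
)

# A queryable inventory is what gives security teams the visibility the
# article describes: which models exist, what data trained them, who owns them.
inventory = [asdict(entry)]
```

Even a simple structured inventory like this lets a security team answer questions such as "which deployed models were trained on customer data?" without hunting through individual projects.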
Shadow AI refers to the unauthorised use of AI tools and applications within an organisation without the knowledge or approval of the IT department.
He said enterprises should also ensure sensitive data is not exposed via unsecured AI tools by using an AI and data security posture management approach to monitor and secure data continuously.
Firms should use the built-in safety features of large language models, such as content filtering and abuse detection, to reduce risk at the source, said Mr Kennaway.
Moreover, enterprises should detect and remove attack paths by conducting continuous vulnerability scans and audits to identify and remediate risks proactively. In addition, they should create a dedicated AI security response team within existing security operations for quick issue containment.
Provided by SyndiGate Media Inc. (Syndigate.info).