
AI-as-a-Service Platform Patches Critical RCE Vulnerability

Hackers Could Exploit Bug on Replicate to Steal Data, Manipulate AI Models
Hackers could have penetrated an artificial intelligence-as-a-service platform and accessed models and data. (Image: Shutterstock)

Attackers could have exploited a now-mitigated critical vulnerability in the Replicate artificial intelligence platform to access private AI models and sensitive data, including proprietary knowledge and personally identifiable information.


Replicate enables companies to share and interact with AI models: customers can browse existing models on the hub or upload their own. The platform also hosts customers' private models, providing the inference infrastructure to run them.
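For context on how customers interact with hosted models, here is a minimal usage sketch assuming Replicate's public Python client; the model identifier and input field are hypothetical placeholders, not a real model.

```python
# Hypothetical usage sketch of Replicate's public Python client
# (pip install replicate; expects the REPLICATE_API_TOKEN
# environment variable to be set).
import replicate

output = replicate.run(
    "some-owner/some-model",           # illustrative model reference;
    input={"prompt": "Hello, world"},  # real refs may pin a version hash
)
print(output)
```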

Exploiting the vulnerability would have allowed unauthorized access to Replicate customers' AI prompts and results, and potentially the ability to alter those responses, Wiz researchers said in a blog post. Tampering of that kind would manipulate AI behavior and compromise the models' decision-making processes, threatening the accuracy and reliability of AI-driven outputs. Automated decisions could no longer be trusted, with "far-reaching consequences" for users who depend on the compromised models.

AI models are essentially code, and running untrusted code in a shared environment can expose the customer data stored in, or accessible through, every system involved.
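As a generic illustration of that point, unrelated to the specific Replicate flaw: many model serialization formats are built on Python's pickle, which runs code during deserialization, so merely loading an untrusted model can execute the uploader's payload.

```python
# Minimal illustration of why loading an untrusted model can mean
# running the uploader's code. Pickle calls __reduce__ during
# deserialization, so "loading the model" executes the payload.
# Generic pickle example, not the Replicate exploit.
import os
import pickle

class MaliciousModel:
    def __reduce__(self):
        # Return a callable and its arguments; pickle invokes it on load.
        return (os.system, ("echo payload ran at model load time",))

payload = pickle.dumps(MaliciousModel())
pickle.loads(payload)  # arbitrary code runs the moment the "model" loads
```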

In Replicate's case, an attacker could achieve remote code execution by uploading a malicious container built in Cog, the proprietary format Replicate uses to containerize models. That is what the Wiz researchers did: they created a malicious Cog container, uploaded it to the platform and used it to execute code on Replicate's infrastructure with root privileges, allowing them to move laterally in the environment and eventually carry out a cross-tenant attack.
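Cog packages a model as ordinary Python that the platform builds into a container and runs on its infrastructure. The hypothetical predictor below, written against Cog's standard interface, shows where that author-controlled code executes; Wiz did not publish the contents of its actual container.

```python
# predict.py: a minimal Cog predictor sketch (hypothetical model).
# Cog builds this into a container; setup() and predict() are plain
# Python that runs on the hosting infrastructure, which is why a
# model upload is effectively a code upload.
from cog import BasePredictor, Input

class Predictor(BasePredictor):
    def setup(self):
        # Runs once when the container starts. Anything placed here,
        # including file access, network calls or shell commands,
        # executes with the container's privileges.
        pass

    def predict(self, prompt: str = Input(description="Text input")) -> str:
        # Runs author-supplied logic on every inference request.
        return prompt.upper()
```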

The flaw also highlights the difficulty of tenant separation in AI-as-a-service solutions, particularly in environments that run AI models from untrusted sources, the researchers said. The potential impact of a cross-tenant attack on AI systems is "devastating, as attackers may be able to access the millions of private AI models and apps stored within AI-as-a-service providers."

The researchers discovered the vulnerability as part of ongoing research in which the security company partners with AI-as-a-service providers to test their platforms and discloses the bugs once they are patched. Last month, Wiz researchers found a vulnerability in the Hugging Face AI platform with an impact similar to the one discovered on Replicate.

The Wiz team suspected the code execution technique fits a broader pattern: organizations run AI models from untrusted sources, even though those models are essentially code that could be malicious. "We used the same technique in our earlier AI security research with Hugging Face and found it was possible to upload a malicious AI model to their managed AI inference service and then facilitate lateral movement within their internal infrastructure," the researchers said.

There is currently no sure-fire way to validate the authenticity of an AI model or scan it for security threats beyond regular code testing.
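Existing defenses are heuristic at best. As a rough sketch of the approach scanners such as picklescan take for pickle-serialized models, the code below flags imports of known-dangerous callables; it inspects only the legacy GLOBAL opcode, so it is illustrative rather than comprehensive.

```python
# Heuristic sketch of pickle-based model scanning, in the spirit of
# tools like picklescan. Illustrative only: real scanners also handle
# STACK_GLOBAL and other opcodes, and heuristics can be bypassed.
import pickletools

SUSPICIOUS = {
    ("os", "system"),
    ("builtins", "eval"),
    ("builtins", "exec"),
    ("subprocess", "Popen"),
}

def scan_pickle(data: bytes) -> list[tuple[str, str]]:
    """Return (module, name) pairs of suspicious imports found in a pickle."""
    hits = []
    for opcode, arg, _pos in pickletools.genops(data):
        # GLOBAL's argument is the "module name" pair as one string.
        if opcode.name == "GLOBAL" and isinstance(arg, str):
            module, _, name = arg.partition(" ")
            if (module, name) in SUSPICIOUS:
                hits.append((module, name))
    return hits
```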


About the Author

Rashmi Ramesh

Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.



