Why traditional firewalls don’t offer adequate protection for AI models

AI is already leaving its impact on numerous industries with many organisations launching AI tools and using AI to enhance their existing services. But as with any advancement in technology, AI can be misused in the wrong hands. Cyber criminals are not only using AI to launch their own attacks, they are also targeting the AI models themselves to produce malicious results.

Protecting AI models means rethinking traditional security measures and some solution providers have launched firewalls designed with protecting AI models, such as LLMs, in mind. Traditional firewalls scan network traffic and block anything that looks malicious based on specific rules. For LLMs, which use natural language, spotting malicious requests can be challenging, and the responses produced by a model can vary greatly. Furthermore, the code and database are closely linked, with responses generated from the data used to train models. As such, traditional firewalls do not offer adequate protection by simply blocking network traffic on the data plane.

Sensitive data is gold dust to cyber criminals, and adversaries can also reverse engineer the results of an AI model to infer details about the original dataset used to train it, leading to data breaches and the theft of intellectual property. Attackers can also use API queries to replicate the AI models themselves.

While traditional layers of security such as encryption, perimeter security, and continuous monitoring are still important, on their own they do not offer sufficient protection against the specific attacks targeting AI models. As such, they must be combined with AI-specific cyber security solutions.

There are several steps organisations can take to ensure their AI models are not left vulnerable. For example, regularly auditing data to ensure it has been sanitised before it is fed to their AI model. Implementing strict access controls and identifying and resolving API vulnerabilities can also prevent misuse. It is also important to evaluate outputs from AI models against sensitive data leakage. Furthermore, AI itself can be harnessed to spot and prevent malicious activity through anomaly detection and adversarial pattern recognition.[MOU3]

Firewalls specifically designed to protect AI models can help prevent potentially harmful prompts and requests from impacting models and producing inaccurate or biased results, or revealing sensitive data.

As AI evolves at a rapid pace, so do adversaries wishing to misuse it. Therefore implementing security measures designed with AI in mind is a must for organisations to continue benefitting from the technology without introducing additional risk.

This article is sponsored by Akamai

Close