
5 things you need to know to build a firewall for AI

by Brian Hutchins, May 13, 2024

Everywhere we look, organizations are harnessing the power of large language models (LLMs) to develop cutting-edge AI applications like chatbots, virtual assistants, and more. Yet even amidst the fast pace of innovation, it’s crucial for security teams and developers to take a moment to ensure that proper safeguards are in place to protect company and customer data.

What are the risks of AI model consumption?

Perhaps you’re using public GenAI services from OpenAI or Google, or you’re hosting your own fine-tuned open model like Llama. Either way, OWASP has named sensitive data exposure as a considerable risk of AI consumption, in part due to the following risks:

  • Human error may include instances where a customer or employee accidentally submits sensitive data like a credit card number or API key as part of their prompt. That sensitive data would then be transmitted to the LLM provider or captured in data pipelines, where it could be stored and used in a model training dataset.
  • Malicious intent could span any number of attacks, from prompt injection to data poisoning to jailbreaking and beyond.

Considering these risks, it’s vital to make sure that AI model inputs and outputs are free of sensitive data like PII, PCI, PHI, secrets, and IP.

Learn more about how you can secure your AI.

What’s so difficult about detecting sensitive data?

At the enterprise scale, it can be difficult to scan AI inputs and outputs without a huge volume of false positive alerts. With this in mind, it’s important to deploy a scanning tool with high recall and precision. Think of these two metrics in the context of a “needle in the haystack” problem:

  • Recall: “There are X number of needles in the haystack. How many of them did you find?”
  • Precision: “How many of the detected needles are actually needles?”

Using a data scanning tool with high recall and precision, security teams can discover granular instances of sensitive data and remediate them more quickly, since they aren’t inundated with false positive alerts. On the flip side, low recall and precision lead to confusion, frustration, and slow time to remediation, leaving organizations vulnerable to data leaks, data breaches, and noncompliance.
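
To make these metrics concrete, here’s how precision and recall fall out of a detector’s raw counts. This is a minimal Python sketch; the counts are invented for the example:

```python
# Precision and recall from a detector's confusion counts.
true_positives = 90   # real needles the scanner found
false_positives = 10  # hay it mistook for needles
false_negatives = 30  # needles it missed

# Precision: of everything flagged, how much was actually sensitive?
precision = true_positives / (true_positives + false_positives)  # 0.90

# Recall: of all the sensitive data present, how much was flagged?
recall = true_positives / (true_positives + false_negatives)     # 0.75

print(f"precision={precision:.2f}, recall={recall:.2f}")
```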

Many organizations might first turn to regexes and open-source models to build their own data scanning solutions. However, these types of solutions have notoriously low precision, at just 6%-30%. The natural next step is an LLM, but while LLMs are great at text generation, they aren’t so great at named entity recognition (NER), which is a crucial aspect of identifying sensitive data.
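
To see why regexes struggle, consider a naive credit card pattern. This is a toy example, not a real detector, but it illustrates the failure mode: any 16-digit number matches, sensitive or not.

```python
import re

# Naive "credit card" detector: any run of 16 digits.
CARD_PATTERN = re.compile(r"\b\d{16}\b")

prompts = [
    "Please charge my card 4111111111111111.",     # a well-known test card number
    "My order confirmation is 8273645192837465.",  # just an order ID
]

for text in prompts:
    if CARD_PATTERN.search(text):
        print(f"FLAGGED: {text}")  # both lines get flagged
```

Adding a Luhn checksum test filters out some of the noise, but without understanding what a number is in context, the detector can’t tell a card from an order ID. That contextual judgment is exactly what NER provides.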

In short, organizations need a way to achieve accurate entity recognition while also cutting down on their security teams’ alert workload. The solution? Build a firewall for AI.

How does a firewall for AI work?

Think of a firewall for AI as a client wrapper that protects company and customer interactions with AI. Organizations need a firewall for AI that can stop data leaks without interrupting the flow of customer interactions. For instance, a high-latency solution will cause a disconnect in the customer experience.
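
Concretely, the wrapper sits between your application and the model provider, scanning the prompt on the way out and the completion on the way in. Here’s a minimal sketch of the idea; detect_sensitive() and llm_complete() are hypothetical placeholders, not real APIs:

```python
# Sketch of a firewall-for-AI client wrapper (illustrative only).

def detect_sensitive(text: str) -> bool:
    """Placeholder: a real implementation would call a scanning service."""
    return "4111111111111111" in text  # toy stand-in for a real detector

def llm_complete(prompt: str) -> str:
    """Placeholder: a real implementation would call a model provider."""
    return f"echo: {prompt}"

class SensitiveDataError(Exception):
    pass

def guarded_complete(prompt: str) -> str:
    # Scan the outbound prompt before it ever reaches the provider.
    if detect_sensitive(prompt):
        raise SensitiveDataError("sensitive data found in prompt")

    response = llm_complete(prompt)

    # Scan the inbound completion before it reaches the user.
    if detect_sensitive(response):
        raise SensitiveDataError("sensitive data found in response")

    return response
```

In production, a wrapper would more often redact or mask findings in place rather than reject the request outright, so the conversation keeps flowing.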

In order to ensure a truly seamless customer experience, firewalls for AI need a P99 latency below 100ms as well as a 99.9% request success rate.

Some less advanced solutions may offer a “fail fast” feature for when they can’t process data as quickly as they need to. While this type of feature may seem appealing at first glance, it often indicates that a solution isn’t scalable, and it may lead to missed detections and lower recall rates down the line.
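
Under the hood, “fail fast” typically means a scan deadline with a fail-open default: if the scanner misses its budget, the request goes through unscanned. A minimal sketch of that pattern, again assuming a hypothetical detect_sensitive() scanner:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as ScanTimeout

executor = ThreadPoolExecutor(max_workers=4)

def detect_sensitive(text: str) -> bool:
    """Placeholder: a real implementation would call a scanning service."""
    return False

def scan_or_fail_open(text: str, timeout_s: float = 0.1) -> bool:
    # If the scan doesn't finish within the budget, let the request
    # through unscanned. Latency stays low, but every timeout is a
    # potential missed detection, which erodes recall under load.
    future = executor.submit(detect_sensitive, text)
    try:
        return future.result(timeout=timeout_s)
    except ScanTimeout:
        return False  # fail open: treated as "nothing sensitive found"
```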

What should I look for in a firewall for AI?

As organizations build their own AI apps and curate their own data, it’s important to have a firewall for AI in place that offers top-notch sensitive data protection at scale.

We recommend using the following benchmarks to identify enterprise-grade AI security solutions:

  • ≥95% precision/recall
  • ≥99.9% request success rate
  • ≥1k RPS peak throughput
  • ≤100ms P99 latency for 4 or more detectors

Nightfall not only meets the above standards, but is also unmatched in recall, precision, and reliability for sensitive data protection during the use of AI models.

Give Nightfall a try today by signing up for a free trial.
