
Building your own AI app? Here are 3 risks you need to know about—and how to mitigate them.

by Brian Hutchins, May 14, 2024

After the debut of ChatGPT and the ensuing surge in AI's popularity, many organizations are leveraging large language models (LLMs) to develop new AI-powered apps. Amidst this exciting wave of innovation, it's essential for security teams, product managers, and developers to ensure that sensitive data doesn't make its way into these apps during the model-building phase.

What are the risks of AI model building?

OWASP has identified sensitive data exposure as one of the most prominent risks for building AI apps. If sensitive data is inadvertently learned by the AI model, then the host organization might face one of the following issues:

  • Legal issues and fines due to noncompliance with privacy standards like GDPR and CCPA
  • Legal issues associated with IP theft or malicious actions
  • Reputational damage and, ultimately, the erosion of customer trust

To avoid these issues, organizations need to evaluate their data at every stage of the model-building pipeline. For starters, organizations can do this by scrubbing sensitive data (such as PHI, IP, or API keys) when gathering and using data for the following (see the sketch after this list):

  • Training
  • Fine-tuning
  • RAG (retrieval-augmented generation)
  • Annotation
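
As a rough illustration of what scrubbing can look like at its simplest, the sketch below redacts email addresses and AWS-style access keys from a JSONL corpus before it feeds a training, fine-tuning, or RAG pipeline. The file names, record layout, and patterns are hypothetical, and regex-only detection carries the precision problems discussed later in this post, so treat this as a starting point rather than a production detector.

```python
import json
import re

# Hypothetical patterns for illustration only; real detection needs far more
# than regexes (see the precision discussion later in this post).
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
AWS_KEY = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scrub(text: str) -> str:
    """Replace likely sensitive values with placeholder tokens."""
    text = EMAIL.sub("[REDACTED_EMAIL]", text)
    text = AWS_KEY.sub("[REDACTED_API_KEY]", text)
    return text

# Scrub a JSONL corpus before it is used for training, fine-tuning, or RAG.
with open("raw_corpus.jsonl") as src, open("clean_corpus.jsonl", "w") as dst:
    for line in src:
        record = json.loads(line)
        record["text"] = scrub(record["text"])
        dst.write(json.dumps(record) + "\n")
```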

What’s so difficult about detecting sensitive data?

At the enterprise scale, it can be difficult to scan AI inputs, training data, and outputs without generating a huge volume of false positive alerts. With this in mind, it's important to deploy a scanning tool with high recall and precision. Think of these two metrics in the context of a "needle in a haystack" problem:

  • Recall: “There are X number of needles in the haystack. How many of them did you find?”
  • Precision: “How many of the detected needles are actually needles?”

With a data scanning tool that has high recall and precision, security teams can discover granular instances of sensitive data and remediate them more quickly, since they're less inundated with false positive alerts. On the flip side, low recall and precision lead to confusion, frustration, and slow time to remediation—leaving organizations vulnerable to data leaks and breaches.
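
To make these two metrics concrete, here is a minimal sketch that scores a detector's findings against hand-labeled ground truth. The character-offset spans and exact-match comparison are simplifying assumptions; real evaluations usually allow partial overlaps.

```python
def precision_recall(predicted: set[tuple[int, int]],
                     actual: set[tuple[int, int]]) -> tuple[float, float]:
    """Score detected spans (needles found) against labeled spans (real needles)."""
    true_positives = len(predicted & actual)          # needles we actually found
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(actual) if actual else 0.0
    return precision, recall

# Toy example: the detector flags 4 spans, but only 3 are real needles,
# and it misses 1 of the 4 labeled needles entirely.
detected = {(0, 11), (40, 52), (80, 99), (120, 130)}
labeled  = {(0, 11), (40, 52), (80, 99), (200, 216)}
p, r = precision_recall(detected, labeled)
print(f"precision={p:.0%} recall={r:.0%}")  # precision=75% recall=75%
```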

Organizations might first turn to regexes and open-source models to build their own data-scanning solutions. However, these types of solutions have notoriously low precision, at just 6%-30%. Naturally, they might move to LLMs as a next step. However, while great at text generation, LLMs aren’t so great with named entity recognition (NER), which is a crucial aspect of identifying sensitive data.
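
As a quick, hypothetical illustration of why regex-only detection struggles with precision: a naive credit-card pattern matches any run of 16 digits, so order IDs and invoice references become false positives alongside real card numbers.

```python
import re

# Naive "credit card" pattern: 16 digits, optionally grouped by spaces or dashes.
CARD = re.compile(r"\b(?:\d[ -]?){15}\d\b")

samples = [
    "Card on file: 4111 1111 1111 1111",             # real-looking card number
    "Order ID 8273 4491 0038 2214 shipped today",    # order identifier (false positive)
    "Invoice ref 1234-5678-9012-3456 attached",      # invoice reference (false positive)
]

for text in samples:
    if CARD.search(text):
        print("flagged:", text)

# All three lines are flagged, but only the first is an actual card number:
# 1 true positive out of 3 detections, or roughly 33% precision.
```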

In short, organizations need a way to achieve accurate entity recognition while also cutting down on false alarms. The solution? Build a firewall for AI.

How does a firewall for AI work?

Think of a firewall for AI as a client wrapper that protects company and customer interactions with AI. Organizations need a solution that can stop data leaks without interrupting the data science workflow and model updates. Even though data processing happens offline and doesn’t directly impact employee or end-user experience, it’s still important to have a system in place that can handle large datasets and ensure that model updates happen in a timely manner.
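
In practice, the wrapper pattern looks something like the sketch below, which assumes the openai>=1.0 Python SDK and a hypothetical `scan_and_redact` function standing in for whatever detection service the firewall calls. Scrubbing both the outbound prompt and the inbound response is one possible policy, not a prescribed design.

```python
from openai import OpenAI  # assumes the openai>=1.0 Python SDK

client = OpenAI()

def scan_and_redact(text: str) -> str:
    """Hypothetical hook into your detection service; returns text with
    sensitive findings replaced by placeholder tokens."""
    raise NotImplementedError("wire this up to your firewall-for-AI service")

def guarded_chat(prompt: str) -> str:
    # Pre-flight: scrub the outbound prompt before it leaves your network.
    clean_prompt = scan_and_redact(prompt)

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": clean_prompt}],
    )
    answer = response.choices[0].message.content

    # Post-flight: scrub the model's response before it reaches users or logs.
    return scan_and_redact(answer)
```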

To ensure a truly seamless user experience, firewalls for AI need a P99 latency under 100ms as well as a 99.9% request success rate.

Some solutions offer a “fail fast” feature to eliminate data flow bottlenecks. While this feature may seem appealing at first glance, it frequently indicates that a solution is immature and not able to perform at scale, leading to missed detections and lower recall rates.

What should I look for in a firewall for AI?

As organizations build their own AI apps and curate their own data, it's important to have a firewall for AI in place. We recommend using the following benchmarks to identify enterprise-grade solutions (a quick way to sanity-check them yourself is sketched after the list):

  • ≥95% precision/recall
  • ≥99.9% request success rate
  • ≥1k RPS peak throughput
  • ≤100ms P99 latency, ideally with 4 or more detectors running
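
One way to sanity-check a vendor against these numbers is to replay representative payloads and measure latency and success rate yourself. The sketch below is sequential (it doesn't exercise 1k RPS throughput) and assumes a hypothetical `/scan` endpoint plus the third-party `requests` library.

```python
import time
import requests  # third-party; pip install requests

SCAN_URL = "https://firewall.example.com/scan"  # hypothetical endpoint
payloads = [{"text": f"sample payload {i}"} for i in range(1000)]

latencies_ms, successes = [], 0
for body in payloads:
    start = time.perf_counter()
    try:
        resp = requests.post(SCAN_URL, json=body, timeout=1.0)
        if resp.ok:
            successes += 1
    except requests.RequestException:
        pass
    latencies_ms.append((time.perf_counter() - start) * 1000)

latencies_ms.sort()
p99 = latencies_ms[int(len(latencies_ms) * 0.99) - 1]  # approximate 99th percentile
print(f"P99 latency: {p99:.1f} ms")
print(f"Success rate: {successes / len(payloads):.2%}")
```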

Nightfall not only meets the above standards, but is also unmatched in terms of its recall, precision, and reliability for sensitive data protection during AI model building.

Give Nightfall a try today by signing up for a free trial.
