
GenAI is Everywhere. Now is the Time to Build a Strong Culture of Security.

by Isaac Madan, August 4, 2023

Updated: 5/21/24

Since Nightfall’s inception in 2018, we’ve made it our mission to equip companies with the tools that they need to encourage safe employee innovation. Today, we’re happy to announce that we’ve expanded Nightfall’s capabilities to protect sensitive data across generative AI (GenAI) tools and the cloud. Our latest product suite, Nightfall for GenAI, consists of three products: Nightfall for ChatGPT, Nightfall for Sensitive Data Protection, and Nightfall’s Firewall for AI. Read on to discover why we invested our time in creating these cutting-edge products.

What’s the impact of GenAI?

It’s a colossal understatement to say that GenAI has changed the way we work. From writing customer service emails to debugging code, GenAI has become an essential tool across enterprise companies in virtually every field. Popular tools like ChatGPT and GitHub Copilot are just two of the ways in which employees are leveraging GenAI to improve their productivity, boost their creativity, and enhance their content—and that’s just scratching the surface.

Experts warn that if companies refuse to adapt to GenAI, they’ll not only forego these compelling benefits, but also dull their competitive edge. Even so, a number of leading tech and finance companies like Apple, Amazon, JP Morgan Chase, and Citigroup have been forced to limit or outright ban employee use of GenAI tools due to looming data privacy concerns. As security leaders see it, there are three main threat vectors to consider when it comes to GenAI. Let’s take a closer look at each of them.

Direct user input

Imagine this: A software engineer asks ChatGPT to generate new code. While this is innocuous enough, it might become a problem down the line if that engineer includes source code in their prompt. The risk of submitting this prompt is twofold: first, the prompt might be used to train OpenAI’s public model, and second, the source code might later be accessed by threat actors via a reconstruction attack.
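To make the risk concrete, here’s a minimal sketch of a pre-submit check that scans a prompt for common secret patterns before it ever reaches a GenAI service. The regexes and the redact_prompt helper are illustrative assumptions for this example, not Nightfall’s detection logic, which relies on far more sophisticated AI-based detectors.

    import re

    # Illustrative patterns only -- a production detection engine uses
    # trained detectors, not a handful of regexes like these.
    SECRET_PATTERNS = {
        "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
        "api_key_assignment": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"]?[\w\-]{20,}"),
        "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    }

    def redact_prompt(prompt: str) -> tuple[str, list[str]]:
        """Replace anything matching a known secret pattern with a placeholder."""
        findings = []
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(prompt):
                findings.append(name)
                prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
        return prompt, findings

    raw = "Debug this: client = Client(api_key='sk_live_XXXXXXXXXXXXXXXXXXXXXXXX')"
    safe, hits = redact_prompt(raw)
    print(hits)  # ['api_key_assignment']
    print(safe)  # the key is replaced with [REDACTED:api_key_assignment]

In practice, a check like this would run in the browser or at the network layer, so the sensitive snippet never leaves the user’s machine.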

Third-party exposure

While direct data input might be the most immediate security concern that comes to mind, SaaS apps present a more subtle, yet arguably more pervasive threat. From Asana to Notion to Atlassian, a number of top apps have started using third-party sub-processors to offer AI-powered features. So how would a data leak scenario play out? Say a project manager asks Atlassian Intelligence to summarize a Jira ticket that contains an active API key. Theoretically, this API key could be leaked to OpenAI’s servers and result in the same outcomes as outlined above.

Developer oversight

Countless companies are likely in the process of training their own proprietary AI tools using customer data. However, if a developer doesn’t take the proper precautions to filter sensitive data out at every stage of the software development lifecycle, they might accidentally allow some to slip through into training data. This poses a direct risk to compliance with leading data protection frameworks such as PCI-DSS and HIPAA.
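As a hedged illustration of that precaution, the sketch below shows a filtering gate a training pipeline might apply before any record is written to a training set. The two detectors and the record format are assumptions made for the example; a production pipeline would lean on a dedicated detection engine and quarantine flagged records for audit rather than silently dropping them.

    import re
    from typing import Iterable, Iterator

    # Hypothetical detectors for two regulated data types (card numbers
    # fall under PCI-DSS; US Social Security numbers are a common PII case).
    CARD_NUMBER = re.compile(r"\b(?:\d[ -]?){13,16}\b")
    US_SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

    def looks_sensitive(text: str) -> bool:
        return bool(CARD_NUMBER.search(text) or US_SSN.search(text))

    def filter_training_records(records: Iterable[str]) -> Iterator[str]:
        """Yield only records that pass the sensitive-data gate."""
        for record in records:
            if looks_sensitive(record):
                continue  # in production: quarantine for review, don't just skip
            yield record

    raw_records = [
        "Customer asked about upgrading their plan.",
        "Card on file: 4111 1111 1111 1111",  # must never reach training data
    ]
    print(list(filter_training_records(raw_records)))
    # ['Customer asked about upgrading their plan.']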

What’s the impact of Nightfall for GenAI?

We developed Nightfall for GenAI to help companies unlock the benefits of GenAI while addressing these threats head-on. Whether our customers are prompting ChatGPT, leveraging SaaS apps’ AI capabilities, or building their own custom tools, they’ll be able to innovate with confidence knowing that their data is secure.

However, at Nightfall we believe that no DLP strategy is complete without actionable steps for building a strong culture of cybersecurity. With this in mind, we’d like to highlight three key ways that Nightfall for GenAI puts this philosophy into practice.

Increasing visibility into the cloud

Traditional DLP methods like network DLP and endpoint DLP present a number of challenges, especially in regard to visibility. These methods cover only in-network traffic or on-premises devices, respectively, which leaves blind spots around GenAI tools and cloud apps. When security teams don’t have the visibility to protect certain apps, they’re often forced to take broad-stroke actions like banning the use of a platform or rolling out a proxy. These actions are not only labor-intensive, but also inherently pit security teams and employees against each other.

In short? Having limited visibility is a slippery slope. But with Nightfall, users are able to see into the cloud via our seamless API integrations and browser plugin. The resulting visibility gives Nightfall users the flexibility to take a more nimble approach to DLP, such as remediating precise instances of sensitive data leakage in the cloud without blocking app functionality or affecting employee performance. Nightfall’s intuitive console also allows security teams to gain insights into the most common types of violations, which can inform employee education moving forward.

Creating frictionless employee experiences with self-remediation

Nightfall believes that blocking employees is one of the fastest ways to impede the growth of a strong security culture. If companies choose to block ChatGPT or other useful cloud-based tools, employees might start to resent how security policies impact their workflow, and simply look for Shadow IT workarounds instead. These workarounds aren’t just unsafe—they also lead to employees being less invested in their company’s security culture.

In line with this, one of Nightfall’s driving goals is to impact employee workflows as little as possible, while still educating them about security best practices. This philosophy manifests in one of our latest features: Employee self-remediation. While using Nightfall for ChatGPT, employees can choose to self-remediate sensitive data out of prompts before they click submit. As a result? Employees aren’t blocked from using ChatGPT, and can instead feel empowered to take their company’s security into their own hands.
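To illustrate the pattern (as a sketch of the general flow, not of how Nightfall’s browser plugin is implemented), the snippet below reuses the illustrative redact_prompt helper from the earlier example: scan the prompt, surface the findings, and let the employee decide before anything is submitted.

    from typing import Optional

    def self_remediate(prompt: str) -> Optional[str]:
        """Return a submit-safe prompt, or None if the employee cancels.

        Assumes the illustrative redact_prompt helper defined earlier.
        """
        safe_prompt, findings = redact_prompt(prompt)
        if not findings:
            return prompt  # nothing detected; submit as-is
        print(f"Heads up: this prompt appears to contain {', '.join(findings)}.")
        choice = input("Redact and submit (r), or cancel (c)? ").strip().lower()
        return safe_prompt if choice == "r" else None  # cancel: nothing is sent

The key design choice is that the employee stays in control: the tool informs and redacts, but never hard-blocks the workflow.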

Streamlining security team workflows

An added bonus of employee self-remediation? There’s one less task for security teams to worry about. This is just one example of how Nightfall allows security teams to enhance operations while lowering their total cost of ownership. Security teams who use Nightfall for Sensitive Data Protection and Nightfall’s Firewall for AI can also benefit from context-rich alerts and instant remediation actions, all of which are accessible in the intuitive Nightfall console. By consolidating DLP to a single pane of glass, Nightfall frees up security teams so that they can shift their focus to educating employees and strategizing new ways to keep company data safe.

At Nightfall, we like to think of security as a team sport. Our overarching aim is to help security teams to protect company data as efficiently as possible, while also equipping employees with the tools they need to make informed security decisions.

What’s next for Nightfall?

As an AI company ourselves, the Nightfall team has worked tirelessly to develop the AI models that power our best-in-class detection and compliance engines. Moving forward, our vision is to scale our handful of integrations to eventually cover hundreds, if not thousands, of cloud-based apps. To that end, we’re excited to continue building out our API-driven offerings so that we can deliver an efficient DLP solution to any user, no matter where they are in the cloud.
