Everyone’s racing to build copilots right now. But making an agentic AI that feels like a trusted teammate—one that understands context, acts safely, and simplifies complex workflows—is harder than it looks.
While building Nyx, our agentic AI copilot for security teams, we spent a lot of time thinking about how to make her an effective team member: skilled and trustworthy. These are the 8 principles that guided us, and we’re sharing them here in the hope they spark ideas and help you build better copilots too.
MCP Unlocks Magic
The emerging Model Context Protocol (MCP) is a game changer. What would you do if you could have a conversation with your APIs and data? This is powerful stuff!
Nyx can dynamically query application data and return structured, actionable results—no hardcoded workflows required.
For example: “Show me all events where users shared intellectual property externally.”
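To make that concrete: a query like this maps onto an MCP tool the model can call. Below is a minimal sketch, assuming the official MCP Python SDK; the `search_events` tool, its parameters, and the stub data are hypothetical stand-ins for your own API.

```python
# A minimal MCP server sketch (assumes the official `mcp` Python SDK).
# The tool name, parameters, and stub data are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("security-events")

@mcp.tool()
def search_events(data_type: str, shared_externally: bool = True) -> list[dict]:
    """Search detection events, e.g. intellectual property shared externally."""
    # Placeholder for a real call into your event store.
    return [
        {"user": "brian@example.com", "domain": "drive.google.com",
         "data_type": data_type, "shared_externally": shared_externally},
    ]

if __name__ == "__main__":
    mcp.run()  # serve the tool over stdio so an MCP client can connect
```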
Takeaway: Think carefully about what your AI needs to know and what actions it can safely take on behalf of the user.
Only Feed It What Matters
We initially considered exposing almost every non-sensitive field from our backend API: multiple timestamps, UUIDs, machine IDs, raw risk scores. The result? Slow, noisy, cluttered conversations.
We refined it to just the essentials (sketched in code after this list):
- High-signal fields: policy name, user info, domain, data types found, filenames exfiltrated, risk level (not raw score).
- Dropped noise: redundant timestamps, duplicate status flags, non-human-readable IDs like UUIDs.
- Simplified details: risk scores converted to clear levels (low/medium/high).
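Here’s roughly what that shaping step can look like. This is a hedged sketch: the raw field names and score thresholds below are illustrative, not Nightfall’s actual schema.

```python
# Hypothetical example: reduce a raw backend event to the high-signal
# fields the copilot actually needs. Field names and thresholds are
# illustrative, not Nightfall's real schema.

def risk_level(score: float) -> str:
    """Collapse a raw 0-1 risk score into a human-readable level."""
    if score >= 0.8:
        return "high"
    if score >= 0.5:
        return "medium"
    return "low"

def shape_event(raw: dict) -> dict:
    """Keep only decision-relevant fields; drop UUIDs, duplicate
    timestamps, and other noise before it reaches the LLM."""
    return {
        "policy": raw["policy_name"],
        "user": raw["user_email"],
        "domain": raw["destination_domain"],
        "data_types": raw["detected_data_types"],
        "files": raw["exfiltrated_filenames"],
        "risk": risk_level(raw["risk_score"]),
    }
```

Converting scores to levels server-side also keeps the model from inventing its own thresholds.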
Takeaway: Only give your copilot the data needed for decision-making. The less noise, the faster and clearer the experience.
Trust Starts with Privacy by Design
When an agentic AI interacts with sensitive environments, trust is non-negotiable. For Nyx:
- All Nightfall detection models are hosted and operated entirely within our secure AWS environment.
- Customer data is never used to train, label, or evaluate LLMs.
- Nyx only surfaces metadata and counts, never sensitive payloads.
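One way to honor that last rule is to make the redaction structural rather than prompt-based: the layer that builds the LLM’s context never sees a payload. A minimal sketch, with hypothetical field names:

```python
# Hypothetical guardrail: the context handed to the LLM carries only
# metadata and counts. The sensitive payload never leaves this function.
def to_llm_context(finding: dict) -> dict:
    matches = finding.get("matches", [])
    return {
        "detector": finding["detector_name"],  # e.g. "credit card number"
        "location": finding["file_path"],
        "match_count": len(matches),           # counts, never contents
        # Deliberately no "matches" key: raw payloads stay behind.
    }
```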
Takeaway: Even outside security, you need similar guardrails. Build trust into your design, or users won’t delegate real actions to your AI.
Define Personality: Your Ideal Teammate
A copilot isn’t just software—it’s a coworker. We wrote a personality spec for Nyx before any prompt engineering:
- Proactive but not overwhelming
- Concise and action-oriented
- Always confirm destructive actions (e.g., bulk actions, policy changes)
- Clear, numbered menus for easy reference
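That spec translated almost directly into a system prompt. A condensed, illustrative version (not Nyx’s actual prompt):

```python
# Illustrative system prompt distilled from the personality spec above;
# not Nyx's actual prompt.
SYSTEM_PROMPT = """\
You are Nyx, a security copilot.
- Be proactive, but never overwhelming: surface at most one suggestion at a time.
- Be concise and action-oriented; lead with what the user should do.
- Before any destructive action (bulk changes, policy edits), restate it
  and ask for explicit confirmation.
- When offering choices, present a short numbered menu so the user can
  reply with a number.
"""
```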
Takeaway: Define how your AI should feel to work with. It shapes everything from prompts to UI.
Format Like a Human, Not a Bot
Early on, Nyx’s responses were walls of text that scrolled off the screen. Unusable. We learned fast that readability wasn’t optional:
- Short, scannable answers
- Bullet points and menus for structure
- Optimized formatting to fit more meaning in less space
- Numbers and counts so users can orient quickly
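One way to bake these in is as explicit rules appended to the system prompt, rather than hoping the model infers them. A hypothetical excerpt:

```python
# Hypothetical formatting rules appended to the system prompt.
FORMATTING_RULES = """\
- Keep answers under ~10 lines; link out instead of dumping data.
- Use bullet points and numbered menus, not paragraphs of prose.
- Include concrete counts ("13 events", "3 users") so readers can
  orient at a glance.
"""
```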
Takeaway: Good formatting builds trust. If users can’t scan it, they won’t use it.
Connect the Conversation to What’s on Screen
If you already have a UI, your copilot should bridge the gap between chat and the interface. Users shouldn’t have to mentally map a conversation back to lists or dashboards.
For example:
- After Nyx summarizes exfiltration event patterns, she offers direct navigation links to those events, labeled by pattern and count.
- “View Brian’s Events (13)” is far more helpful than a generic “View user events.”
- Action buttons mirror the data currently on screen, making it easy to take the next step without searching.
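These links are cheap to generate because the copilot already holds the structured results. A hypothetical helper (the URL scheme and fields are illustrative):

```python
from urllib.parse import urlencode

# Hypothetical helper: turn structured results the copilot already holds
# into a deep link labeled with the user and event count.
def event_link(user: str, event_ids: list[str]) -> dict:
    query = urlencode({"user": user, "ids": ",".join(event_ids)})
    return {
        "label": f"View {user}'s Events ({len(event_ids)})",
        "url": f"/events?{query}",
    }
```

Calling `event_link("Brian", ids)` with 13 IDs produces exactly the “View Brian’s Events (13)” button from the example above.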
Takeaway: Tie the conversation to visible context. It reduces friction and builds confidence in the AI’s understanding.
Lean on the LLM, Keep the UI Flexible
Don’t hardcode workflows. This isn’t a 2000s chatbot—you’ll never predict every question users will ask or every action they’ll want to take.
- Let the LLM handle natural questions directly. Users shouldn’t have to memorize complex search operators or command syntax.
- Dynamic buttons, labeled with live context (e.g., “View API Key Leaks (7)”), let users act immediately.
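One pattern that keeps this flexible is a response envelope: the LLM writes the free-form text, and the application attaches structured actions built from the same data. A hypothetical sketch:

```python
from dataclasses import dataclass, field

# Hypothetical response envelope: free-form LLM text plus structured
# actions the UI renders as buttons with live, contextual labels.
@dataclass
class CopilotResponse:
    text: str                                 # the LLM's natural-language answer
    actions: list[dict] = field(default_factory=list)

def respond(llm_text: str, findings: list[dict]) -> CopilotResponse:
    actions = [
        {"label": f"View {f['type']} ({f['count']})", "target": f["url"]}
        for f in findings
    ]
    return CopilotResponse(text=llm_text, actions=actions)
```

The UI stays flexible because the buttons are data, not hardcoded flows.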
Takeaway: Free-form conversation + structured controls = a copilot that adapts to users without locking them into rigid flows.
Always Add Value, Keep the Conversation Moving
The worst thing a copilot can do? Answer and stop. Nyx always suggests next steps:
“Would you like to ignore these events, block uploads to this site, and/or notify the user?”
And she always ends with: “What would you like to do next?”
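This can even be enforced mechanically instead of left to the prompt. A hypothetical post-processing step that appends a numbered next-step menu:

```python
# Hypothetical guard: make "always suggest next steps" a code-level
# invariant instead of trusting the model to remember.
CLOSING_PROMPT = "What would you like to do next?"

def finalize(reply: str, suggestions: list[str]) -> str:
    lines = [reply, ""]
    lines += [f"{i}. {s}" for i, s in enumerate(suggestions, start=1)]
    lines += ["", CLOSING_PROMPT]
    return "\n".join(lines)
```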
Takeaway: A copilot should drive outcomes, not just provide information.
Final Thoughts
Nyx is still evolving, and every week we learn more about what makes a great security copilot. The biggest takeaway? It’s not just about plugging an LLM into your data. You need to think about:
- Feeding it only the data that matters
- Building trust and privacy by design
- Defining a clear personality and communication style
- Connecting tightly to UI and actions
- Guiding users safely to resolution
Whether you’re building for security, sales, or design teams, get these right and you’ll deliver a copilot that feels like a trusted teammate, not just another AI feature.
Contact us to get a personalized demo of Nyx.