
Get to know the 2025 DataTribe Challenge Finalists: Q&A with Starseer

About Starseer

Starseer delivers AI Assurance and Exposure Management for security teams through an organizational AI Census, providing unmatched visibility into internal and external AI usage to distinguish approved systems from shadow AI. The platform identifies complex AI risks, including backdoors, jailbreak vulnerabilities, misaligned agents, bias, and drift, and translates those findings into clear, enterprise-ready actions, empowering organizations to deploy and govern AI with confidence at scale.

Q: Tell us about your background.

I’ve always loved computers. I grew up on IRC, chatting and exploring how systems work at a fundamental level. That curiosity led me through academic research and eventually into some unique professional environments.


I worked at Sandia National Labs and MITRE, which introduced me to the process of taking academic research and turning it into real-world operations. After a while, I became curious about the next step: what does commercialization actually look like? That led me to SCYTHE, a breach and attack simulation startup, where we focused on purple teaming and making advanced security concepts more automated and approachable for organizations.

Before founding a company, I wanted to experience cybersecurity at scale, so I joined Verizon to understand how large enterprises secure massive numbers of assets. That’s a monumental task in itself.

My experience is unique because I’ve seen the entire technology lifecycle, from cutting-edge research to how technology is procured, deployed, and used every day to solve real problems. I enjoy taking something academic or research-driven and finding ways to bring it to organizations in practical, sometimes unexpected ways.

Carl, my co-founder, has a similar background. He grew up breaking things and continued that passion throughout his professional career. He also worked in the National Laboratory system at Idaho National Labs, where he focused on reverse engineering unfamiliar hardware and software systems.


After his time at Idaho National Laboratory, Carl joined Cisco Talos, where he spent six and a half years doing public zero-day research. He worked across all types of systems and amassed nearly 150 publicly disclosed CVEs. While identifying vulnerabilities, he also helped build defenses, providing Cisco customers with protection against zero-days even before public disclosure.


His reverse engineering and vulnerability discovery experience directly informed our technical approach, which is built on the belief that the best defenses are informed by strong offensive capabilities.

Q: Tell us about your business or idea.

The idea came from trying to purchase a solution that didn’t exist. While working on AI security challenges, I saw the need for better telemetry and logging around AI platforms, especially when things go wrong. My first thought was: who’s building this, and how can we get involved early?


There wasn’t anyone at the time. I’d wanted to start a company, so this was a great opportunity, and it’s a fun, complex problem. There’s a huge opportunity to raise the floor for AI security across the board because so many businesses are already adopting this technology.

But it’s not just about organizations. As AI models become more personalized and tailored to individuals, we need a deeper understanding of how to trust them. With people, we rely on body language, cues, and trust signals. If I make a commitment, you can tell whether I’m following through. We don’t have that with AI yet, and building that trust layer is critical for both personal and business applications. That’s what our business idea centers on: enabling trust and visibility in AI systems.

Q: What was the original inspiration for your company or product?

Starseer emerged from my time leading the AI Red Team at Verizon. I noticed a significant disconnect between two groups. Security teams assumed they had all the telemetry and detections they needed for traditional cybersecurity, while data science and AI/ML teams lacked frameworks for understanding how adversaries might target or tamper with their models. I often found myself bridging that gap.


The more time I spent in the AI space, the more I realized people had accepted this “black box” mindset. But in cybersecurity, especially in vulnerability research and reverse engineering, dealing with unknown systems is standard practice. I thought: these are exactly the skills we need to extract more insight from AI models. That’s when I reached out to Carl for his perspective. Fortunately, he agreed and became my co-founder and Starseer’s CTO.


The AI community isn’t generally familiar with reverse engineers or that ecosystem. I think this reflects a broader divide between East Coast and West Coast security cultures, and Starseer is about building that bridge.


As we continued talking to more organizations, we saw an even bigger gap between what companies actually understand about AI and what they see in the news. Trust in systems like ChatGPT, standalone models, or agents needs to be part of a broader ecosystem that connects back to familiar grounding points. That’s what Starseer delivers: a way to manage, enforce, monitor, and inspect AI systems so organizations can move from experimentation to deployment with confidence.

Q: What’s your vision for the future? What will the market you are pursuing look like in 5 to 10 years?

I think we’re moving toward increasingly personalized AI tuned to individual needs and daily tasks. We’re already seeing glimpses of this: OpenAI’s recent Pulse feature, for example, proactively gathers information based on your chat history to help you start your day. As AI-enabled devices become more common, we’re heading toward a world where each person will have hyper-personalized models running on their phones, watches, glasses, or other wearables. That raises an important question: how do you know your model hasn’t been tampered with?


Most people think about this only in terms of adversarial threats, and that’s part of it, but there are also everyday trust issues. For example, when reading reviews before buying something, how do you know a recommendation is genuinely in your best interest and not influenced by a hidden commercial relationship?


As businesses give AI agents increasing autonomy, ensuring those systems are auditable and trustworthy becomes essential. This matters not only for large enterprises but also for individuals using AI in daily life.


We’re focused on taking academic-level AI inspection research and turning it into practical tools that work in production environments. We look beyond inputs and outputs to assess whether a model’s behavior aligns with user expectations and organizational intent.

Q: How does your business address pressing cyber and data challenges for the commercial sector?

Every organization today is trying to figure out how to use AI; even the most technologically advanced teams are still experimenting. Each pilot project introduces new cyber and data challenges. From training data to fine-tuning models for specific business needs, it’s a massive data problem.


We’re seeing organizations spin up infrastructure, deploy agents, and test models, often without consistent oversight. Security teams, in many cases, have been left out of these conversations and no longer hold the same influence they once did. That’s where Starseer helps. We bring security teams back into the discussion as enablers, not blockers. The first step is cybersecurity 101: asset management. What do you have? What exists?


Even reactive cleanup efforts can make a big difference. If an employee downloaded 20 models to test and only kept one, the rest might still be sitting around as unmonitored infrastructure. Identifying and cleaning up that footprint reduces risk.


Our AI Census feature helps organizations discover what’s in use, identify shadow AI, and guide users toward approved tools. Maybe your organization has approved ChatGPT, Claude, or Gemini. You want to funnel users there instead of blocking everything.
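
As a rough illustration of what a discovery pass like this can involve (a minimal, hypothetical sketch, not Starseer’s AI Census), one might compare observed AI-service traffic against an approved allowlist and look for model weight files sitting on disk. The domain lists, log format, and file extensions below are assumptions.

```python
# Hypothetical shadow-AI discovery sketch: flag outbound requests to AI services
# that are not on an approved allowlist, and list local model artifacts on disk.
from pathlib import Path

APPROVED_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}      # assumed policy
KNOWN_AI_DOMAINS = APPROVED_AI_DOMAINS | {"api.mistral.ai", "api.together.xyz"}  # assumed catalog
MODEL_EXTENSIONS = {".safetensors", ".gguf", ".onnx", ".pt"}                     # common weight formats

def flag_shadow_ai(proxy_log_lines):
    """Return AI-service domains seen in traffic that are not approved."""
    seen = {line.split()[-1] for line in proxy_log_lines if line.strip()}
    return sorted(d for d in seen if d in KNOWN_AI_DOMAINS and d not in APPROVED_AI_DOMAINS)

def find_local_models(root):
    """List model weight files found on disk under `root`."""
    return [p for p in Path(root).rglob("*") if p.suffix in MODEL_EXTENSIONS]

if __name__ == "__main__":
    sample_log = ["2025-01-10 user1 api.together.xyz", "2025-01-10 user2 claude.ai"]
    print("Unapproved AI services:", flag_shadow_ai(sample_log))
    print("Local model artifacts:", find_local_models("."))
```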


The next step is enforcement: blocking unapproved services or unauthorized LLM APIs, but only after establishing approved pathways so users have legitimate alternatives.


Finally, there’s assurance: validating that AI systems are production-ready. As organizations move from experimentation to operations, especially with on-premises deployments, they need to know what’s going live. That means scanning for supply chain risks, validating change control to detect tampering, and performing behavioral analysis to ensure AI systems align with business goals, like making sure a customer-facing chatbot doesn’t recommend competitors.
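
To make the change-control idea concrete, here is a minimal sketch (again hypothetical, not Starseer’s assurance pipeline): hashes of approved model artifacts are recorded once, then re-verified before deployment, so any unexpected modification of the weights shows up as a mismatch. The paths and file layout are assumptions.

```python
# Hypothetical change-control check: snapshot SHA-256 hashes of approved model
# files, then re-verify them later so tampering surfaces as a hash mismatch.
import hashlib
import json
from pathlib import Path

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large weight files never load fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_manifest(model_dir, manifest_path):
    """Snapshot hashes for every file in the approved model directory."""
    manifest = {str(p): sha256_of(p) for p in Path(model_dir).rglob("*") if p.is_file()}
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

def verify_manifest(manifest_path):
    """Return files whose current hash no longer matches the approved snapshot."""
    manifest = json.loads(Path(manifest_path).read_text())
    return [p for p, h in manifest.items() if not Path(p).is_file() or sha256_of(p) != h]

if __name__ == "__main__":
    record_manifest("models/approved", "model_manifest.json")  # run at approval time
    print("Tampered or missing files:", verify_manifest("model_manifest.json"))
```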


It’s a maturity journey: catalog assets, enforce policies, and drive assurance for key use cases. Businesses will adopt AI regardless; our mission is to help security teams manage that adoption pragmatically while reducing risk.

Q: What attracted you to the DataTribe Foundry? Why did you choose to participate in the DataTribe Challenge?

DataTribe has a strong track record of partnering with innovative cybersecurity companies, and we’re honored to be part of that legacy. Seeing the caliber of companies that have gone through the program and being able to call them peers is something we’re proud of.

The selection process is competitive and rigorous, which is exactly why we wanted to be involved. We’re taking a differentiated approach to AI security, and that requires a special type of partnership. We’ve been fortunate with our investors and advisors so far, and we see DataTribe as an ideal partner to help us refine our approach and showcase what makes us unique.

Q: What’s your long-term vision for your business?

As AI adoption accelerates, organizations need a unified platform for AI security, and that need will only grow. Think about it like websites: today, every business is expected to have one, with certain baseline security and compliance standards. AI will be the same way.


Our vision is to build a core platform that becomes an integral part of every organization’s AI adoption strategy. While there will be industry-specific variations, we aim to provide the foundational capabilities for assurance, management, and auditing.


Our focus on AI failure modes, auditing, and assurance addresses a fundamental gap that every AI-adopting organization faces.