Not Enough Hallucinations? Let’s Outfit Your LLM with Another LLM
CISO Series Podcast
Large language models present a problem. With the scale that they operate at, how can you do any kind of validation or monitoring with a human in the loop? So far most solutions have used another LLM to solve that problem. But is that a sustainable approach?
This week’s episode is hosted by David Spark, producer of CISO Series, and Edward Contreras, senior EVP and CISO, Frost Bank. Joining them is Anthony Candeias, CISO, WeightWatchers.
Listen to the full episode here.
AI agents require structured supervision, not autonomy
The cybersecurity industry's approach to agentic AI deployment should focus on systematic oversight rather than granting autonomous decision-making authority. "Start treating AI agents like junior team members. Train them, test them, watch their outputs, and never assume perfection," advised Peache George of ManTech Digital Transformation Consulting on LinkedIn. Treating AI systems like interns rather than junior employees, however, sets more appropriate expectations for output quality and the level of supervision required. Effective AI implementation requires establishing context through structured learning periods, akin to employee onboarding, in which systems learn the organizational environment before making decisions. Quality assurance must include comprehensive audit logging and verification that track not just what AI agents claim to have accomplished, but confirmation that the actions were actually completed.
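That "trust but verify" audit trail can be sketched in a few lines. This is a minimal illustration, not a real product: the agent ID, ticket store, and verification callback are all hypothetical, and a production system would verify against the actual system of record.

```python
# Hypothetical sketch: audit-log an AI agent's claimed action, then
# independently verify the result before marking it complete.
import json
import time


def audit_agent_action(agent_id, claimed_action, verify_fn, log):
    """Record what the agent says it did, then confirm it actually happened."""
    entry = {
        "ts": time.time(),
        "agent": agent_id,
        "claimed": claimed_action,
    }
    # Never trust the claim alone: run an independent check.
    entry["verified"] = bool(verify_fn(claimed_action))
    log.append(json.dumps(entry))
    return entry["verified"]


# Example: the agent claims it closed ticket 42; we check the ticket store.
tickets = {42: "closed"}
log = []
ok = audit_agent_action(
    "triage-bot",
    {"action": "close_ticket", "ticket": 42},
    lambda a: tickets.get(a["ticket"]) == "closed",
    log,
)
```

The point of the design is that the log records both the claim and an independent confirmation, so reviewers can spot agents whose claims diverge from reality.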
Hiring for potential over credentials in cybersecurity
The challenge of identifying candidates with genuine learning capabilities extends beyond technical knowledge to problem-solving approaches and adaptability. A cybersecurity subreddit discussion highlighted the frustration: "Being new isn't the problem, but there has to be a willingness to learn. What I've seen instead is people talking a big game, then barely putting in the effort while the rest of us clean up after them." Successful hiring practices focus on candidates who demonstrate active engagement with the cybersecurity community, such as reading security blogs and researching current threats, rather than those who can merely recite textbook definitions. Interview processes should incorporate constructive feedback scenarios to assess how candidates respond to criticism and whether they can read room dynamics to understand problems rather than simply showcasing credentials. The most valuable trait involves candidates who build safety nets into their proposed solutions, demonstrating awareness that initial approaches may fail and preparing alternative paths for rapid iteration and learning.
AppSec training effectiveness depends on organizational relevance
Application security training programs often fail because they rely on generic content rather than addressing organization-specific challenges and business objectives. A cybersecurity subreddit post questioned whether AppSec training is "completely useless and only there for compliance purposes," with developers noting that training often serves merely to check boxes rather than provide actionable knowledge. Effective programs focus on relevant organizational examples, such as analyzing recent audit findings or demonstrating how specific coding practices improved time-to-market metrics, rather than presenting generic NIST or OWASP guidelines. The future of developer security education lies in real-time feedback within integrated development environments, providing instant coaching on secure coding practices as developers write code, eliminating the delay between mistakes and learning opportunities. Training content should address developers' pain points, such as streamlining peer review processes, rather than adding additional compliance burdens to their workflows.
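The idea of instant, in-editor coaching can be illustrated with a toy lint rule. This is not any particular product's check; the pattern and message are illustrative only, and real tools use far more robust detection.

```python
# Toy illustration of real-time secure-coding feedback: flag a likely
# hardcoded secret the moment the line is written, rather than weeks
# later in an audit finding.
import re

# Illustrative pattern only; real secret scanners are far more thorough.
SECRET_PATTERN = re.compile(
    r"""(password|api_key|secret)\s*=\s*["'][^"']+["']""", re.IGNORECASE
)


def lint_line(line):
    """Return an instant coaching message for one line of code, or None."""
    if SECRET_PATTERN.search(line):
        return ("Possible hardcoded secret: load credentials from a vault "
                "or environment variable instead.")
    return None


hint = lint_line('api_key = "sk-live-123"')
```

The coaching message fires at the moment of the mistake, which is exactly the gap the paragraph above describes generic annual training failing to close.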
AI oversight requires purpose-built models, not general solutions
The challenge of managing AI systems effectively calls for deploying specialized oversight tools rather than stacking general-purpose large language models to monitor each other. Simon Willison, creator of Datasette, argued, "Don't use AI to solve your AI problem," pointing to Google DeepMind's CaMeL, which provides human-readable visibility into LLM-API interactions without adding further machine learning layers. AI-based oversight can still be effective, however, when it uses purpose-built models trained for specific security functions, such as SOC operations, application security, or red team activities, rather than broad general-purpose systems attempting to evaluate each other. The key lies in understanding each AI system's role within the technology stack, whether it is making determinations, conducting quality checks, or automating repetitive processes, and implementing oversight mechanisms to match. Organizations should treat large language models as sophisticated data catalogs requiring the same architectural discipline applied to traditional data warehouses: proper logging, boundaries, and human-readable insights, rather than attempting to solve complexity with additional complexity.
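"Logging and boundaries" around an LLM's tool use can be sketched without any extra machine learning. This is a hypothetical illustration, not CaMeL itself or any real framework: the tool names, call format, and allowlist are invented for the example.

```python
# Minimal sketch of deterministic oversight for an LLM's tool calls:
# every proposed call is logged in human-readable form and checked
# against an explicit allowlist before anything executes.
ALLOWED_TOOLS = {"search_logs", "open_ticket"}  # the explicit boundary


def execute_tool_call(call, audit_trail, tools):
    """Log the model's proposed call, then run it only if allowlisted."""
    audit_trail.append(f"model proposed {call['tool']}({call['args']})")
    if call["tool"] not in ALLOWED_TOOLS:
        audit_trail.append(f"BLOCKED {call['tool']}: not in allowlist")
        return None
    return tools[call["tool"]](**call["args"])


trail = []
tools = {"search_logs": lambda query: f"results for {query}"}
result = execute_tool_call(
    {"tool": "search_logs", "args": {"query": "failed logins"}}, trail, tools
)
blocked = execute_tool_call(
    {"tool": "delete_user", "args": {"user": "bob"}}, trail, tools
)
```

The oversight here is plain code: a human-readable audit trail plus a hard boundary, with no second model judging the first.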
Listen to the full episode on our blog or your favorite podcast app, where you can read the entire transcript. If you haven’t subscribed to the CISO Series Podcast via your favorite podcast app, please do so now.
Thanks to Einat Segal at iProov for contributing this week’s “What’s Worse?!” scenario.
Huge thanks to our sponsor, Vanta
Subscribe
Subscribe to CISO Series Podcast
Please subscribe via Apple Podcasts, Spotify, YouTube Music, Amazon Music, Pocket Casts, RSS, or just type "CISO Series Podcast" into your favorite podcast app.
Security You Should Know
Embracing AI-Native DLP with Orion Security
DLP can be a bit of a four-letter word in cybersecurity. False positives are a major problem with any traditional DLP solution because setting the right policy for your organization’s needs is always a moving target.
In this episode, Nitay Milner, co-founder and CEO of Orion Security, explains how they provide a “zero-policy” approach to DLP that brings in the missing piece of context to the category. Joining him are Steve Knight, former CISO at Hyundai Capital America, and Jack Kufahl, CISO at Michigan Medicine.
Listen to the full episode here.
Thanks to our sponsor, Orion Security
Subscribe
Subscribe to Security You Should Know
Please subscribe via Apple Podcasts, Spotify, Amazon Music, Pocket Casts, RSS, or just type "Security You Should Know" into your favorite podcast app.
What I love about cybersecurity…
“The lifelong chess match. Threats are evolving, which requires the defense to evolve. This is a continuous cycle with no end in sight, and it makes cybersecurity so much fun to work in. What we did 10 years ago isn't what we need to do today.” - Anthony Candeias, CISO, WeightWatchers
Listen to the full episode of “Not Enough Hallucinations? Let’s Outfit Your LLM with Another LLM”
What’s the Most Efficient Way to Rate Third Party Vendors?
"Security ratings are not gospel, but they are useful trend indicators—like your credit score. They don’t tell the whole story, but they give you a signal that’s worth investigating, which leads to more direct questions." - Steve Knight, former CISO, Hyundai Capital America
Listen to the full episode of “What’s the Most Efficient Way to Rate Third Party Vendors?”
CISO Series Newsletter - Twice every week
Cyber Security Headlines Newsletter - Every weekday
Security You Should Know Newsletter - Weekly
The End of "Seeing is Believing" - Executive Deepfake Protection with BlackCloak
Think deepfakes are a future problem? New research from Ponemon Institute reveals that 42% of executives have already been targeted by deepfake voice, video, or picture scams. In this interview with Dr. Chris Pierson, CEO of BlackCloak, we dive deep into how cybercriminals are weaponizing AI to bypass traditional security measures.
Watch the video here.
Huge thanks to our sponsor, BlackCloak
LIVE!
Cyber Security Headlines - Week in Review
Make sure you register on YouTube to join the LIVE "Week In Review" this Friday for Cyber Security Headlines with CISO Series reporter Richard Stroffolino. We do it this and every Friday at 3:30 PM ET/12:30 PM PT for a short 20-minute discussion of the week's cyber news. Our guest will be Jim Bowie, VP and CISO, Tampa General Hospital.
Thanks to our sponsor, Vanta
Super Cyber Friday!
Join us this Friday, July 11, for “Hacking the Resilience Mindset”
Join us on Friday, July 11, 2025, for Super Cyber Friday: “Hacking the Resilience Mindset.”
It all kicks off at 1 PM ET / 10 AM PT, when David Spark will be joined by Liz Morton, field CISO, Axonius, and Nick Vigier, CISO, Oscar Health, for an hour of insightful conversation and engaging games. And at 2 PM ET / 11 AM PT, stick around for our always-popular meetup. This time, it will be hosted right inside the event platform.
Remember to add it to your calendar via LinkedIn or through the Airmeet link in the invite.
Thanks to our Super Cyber Friday sponsor, Axonius
Thank you!
Thank you for supporting CISO Series and all our programming
We love all kinds of support: listening, watching, contributions, What's Worse?! scenarios, telling your friends, sharing on social media, and most of all, our sponsors!
Everything is available at cisoseries.com.
Interested in sponsorship? Contact me, David Spark.