Join us tomorrow for "Hacking Past Mistakes"
Join us Friday, January 23, 2026, for “Hacking Past Mistakes: An hour of critical thinking about what we can do better in 2026.”
It all begins TOMORROW at 1 PM ET/10 AM PT with guests Tom Hollingsworth, organizer, Tech Field Day, and Nick Espinosa, host, The Deep Dive Radio Show. We'll have a fun conversation and games, and at the end of the hour (2 PM ET/11 AM PT) we'll hold our meetup.
Defense in Depth
How Best to Prepare Your Data for Your Tools
LLMs are really good at generating output that looks acceptable. That doesn't mean it's correct. How do we determine whether our data handling within the LLM is both proper and sufficient to trust the output?
Check out this post for the discussion that is the basis of our conversation on this week’s episode, co-hosted by David Spark, the producer of CISO Series, and Geoff Belknap. Joining them is sponsored guest Matt Goodrich, director of information security, Alteryx.
Listen to the full episode here.
The integrity challenge
AI in cybersecurity isn't primarily a confidentiality problem; it's about trust and validation. Nrupak Shah of Coles pointed to the fundamental question: "What defines trust? What can be trusted and what cannot? An LLM will have to present context with the data, and the reasoning why that data was thought to be relevant to the question." Ravi Char of NXP Semiconductors emphasized that even with clean data and proper hygiene, the real work falls to humans, saying, "Bruce Schneier has pointed out that while AI poses confidentiality and classic security challenges, it is much more an integrity problem. Even with strong input, output, and model hygiene, it still takes human judgment to validate results, catch inconsistencies, and resolve the trust and integrity issues that AI cannot detect on its own." For critical outputs with downstream dependencies, he added, experienced human oversight "is the safeguard that prevents 'confidently correct but in reality incorrect' AI output from turning into undesirable outcomes. AIs don't face consequences, but we humans do."
Zero trust for AI outputs
The nature of current LLMs means validation can't be optional. Louis Zhichao Zhang of AIA Australia laid out his approach: "AI is non-deterministic, so hallucination is inevitable, even with flawless input data. My guiding principle is simple: zero trust, always validate." He described AI as "like a fresh graduate intern—quick, smart, and broadly knowledgeable, but with limited context and memory, and prone to mistakes. Treat AI as a valuable thought partner and an extra pair of hands that boosts productivity, but never the 'decision maker' in the loop." Adrienne Young of Team Carney, Inc. pointed to a practical reality that tempers AI enthusiasm, saying, "I think that it takes quite a bit of time to audit the LLM to ensure that it is in fact 'right' and it can get very murky. Sometimes it's just easier doing it myself and not having to second guess. I don't want to be 'that guy' who can't answer where the conclusion shown on a PowerPoint slide comes from."
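To make that principle concrete, here is a minimal sketch (ours, not from the episode) of what "zero trust, always validate" can look like in practice: the model's response is parsed and checked deterministically before anyone acts on it. The field names and severity scale are illustrative assumptions, not anything the guests prescribed.

```python
import json

# Illustrative schema; these field names and severity values are assumptions.
REQUIRED_FIELDS = {"finding", "severity", "evidence"}
ALLOWED_SEVERITIES = {"low", "medium", "high", "critical"}

def validate_llm_output(raw: str) -> dict:
    """Zero-trust gate: parse and check a model response before acting on it."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as err:
        raise ValueError(f"output is not valid JSON: {err}")
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object")
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    if data["severity"] not in ALLOWED_SEVERITIES:
        raise ValueError(f"unexpected severity: {data['severity']!r}")
    if not data["evidence"]:
        raise ValueError("no supporting evidence cited; the claim cannot be trusted")
    # Passing these checks earns human review, not automatic execution.
    return data
```

Note the last line: even output that clears every check goes to a reviewer, keeping the AI a thought partner rather than the decision maker in the loop.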
Guardrails over garbage
The conversation around AI data quality often fixates on input hygiene, but that misses the point. "We'll never have perfectly sanitized security data, and that's not the point," said Khash Kiani. "The real challenge is designing AI that can reason over imperfect data with the governance and guardrails enterprises actually need. That's how you earn trust. 'Garbage in' is not the main risk. 'Unguarded out' is." Abilash Prabhakaran of Grain Management echoed this focus on bounded execution: "I'm all about guarded outputs. With the goal of disproving exploitability, my AI runs freely within bounds to verify breach paths in cloud environments. Way more accurate than chasing exploit proofs."
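A hedged sketch of what "guarded outputs" can mean in code (again ours, with hypothetical action names): the model may propose any action, but only handlers on an explicit allowlist ever run, so anything "unguarded out" is refused and escalated rather than executed.

```python
# Hypothetical, read-only action the model is permitted to trigger.
def describe_security_group(group_id: str) -> str:
    return f"(stub) configuration for {group_id}"

# The guardrail: an explicit allowlist of handlers; nothing else runs.
ALLOWED_ACTIONS = {"describe_security_group": describe_security_group}

def guarded_execute(action: str, **kwargs) -> str:
    """The model proposes; the boundary disposes."""
    handler = ALLOWED_ACTIONS.get(action)
    if handler is None:
        # Off-allowlist proposals are refused and surfaced to a human,
        # never executed.
        return f"REFUSED: {action!r} is outside the approved boundary"
    return handler(**kwargs)

print(guarded_execute("describe_security_group", group_id="sg-0abc"))  # runs
print(guarded_execute("delete_security_group", group_id="sg-0abc"))    # refused
```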
It looks good...
Polished output can mask unreliable reasoning, creating a dangerous dynamic when critical thinking atrophies. This is a real concern with how people interact with AI-generated content. "Something to consider here is whether people are primed just to treat what comes out of LLMs as great because it looks polished. These kinds of tools have not done the collective 'us' any favors in terms of encouraging and refining critical thinking and deep investigative skills. So while I think there's huge potential for LLMs turning messy data into useful insights, the more important question is 'how do we know it's right?'" asked Robert Wood of Sidekick Security.
Please listen to the full episode on your favorite podcast app, or over on our blog, where you can read the full transcript. If you’re not already subscribed to the Defense in Depth podcast, please go ahead and subscribe now.
Huge thanks to our sponsor, Alteryx
Subscribe
Subscribe to Defense in Depth podcast
Please subscribe via Apple Podcasts, Spotify, YouTube Music, Amazon Music, Pocket Casts, RSS, or just type "Defense in Depth" into your favorite podcast app.
When Checklists Aren't Enough: Moving Beyond Compliance Theater
We assembled a panel of CISOs and security professionals to talk about a transformation many organizations struggle with: moving from a compliance-driven security program to a risk-based one. The panel shared what worked, what failed, and how to align security with real business risk, not just checklists and audits.
You can read all of the Q&As straight from the source here, but we've distilled some key takeaways for you from the AMA. Read the full article here.
Thanks to our participants!
David Cross, (u/MrPKI), CISO, Atlassian
Kendra Cooley, (u/infoseccouple_Kendra), senior director of information security and IT, Doppel
Simon Goldsmith, (u/keepabluehead), CISO, OVO
Tony Martin-Vegue, (u/xargsplease), executive fellow, Cyentia Institute
Next up: “I had my budget cut and still reduced risk. Ask Me Anything”
Starting Sunday, January 25 on r/cybersecurity.
LIVE!
Cybersecurity Headlines - Department of Know
Our LIVE stream of The Department of Know happens every Monday at 4 PM ET / 1 PM PT. This week features CISO Series reporter and guest host, Sarah Lane, and a panel of security pros. Each week, we bring you the cybersecurity stories that actually matter, and the conversations you’ll be having at work all week long.
Monday’s episode featured Dmitriy Sokolovskiy, senior vice president, information security, Semrush, and Nick Espinosa, host, The Deep Dive Radio Show. Missed it? Watch the replay on YouTube and catch up on what’s shaping the week in security.
Join us again next week, and every Monday.
Thanks to our Cybersecurity Headlines sponsor, Dropzone AI
Cyber chatter from around the web...
Jump in on these conversations
“How screwed are we?” (More here)
“Instagram denies breach amid claims of 17 million account data leak” (More here)
“What's with all the 100% on-site roles I see in the US?” (More here)
Coming up in the weeks ahead on Super Cyber Friday:
Friday, January 23, 2026: “Hacking Past Mistakes”
Friday, January 30, 2026: “Hacking Employee Retention”
Save your spot and register for them all now!
Thank you for supporting CISO Series and all our programming
We love all kinds of support: listening, watching, contributions, What's Worse?! scenarios, telling your friends, sharing on social media, and most of all we love our sponsors!
Everything is available at cisoseries.com.
Interested in sponsorship? Contact me, David Spark.