[09-26-24]--Join us tomorrow for “Hacking Alerts”
Super Cyber Fridays!
Join us TOMORROW, Friday [09-27-24], for "Hacking Alerts"
Join us Friday, September 27, 2024, for “Hacking Alerts: An hour of critical thinking about triaging the deluge hitting your SOC.”
It all begins at 1 PM ET/10 AM PT on Friday, September 27, 2024, with guests Itai Tevet, CEO, Intezer, and Russ Ayres, deputy CISO & head of cyber, Equifax. We'll have a fun conversation and games, plus at the end of the hour (2 PM ET/11 AM PT) we'll do our meetup.
Thanks to our Super Cyber Friday sponsor, Intezer
Defense in Depth
Is It Possible to Inject Integrity Into AI?
With generative AI systems we’re concerned about the quality and reliability of the output. But do we risk losing sight of the overall integrity of these systems when we only focus on outputs?
Check out this post for the discussion that is the basis of our conversation on this week’s episode co-hosted by me, David Spark, the producer of CISO Series, and Geoff Belknap. Joining us is Davi Ottenheimer, vp, trust and digital ethics, Inrupt.
Sir Tim Berners-Lee co-founded Inrupt to provide enterprise-grade software and services for the Solid Protocol.
LLMs lack integrity controls
Integrity controls aren’t just an ethical issue or nicety for devs to dig into. Organizations are effectively flying blind without them. "Implementing integrity controls in AI is akin to equipping it with a dependable compass and map. These controls guide AI through the intricate landscape of data, ensuring ethical considerations are met and trustworthy outcomes are achieved. Without these controls, AI operates blindly, increasing the risk of biases, mistrust, and inevitable disorder," said Lars Paul Hansen of Danske Bank. As Edward Contreras, CISO of Frost Bank, reminded us, integrity is the bedrock of confidence in these models, saying, "Similar to development, peer reviews are critical to validation. I can see a similar control for AI where automated peer reviews bumped up against a security LLM can provide ‘higher credibility’ in the output. The goal of AI should be greater efficiency and confidence in the output."
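To make that "automated peer review" idea concrete, here is a minimal sketch of how a second, independently controlled reviewer model could gate a primary model's output before release. Everything in it, from the function names to the 0.8 credibility threshold, is an illustrative assumption, not any vendor's actual pipeline.

```python
# Hypothetical sketch: a "security LLM" peer-reviews the primary
# model's answer, and low-credibility output fails closed to a human.

def primary_model(prompt: str) -> str:
    # Stand-in for a call to the production LLM.
    return f"Draft answer to: {prompt}"

def reviewer_model(prompt: str, answer: str) -> float:
    # Stand-in for a separate reviewer model that returns a
    # credibility score between 0.0 and 1.0 for the answer.
    return 0.9 if answer else 0.0

def peer_reviewed_answer(prompt: str, threshold: float = 0.8) -> str:
    answer = primary_model(prompt)
    score = reviewer_model(prompt, answer)
    if score < threshold:
        # Fail closed: route low-credibility output to a human
        # review queue instead of returning it to the user.
        return "Answer withheld pending human review."
    return answer

print(peer_reviewed_answer("Summarize this week's SOC alerts"))
```

The point of the design is that the reviewer runs under separate controls from the model it reviews, which is what makes the "peer" in peer review meaningful.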
A valid criticism
One of the things holding up wider LLM adoption in the enterprise is a lack of controls as you get closer to the model and training data. For Steve Zalewski, co-host of Defense in Depth, this validity concern is the focus, saying, "I would argue that the problem is not one of ‘integrity’, rather it is ‘curated validity’ of the data you are letting it ingest. You have to have some governance in place to prevent egregious abuse of the learning models. So while integrity is important, curated validity means you are applying the appropriate business filters to obtain output that is more likely to meet the expectations of the business." While validity and integrity are critical for business apps, LLMs will be used in a wide variety of use cases. "Integrity is only the central problem in an AI system where the output is expected to conform to some idea of ‘true’ or ‘right’ but not all AI systems need or want to be right. Some applications will need these controls, but a key risk of believing that they are somehow integral to the development of AI systems is that we stifle the development of the systems themselves," said Aaron Stanley of dbt Labs.
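As a rough illustration of Zalewski's "curated validity," here is a sketch of an ingestion gate that applies business filters before data ever reaches the learning model. The record fields and rules below are invented for the example; real governance policies would be organization-specific.

```python
# Hypothetical sketch: only records that pass explicit governance
# filters are admitted into the training corpus.

APPROVED_SOURCES = {"internal-wiki", "vetted-vendor-feed"}

def passes_governance(record: dict) -> bool:
    # Require an approved source, a named owner, and no
    # restricted classification label.
    return (
        record.get("source") in APPROVED_SOURCES
        and record.get("owner") is not None
        and record.get("classification") != "restricted"
    )

corpus = [
    {"source": "internal-wiki", "owner": "it-ops",
     "classification": "public", "text": "..."},
    {"source": "scraped-forum", "owner": None,
     "classification": "public", "text": "..."},
]

curated = [r for r in corpus if passes_governance(r)]
print(f"Ingesting {len(curated)} of {len(corpus)} records")
```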
Doubts in self-policing AI
Makers of LLMs are quick to say they have established AI guardrails and integrity protocols. However, these are largely based on using LLMs to police other LLMs. The problems with that seem obvious to Jared Mendenhall, CISO of Impossible Foods, saying, "While improving data quality, establishing ethical guidelines, and citing sources are crucial for enhancing AI output, AI systems can still hallucinate. We lack control over much of the data in use, and what queries were used to generate the output. Programming the AI to manage its own integrity control feels like a ‘fox guarding the henhouse’ situation. Third-party validation is essential for a true integrity check." The ease of accessing AI tools paired with their transformative capabilities presents a rare opportunity for both good and bad. "AI will help exponentially distribute misinformation without proper controls for garbage answers and validation. We are in the very early stages of this journey. It is going to be a wild ride and an interesting one. AI is one of those tools that has the capacity for achieving great and terrible things at the same time," said James Bowie, CISO of Tampa General Hospital.
New tech, familiar problems
The hype around AI makes it easy to forget that we’ve adapted to new technology before. We just need the will to apply those lessons. "There is merit to the concept of a set of quality measures that an individual or enterprise can use to factor into access or usage controls. Similar to peer reviews of research papers. Multiple factors could be produced with various (open) formulas and then leveraged by other systems in selection processes,” said Phillip Miller, CISO of Qurple. For Nir Rothenberg, CISO of Rapyd, the solution is even simpler: organizations and consumers will flock to the most reliable models. "We've been dealing with ‘garbage in, garbage out’ for decades in computing. AI is just a more complex version of the same problem. The market will self-correct. Companies that build trustworthy AI will thrive, those that don't will fail. No need for endless academic debates, let the market sort it out."
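For a sense of how Miller's multi-factor quality measure might look in practice, here is a minimal sketch that combines a few openly published factors into one score a selection process could rank on. The factor names and weights are illustrative assumptions, not an established formula.

```python
# Hypothetical sketch: a weighted quality score over open factors,
# used by a downstream system to pick between models.

WEIGHTS = {"provenance": 0.4, "peer_review": 0.35, "freshness": 0.25}

def quality_score(factors: dict) -> float:
    # Weighted average; each factor is expected in [0.0, 1.0].
    return sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)

models = {
    "model-a": {"provenance": 0.9, "peer_review": 0.8, "freshness": 0.6},
    "model-b": {"provenance": 0.5, "peer_review": 0.4, "freshness": 0.9},
}

# A selection process could simply prefer the highest-scoring model.
best = max(models, key=lambda name: quality_score(models[name]))
print(best, round(quality_score(models[best]), 2))
```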
Please listen to the full episode on your favorite podcast app, or over on our blog where you can read the full transcript. If you’re not already subscribed to the Defense in Depth podcast, please go ahead and subscribe now.
Thanks to our podcast sponsor, Concentric AI
Subscribe
Subscribe to Defense in Depth podcast
Please subscribe via Apple Podcasts, Spotify, YouTube Music, Amazon Music, Pocket Casts, RSS, or just type "Defense in Depth" into your favorite podcast app.
LIVE!
Cyber Security Headlines - Week in Review
Make sure you register on YouTube to join the LIVE "Week In Review" this Friday for Cyber Security Headlines with CISO Series reporter Richard Stroffolino. We do it this and every Friday at 3:30 PM ET/12:30 PM PT for a short 20-minute discussion of the week's cyber news. Our guest will be Jason Elrod, CISO, MultiCare Health System.
Thanks to our Cyber Security Headlines sponsor, Vanta
LIVE!
CISO Series Podcast LIVE in La Jolla (10-30-24)
The CISO Series Podcast is celebrating spooky season the only way we know how, with another live podcast recording!
We're recording a podcast episode at the Planet Cyber Sec CISO-CIO Forum. Joining me on stage for the recording will be Gary Hayslip, CISO, SoftBank Investment Advisers, and Keith McCartney, vp, security and IT, DNAnexus. Here's everything you need to know:
WHERE: La Jolla, California
WHEN: October 30, 2024. The event runs from 9:00 AM to 6:00 PM, but we'll be recording at 5:00 PM.
This event is invitation only for qualified CISOs, directors of information security, CIOs, and their deputies. Apply and register to attend HERE.
Thanks to our sponsor, Entro
Cyber chatter from around the web...
Jump in on these conversations
"So, about the exploding pagers." (More here)
"What does a job in Cybersecurity actually imply? walk me through a normal day at the office" (More here)
"Are consumers actually affected by data breaches?" (More here)
Coming Up On Super Cyber Friday...
Coming up in the weeks ahead on Super Cyber Friday we have:
[09-27-24] Hacking Alerts
[10-04-24] Hacking Job Stagnation
Save your spot and register for them all now!
Thank you!
Thank you for supporting CISO Series and all our programming
We love all kinds of support: listening, watching, contributions, What's Worse?! scenarios, telling your friends, sharing on social media, and most of all we love our sponsors!
Everything is available at cisoseries.com.
Interested in sponsorship? Contact me, David Spark.