[11-30-23]--Join us tomorrow for “Hacking Trust Management”

CISO Series

Super Cyber Fridays!

Join us TOMORROW, Friday [12-01-23], for "Hacking Trust Management"

Join us Friday, December 1, 2023, for “Hacking Trust Management: An hour of critical thinking on how to prove you’re the company others want to work with.”

It all begins at 1 PM ET/10 AM PT on Friday, December 1, 2023, with guests Matt Cooper, senior manager of privacy risk and compliance at Vanta, and Janet Heins, CISSP, CISO at ChenMed. We'll have a fun conversation and games, plus at the end of the hour (2 PM ET/11 AM PT) we'll hold our meetup.

Thanks to our Super Cyber Friday sponsor, Vanta

Defense in Depth

Mitigating Generative AI Risks

As with any new technology, generative AI comes with a set of risks. So how can we address these risks to take advantage of its benefits?

Check out this post for the discussion that is the basis of our conversation on this week's episode, co-hosted by me, David Spark, the producer of CISO Series, and Geoff Belknap, CISO, LinkedIn. Joining us is our guest, Jerich Beason, CISO, WM. Jerich has just launched a LinkedIn Learning course on securing generative AI, and it's now available. Check out this promotional video.

Staying ahead with awareness and ethics

In the world of AI and LLMs (large language models), being proactive is critical. "The first step is to understand how LLMs are already being used and prioritize risks that are critical for your use cases," said Sandesh Mysore Anand from Razorpay. Understanding current uses is crucial, but it can't be done without also considering ethical guardrails. Drawing inspiration from Isaac Asimov's Three Laws of Robotics, Adam Dennis from AntiguaRecon suggests a "novel" approach: "We should consider creating an adversarial AI model to enforce a simple set of moral/ethical standards for AI." This twofold strategy, awareness of existing usage plus the introduction of ethical safeguards, creates a robust framework for navigating the AI landscape responsibly.
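
To make that suggestion a little more concrete, here is a minimal, hypothetical sketch of the "adversarial guardrail" pattern: a second reviewer checks the primary model's draft against a small policy before anything is released. Everything below (the policy list, the `primary_model` and `guardrail_model` stand-ins, the keyword check) is illustrative only; a real implementation would use a second LLM or classifier as the reviewer, not a keyword match.

```python
# Illustrative sketch of an "adversarial guardrail": a second reviewer checks
# the primary model's draft against a small policy before release.
# All names and rules below are hypothetical stand-ins, not a real product.

POLICY = [
    "no personally identifiable information",
    "no proprietary source code",
    "no credentials or secrets",
]

def primary_model(prompt: str) -> str:
    """Stand-in for whatever LLM the organization actually uses."""
    return f"Draft answer to: {prompt}"

def guardrail_model(candidate: str, policy: list[str]) -> tuple[bool, str]:
    """Stand-in adversarial reviewer. A real reviewer would be a second LLM or
    classifier prompted to argue whether the candidate violates each policy
    rule; a toy keyword match keeps this sketch self-contained and runnable."""
    banned = {"password", "api key", "social security number"}
    hits = [word for word in banned if word in candidate.lower()]
    if hits:
        return False, "possible secrets: " + ", ".join(hits)
    return True, f"cleared against {len(policy)} policy rules"

def answer(prompt: str) -> str:
    draft = primary_model(prompt)
    ok, rationale = guardrail_model(draft, POLICY)
    return draft if ok else f"[blocked by guardrail: {rationale}]"

if __name__ == "__main__":
    print(answer("Summarize our quarterly security review."))
```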

The need for verification

With everyone trying out these new tools, how do you address authenticity and accountability? "I have CISOs reaching out to me each week with concerns because their devs are using these tools but are completely blindsided by their inability to locate the original author within these code snippets," cautioned Kristen Bianchi of Threatrix. On the verification front, Yuri Soldatenkov from Kemper Development Company points out the need for "AI resistant" verification mechanisms, similar to today's phishing-resistant MFA.

New challenges deserve new frameworks

There's a lot of fear about what these LLMs could release. "These LLMs are basically spitting out the information we feed it with; in a matter of time proprietary information will be fed to the public," cautions Ogaga Umukoro from Newtopia. To provide a structured approach to managing these concerns, Eric Silberman from USDA suggested looking at resources like NIST's AI Risk Management Framework.

Handling this double-edged sword

"Generative AI can be a double-edged sword but by proactively leaning into security and compliance, we can ensure the longevity of this breakthrough technology," said Varun Grover from Veritas Technologies. Organizations need to embrace the proactive tools this technology can enable. "Consider an AI tool that monitors our communications in real-time, intercepting vishing or social engineering attempts on employees. It's akin to the spam and phishing email detectors we use now but applied to live voice conversations," said Chad B. from United Patriot Coin. 

Please listen to the full episode on your favorite podcast app, or over on our blog where you can read the full transcript. If you’re not already subscribed to the Defense in Depth podcast, please go ahead and subscribe now.

Thanks to our podcast sponsor, SpyCloud

LIVE!

 Cyber Security Headlines - Week in Review 

Make sure you register on YouTube to join the LIVE "Week In Review" this Friday for Cyber Security Headlines with CISO Series reporter Richard Stroffolino. We do it this and every Friday at 3:30 PM ET/12:30 PM PT for a short 20-minute discussion of the week's cyber news. Our guest will be Christina Shannon, CIO, KIK Consumer Products.

Thanks to this week's headlines sponsor, SpyCloud

Cyber chatter from around the web...

Jump in on these conversations 

"How would you inspire/re-inspire SOC staff to grow their practical knowledge?" (More here)"Success stories on reporting cyber attacks to law enforcement?" (More here)"What makes a good cybersecurity talk at a conference for you" (More here)

Coming Up On Super Cyber Friday...

Coming up in the weeks ahead on Super Cyber Friday we have:

  • [12-01-23] Hacking Trust Management

  • [12-08-23] Hacking Cyber Resilience

  • [12-15-23] Hacking the SaaS Security Journey

Save your spot and register for them all now!

Ask Me Anything!

AMA: I’m a security professional leading a 1-3 person security team, Ask Me Anything...

Supporting hundreds if not thousands of people with a small security staff seems to be a daunting task, but these security professionals have done it (or are currently doing it). They're all ready to share their experiences of pulling it off, dealing with the stress, and managing growth pains.

This AMA will run all week from 11-26-23 to 12-02-23.

Thank you!

Thank you for supporting CISO Series and all our programming.

We love all kinds of support: listening, watching, contributions, What's Worse?! scenarios, telling your friends, sharing on social media, and most of all we love our sponsors!

Everything is available at cisoseries.com.

Interested in sponsorship? Contact me, David Spark.