<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="/stylesheet.xsl" type="text/xsl"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <atom:link rel="self" type="application/rss+xml" href="https://feeds.transistor.fm/chai-chat" title="MP3 Audio"/>
    <atom:link rel="hub" href="https://pubsubhubbub.appspot.com/"/>
    <podcast:podping usesPodping="true"/>
    <title>ChAI Chat</title>
    <generator>Transistor (https://transistor.fm)</generator>
    <itunes:new-feed-url>https://feeds.transistor.fm/chai-chat</itunes:new-feed-url>
    <description>ChAI Chat is your go-to podcast for navigating the fast-moving world of AI readiness, risk, and governance — exploring what it takes for organizations and communities to become truly AI-ready. In each episode, we unpack the latest developments in AI, dissect potential dangers and opportunities, and dive into how experts in AI security, compliance, and ethics build systems that are transparent, fair, and trustworthy. We bring together voices from technologists, policy thinkers, and organizational leaders to share real-world insights on governance strategies, data integrity, and ethical accountability. Whether you’re part of a startup, a nonprofit, or a large enterprise — if you care about preparing responsibly for the AI future — ChAI Chat is your space to learn, adapt, and act.</description>
    <copyright>© 2025 Jomar Gacoscos</copyright>
    <podcast:guid>deeb40f2-8572-583b-b63c-66e8516e5f74</podcast:guid>
    <podcast:locked>yes</podcast:locked>
    <language>en</language>
    <pubDate>Mon, 08 Dec 2025 21:38:15 -0800</pubDate>
    <lastBuildDate>Mon, 08 Dec 2025 21:39:07 -0800</lastBuildDate>
    <image>
      <url>https://img.transistorcdn.com/ZqNFX-GTFzq7pv-vP53ZiaHVoFghRPxMXAyOwYPuoEA/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZjFl/ZTY1ZGNiYmY2Yjc1/YWVmYmQ5ZGU2ZGNh/YWFiMS5wbmc.jpg</url>
      <title>ChAI Chat</title>
    </image>
    <itunes:category text="Technology"/>
    <itunes:category text="Business"/>
    <itunes:type>episodic</itunes:type>
    <itunes:author>Jomar Gacoscos</itunes:author>
    <itunes:image href="https://img.transistorcdn.com/ZqNFX-GTFzq7pv-vP53ZiaHVoFghRPxMXAyOwYPuoEA/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZjFl/ZTY1ZGNiYmY2Yjc1/YWVmYmQ5ZGU2ZGNh/YWFiMS5wbmc.jpg"/>
    <itunes:summary>ChAI Chat is your go-to podcast for navigating the fast-moving world of AI readiness, risk, and governance — exploring what it takes for organizations and communities to become truly AI-ready. In each episode, we unpack the latest developments in AI, dissect potential dangers and opportunities, and dive into how experts in AI security, compliance, and ethics build systems that are transparent, fair, and trustworthy. We bring together voices from technologists, policy thinkers, and organizational leaders to share real-world insights on governance strategies, data integrity, and ethical accountability. Whether you’re part of a startup, a nonprofit, or a large enterprise — if you care about preparing responsibly for the AI future — ChAI Chat is your space to learn, adapt, and act.</itunes:summary>
    <itunes:subtitle>ChAI Chat is your go-to podcast for navigating the fast-moving world of AI readiness, risk, and governance — exploring what it takes for organizations and communities to become truly AI-ready.</itunes:subtitle>
    <itunes:keywords>AI security, AI risk, AI compliance</itunes:keywords>
    <itunes:owner>
      <itunes:name>Jomar Gacoscos &amp; Rushabh Mehtalia</itunes:name>
    </itunes:owner>
    <itunes:complete>No</itunes:complete>
    <itunes:explicit>No</itunes:explicit>
    <item>
      <title>ChAI Chat Episode 4: Autonomous Vehicle Safety with Mateo Delgado</title>
      <itunes:episode>4</itunes:episode>
      <podcast:episode>4</podcast:episode>
      <itunes:title>ChAI Chat Episode 4: Autonomous Vehicle Safety with Mateo Delgado</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b5d41b97-02db-4bfa-8a22-4740b0c4491a</guid>
      <link>https://share.transistor.fm/s/1bc080d9</link>
      <description>
        <![CDATA[<p>In this episode, Jomar Gacoscos and guest Mateo Delgado delve into the complexities of autonomous vehicles, discussing their ethical implications, safety challenges, and the importance of infrastructure. They explore the various levels of autonomy, the role of cybersecurity, and the need for collaboration between companies and city planners to ensure the safe and effective deployment of this technology. The conversation highlights the current state of autonomous driving and the hurdles that still need to be overcome for widespread adoption.</p><p>Disclaimer: The views and opinions expressed in this podcast are solely those of the host and guest and do not necessarily reflect the official policies or positions of our respective employers or affiliated organizations. The content is intended for informational and entertainment purposes only and should not be construed as professional advice.<br>Additional Resources<br><a href="https://www.moralmachine.net/">https://www.moralmachine.net/</a><br><a href="https://www.media.mit.edu/projects/moral-machine/overview/">https://www.media.mit.edu/projects/moral-machine/overview/</a><br>Phil Koopman - How Safe Is Safe Enough? Measuring and Predicting Autonomous Vehicle Safety</p><p>Koopman P, Wagner M. - Autonomous Vehicle Safety: An Interdisciplinary Challenge</p><p>Koopman P. - Challenges in Autonomous Vehicle Testing and Validation</p><p>Autonomous Vehicle Safety: Lessons from Aviation by Jaynarayan H. Lala, Carl E. Landwehr, and John F. Meyer</p><p>NHTSA (National Highway Traffic Safety Administration) - Autonomous Vehicle Safety Reports and Audits</p><p>Chapters</p><p>00:00 Introduction to Autonomous Vehicles<br>03:40 Understanding Autonomous Vehicle Levels<br>11:55 Defining Safety in Autonomous Driving<br>19:07 Operational Design Domains and Environmental Challenges<br>27:06 Training Autonomous Vehicles for Diverse Environments<br>33:03 Cybersecurity Challenges in Autonomous Vehicles<br>38:37 Cybersecurity Controls in Autonomous Vehicles<br>46:52 Trust and Control in Autonomous Systems<br>57:52 Collaboration Between Companies and Infrastructure<br>01:08:17 The Future of Autonomous Vehicles and Road Safety</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Jomar Gacoscos and guest Mateo Delgado delve into the complexities of autonomous vehicles, discussing their ethical implications, safety challenges, and the importance of infrastructure. They explore the various levels of autonomy, the role of cybersecurity, and the need for collaboration between companies and city planners to ensure the safe and effective deployment of this technology. The conversation highlights the current state of autonomous driving and the hurdles that still need to be overcome for widespread adoption.</p><p>Disclaimer: The views and opinions expressed in this podcast are solely those of the host and guest and do not necessarily reflect the official policies or positions of our respective employers or affiliated organizations. The content is intended for informational and entertainment purposes only and should not be construed as professional advice.<br>Additional Resources<br><a href="https://www.moralmachine.net/">https://www.moralmachine.net/</a><br><a href="https://www.media.mit.edu/projects/moral-machine/overview/">https://www.media.mit.edu/projects/moral-machine/overview/</a><br>Phil Koopman - How Safe Is Safe Enough? Measuring and Predicting Autonomous Vehicle Safety</p><p>Koopman P, Wagner M. - Autonomous Vehicle Safety: An Interdisciplinary Challenge</p><p>Koopman P. - Challenges in Autonomous Vehicle Testing and Validation</p><p>Autonomous Vehicle Safety: Lessons from Aviation by Jaynarayan H. Lala, Carl E. Landwehr, and John F. Meyer</p><p>NHTSA (National Highway Traffic Safety Administration) - Autonomous Vehicle Safety Reports and Audits</p><p>Chapters</p><p>00:00 Introduction to Autonomous Vehicles<br>03:40 Understanding Autonomous Vehicle Levels<br>11:55 Defining Safety in Autonomous Driving<br>19:07 Operational Design Domains and Environmental Challenges<br>27:06 Training Autonomous Vehicles for Diverse Environments<br>33:03 Cybersecurity Challenges in Autonomous Vehicles<br>38:37 Cybersecurity Controls in Autonomous Vehicles<br>46:52 Trust and Control in Autonomous Systems<br>57:52 Collaboration Between Companies and Infrastructure<br>01:08:17 The Future of Autonomous Vehicles and Road Safety</p>]]>
      </content:encoded>
      <pubDate>Sun, 13 Apr 2025 20:41:03 -0700</pubDate>
      <author>Jomar Gacoscos</author>
      <enclosure url="https://media.transistor.fm/1bc080d9/36b2ef63.mp3" length="74479794" type="audio/mpeg"/>
      <itunes:author>Jomar Gacoscos</itunes:author>
      <itunes:image href="https://img.transistor.fm/x78K8glCjj7bffoR013myILmUiWgf-R1Y5wArkJh89w/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNmQx/N2E2OTdhYmJmYjUx/MDFmNTA4ZWMyMjZl/YTJmOS5wbmc.jpg"/>
      <itunes:duration>4654</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Jomar Gacoscos and guest Mateo Delgado delve into the complexities of autonomous vehicles, discussing their ethical implications, safety challenges, and the importance of infrastructure. They explore the various levels of autonomy, the role of cybersecurity, and the need for collaboration between companies and city planners to ensure the safe and effective deployment of this technology. The conversation highlights the current state of autonomous driving and the hurdles that still need to be overcome for widespread adoption.</p><p>Disclaimer: The views and opinions expressed in this podcast are solely those of the host and guest and do not necessarily reflect the official policies or positions of our respective employers or affiliated organizations. The content is intended for informational and entertainment purposes only and should not be construed as professional advice.<br>Additional Resources<br><a href="https://www.moralmachine.net/">https://www.moralmachine.net/</a><br><a href="https://www.media.mit.edu/projects/moral-machine/overview/">https://www.media.mit.edu/projects/moral-machine/overview/</a><br>Phil Koopman - How Safe Is Safe Enough? Measuring and Predicting Autonomous Vehicle Safety</p><p>Koopman P, Wagner M. - Autonomous Vehicle Safety: An Interdisciplinary Challenge</p><p>Koopman P. - Challenges in Autonomous Vehicle Testing and Validation</p><p>Autonomous Vehicle Safety: Lessons from Aviation by Jaynarayan H. Lala, Carl E. Landwehr, and John F. Meyer</p><p>NHTSA (National Highway Traffic Safety Administration) - Autonomous Vehicle Safety Reports and Audits</p><p>Chapters</p><p>00:00 Introduction to Autonomous Vehicles<br>03:40 Understanding Autonomous Vehicle Levels<br>11:55 Defining Safety in Autonomous Driving<br>19:07 Operational Design Domains and Environmental Challenges<br>27:06 Training Autonomous Vehicles for Diverse Environments<br>33:03 Cybersecurity Challenges in Autonomous Vehicles<br>38:37 Cybersecurity Controls in Autonomous Vehicles<br>46:52 Trust and Control in Autonomous Systems<br>57:52 Collaboration Between Companies and Infrastructure<br>01:08:17 The Future of Autonomous Vehicles and Road Safety</p>]]>
      </itunes:summary>
      <itunes:keywords>autonomous vehicles, safety, ethical dilemmas, technology, cybersecurity, infrastructure, driving, innovation, AI, transportation</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>ChAI Chat Episode 3: Voice Cloning and AI Safety</title>
      <itunes:episode>3</itunes:episode>
      <podcast:episode>3</podcast:episode>
      <itunes:title>ChAI Chat Episode 3: Voice Cloning and AI Safety</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">eca16f8a-9448-4e77-98e4-f47bb3baabff</guid>
      <link>https://share.transistor.fm/s/9b86776d</link>
      <description>
        <![CDATA[<p>In this episode of the ChAI Chat podcast, Jomar Gacoscos explores the fascinating world of AI voice cloning and synthetic voice technology. He discusses its applications in gaming and media, the potential risks associated with voice theft and identity fraud, and the importance of AI safety. The conversation highlights the rapid evolution of voice cloning technology and the need for preventive measures to mitigate its misuse.</p><p>Chapters</p><p>00:00 Introduction to AI Voice Cloning<br>03:51 The Dark Side of Voice Cloning<br>06:22 How ElevenLabs Is Trying to Prevent Voice Theft and Ensure Safety<br>08:03 Conclusion and Reflection on AI Voice Technology</p><p>Show Notes</p><ul><li><a href="https://elevenlabs.io/">https://elevenlabs.io/</a></li><li><a href="https://www.musicradar.com/music-industry/sony-music-says-it-has-removed-over-75-000-deepfake-tracks-from-streaming-platforms-which-include-voice-models-of-beyonce-harry-styles-and-queen">Sony Music says it has removed over 75,000 deepfake tracks from streaming platforms, which include voice models of Beyoncé, Harry Styles and Queen</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of the ChAI Chat podcast, Jomar Gacoscos explores the fascinating world of AI voice cloning and synthetic voice technology. He discusses its applications in gaming and media, the potential risks associated with voice theft and identity fraud, and the importance of AI safety. The conversation highlights the rapid evolution of voice cloning technology and the need for preventive measures to mitigate its misuse.</p><p>Chapters</p><p>00:00 Introduction to AI Voice Cloning<br>03:51 The Dark Side of Voice Cloning<br>06:22 How ElevenLabs Is Trying to Prevent Voice Theft and Ensure Safety<br>08:03 Conclusion and Reflection on AI Voice Technology</p><p>Show Notes</p><ul><li><a href="https://elevenlabs.io/">https://elevenlabs.io/</a></li><li><a href="https://www.musicradar.com/music-industry/sony-music-says-it-has-removed-over-75-000-deepfake-tracks-from-streaming-platforms-which-include-voice-models-of-beyonce-harry-styles-and-queen">Sony Music says it has removed over 75,000 deepfake tracks from streaming platforms, which include voice models of Beyoncé, Harry Styles and Queen</a></li></ul>]]>
      </content:encoded>
      <pubDate>Sat, 22 Mar 2025 10:55:17 -0700</pubDate>
      <author>Jomar Gacoscos</author>
      <enclosure url="https://media.transistor.fm/9b86776d/c62f5826.mp3" length="8561342" type="audio/mpeg"/>
      <itunes:author>Jomar Gacoscos</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/M1YK4pimsMhQIl5dHUfN_JfV7Tx_KtD_SzIX1Lwgyf4/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82OTBh/MThlMmU5YjcwNDI4/MzQ0N2I4NDQzZTFh/MGIyMi5wbmc.jpg"/>
      <itunes:duration>534</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of the ChAI Chat podcast, Jomar Gacoscos explores the fascinating world of AI voice cloning and synthetic voice technology. He discusses its applications in gaming and media, the potential risks associated with voice theft and identity fraud, and the importance of AI safety. The conversation highlights the rapid evolution of voice cloning technology and the need for preventive measures to mitigate its misuse.</p><p>Chapters</p><p>00:00 Introduction to AI Voice Cloning<br>03:51 The Dark Side of Voice Cloning<br>06:22 How ElevenLabs Is Trying to Prevent Voice Theft and Ensure Safety<br>08:03 Conclusion and Reflection on AI Voice Technology</p><p>Show Notes</p><ul><li><a href="https://elevenlabs.io/">https://elevenlabs.io/</a></li><li><a href="https://www.musicradar.com/music-industry/sony-music-says-it-has-removed-over-75-000-deepfake-tracks-from-streaming-platforms-which-include-voice-models-of-beyonce-harry-styles-and-queen">Sony Music says it has removed over 75,000 deepfake tracks from streaming platforms, which include voice models of Beyoncé, Harry Styles and Queen</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI security, AI risk, AI compliance</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>ChAI Chat Episode 2: AI Risk - An Overview</title>
      <itunes:episode>2</itunes:episode>
      <podcast:episode>2</podcast:episode>
      <itunes:title>ChAI Chat Episode 2: AI Risk - An Overview</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2ffed3c4-bf5c-4a01-99ac-9dc940ff35e2</guid>
      <link>https://share.transistor.fm/s/fbd07541</link>
      <description>
        <![CDATA[<p>In the second episode of <em>ChAI Chat</em>, host Jomar Gacoscos, an information security professional, explores AI risks and safety concerns in an era of rapid technological advancement. He discusses how AI can be manipulated, citing examples like the "Do Anything Now" (DAN) prompt, which bypassed ChatGPT’s safeguards, and a Chevrolet dealership chatbot tricked into making heavily discounted and supposedly legally binding agreements. The episode also highlights AI hallucinations in OpenAI’s Whisper transcription tool, which has been found to fabricate medical transcriptions with potentially dangerous consequences. Gacoscos emphasizes the importance of learning from real-world case studies and plans to feature guest experts to discuss AI security challenges and mitigation strategies.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In the second episode of <em>ChAI Chat</em>, host Jomar Gacoscos, an information security professional, explores AI risks and safety concerns in an era of rapid technological advancement. He discusses how AI can be manipulated, citing examples like the "Do Anything Now" (DAN) prompt, which bypassed ChatGPT’s safeguards, and a Chevrolet dealership chatbot tricked into making heavily discounted and supposedly legally binding agreements. The episode also highlights AI hallucinations in OpenAI’s Whisper transcription tool, which has been found to fabricate medical transcriptions with potentially dangerous consequences. Gacoscos emphasizes the importance of learning from real-world case studies and plans to feature guest experts to discuss AI security challenges and mitigation strategies.</p>]]>
      </content:encoded>
      <pubDate>Fri, 14 Mar 2025 05:00:00 -0700</pubDate>
      <author>Jomar Gacoscos</author>
      <enclosure url="https://media.transistor.fm/fbd07541/ac31e715.mp3" length="4216664" type="audio/mpeg"/>
      <itunes:author>Jomar Gacoscos</itunes:author>
      <itunes:image href="https://img.transistor.fm/gAws9YxzYbrRtm6GI5GrvILL1v5pCstsaEectKtzSNM/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81Nzc2/MDg0Y2MwYmNjZTJl/ZTFmZTI3YjY3ODJm/ZGFhNS5wbmc.jpg"/>
      <itunes:duration>262</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In the second episode of <em>ChAI Chat</em>, host Jomar Gacoscos, an information security professional, explores AI risks and safety concerns in an era of rapid technological advancement. He discusses how AI can be manipulated, citing examples like the "Do Anything Now" (DAN) prompt, which bypassed ChatGPT’s safeguards, and a Chevrolet dealership chatbot tricked into making heavily discounted and supposedly legally binding agreements. The episode also highlights AI hallucinations in OpenAI’s Whisper transcription tool, which has been found to fabricate medical transcriptions with potentially dangerous consequences. Gacoscos emphasizes the importance of learning from real-world case studies and plans to feature guest experts to discuss AI security challenges and mitigation strategies.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI security, AI risk, AI compliance</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>ChAI Chat Episode 1: Ads Dawson - AI Security and OWASP Top 10 for LLMs</title>
      <itunes:episode>1</itunes:episode>
      <podcast:episode>1</podcast:episode>
      <itunes:title>ChAI Chat Episode 1: Ads Dawson - AI Security and OWASP Top 10 for LLMs</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c4a0cbef-96f3-4e5b-880e-32efb3a3612b</guid>
      <link>https://youtu.be/J7gZbO5EzhE</link>
      <description>
        <![CDATA[<p>In this episode of the ChAI Chat podcast, host Jomar Gacoscos welcomes Ads Dawson, a Staff AI Security Researcher. They discuss their first meeting at DEFCON and Ads' journey from information security to AI security. Ads shares his insights on his contributions to OWASP and the ethical considerations surrounding AI security, emphasizing the importance of understanding vulnerabilities in AI applications. Ads and Jomar Gacoscos also delve into the complexities of vulnerabilities in LLM applications, focusing on the OWASP Top 10 for LLMs, particularly LLM08: Excessive Agency and LLM10: Model Theft. They discuss the implications of Anthropic's new computer use feature and the associated security risks.</p><p>Resources and Links</p><p>Podcast Guest, AI Security Researcher Ads Dawson (aka GangGreenTemperTatum)</p><p><a href="https://www.linkedin.com/in/adamdawson0/">https://www.linkedin.com/in/adamdawson0/</a></p><p><a href="https://github.com/GangGreenTemperTatum">https://github.com/GangGreenTemperTatum</a></p><p>AI Security Researcher Johann Rehberger (aka Embrace The Red)</p><p><a href="https://embracethered.com/blog/">https://embracethered.com/blog/</a></p><p><a href="https://x.com/wunderwuzzi23">https://x.com/wunderwuzzi23</a></p><p>OWASP Top 10 for LLMs</p><p><a href="https://owasp.org/www-project-top-10-for-large-language-model-applications/">https://owasp.org/www-project-top-10-for-large-language-model-applications/</a></p><p>Proof Pudding (CVE-2019-20634)</p><p><a href="https://avidml.org/database/avid-2023-v009/">https://avidml.org/database/avid-2023-v009/</a></p><p><a href="https://github.com/moohax/Proof-Pudding">https://github.com/moohax/Proof-Pudding</a></p><p>Chapters</p><p>0:00 Introduction to the ChAI Chat Podcast<br>2:47 Meeting at DEFCON - A Unique Experience<br>5:46 Ads Dawson's Journey into AI Security<br>8:53 Transition into AI Security<br>11:49 Understanding Cybersecurity and AI Security Intersections<br>14:58 Contributions to OWASP and AI Security Projects<br>17:36 Exploring Vulnerabilities in AI Applications<br>23:15 Understanding OWASP Vulnerabilities in LLM Applications<br>23:53 Exploring the Excessive Agency Vulnerability (LLM08)<br>28:10 Model Theft (LLM10) and Its Implications<br>34:23 Anthropic's Computer Use Feature and Security Risks<br>42:54 Community Engagement and Networking in InfoSec</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of the ChAI Chat podcast, host Jomar Gacoscos welcomes Ads Dawson, a Staff AI Security Researcher. They discuss their first meeting at DEFCON and Ads' journey from information security to AI security. Ads shares his insights on his contributions to OWASP and the ethical considerations surrounding AI security, emphasizing the importance of understanding vulnerabilities in AI applications. Ads and Jomar Gacoscos also delve into the complexities of vulnerabilities in LLM applications, focusing on the OWASP Top 10 for LLMs, particularly LLM08: Excessive Agency and LLM10: Model Theft. They discuss the implications of Anthropic's new computer use feature and the associated security risks.</p><p>Resources and Links</p><p>Podcast Guest, AI Security Researcher Ads Dawson (aka GangGreenTemperTatum)</p><p><a href="https://www.linkedin.com/in/adamdawson0/">https://www.linkedin.com/in/adamdawson0/</a></p><p><a href="https://github.com/GangGreenTemperTatum">https://github.com/GangGreenTemperTatum</a></p><p>AI Security Researcher Johann Rehberger (aka Embrace The Red)</p><p><a href="https://embracethered.com/blog/">https://embracethered.com/blog/</a></p><p><a href="https://x.com/wunderwuzzi23">https://x.com/wunderwuzzi23</a></p><p>OWASP Top 10 for LLMs</p><p><a href="https://owasp.org/www-project-top-10-for-large-language-model-applications/">https://owasp.org/www-project-top-10-for-large-language-model-applications/</a></p><p>Proof Pudding (CVE-2019-20634)</p><p><a href="https://avidml.org/database/avid-2023-v009/">https://avidml.org/database/avid-2023-v009/</a></p><p><a href="https://github.com/moohax/Proof-Pudding">https://github.com/moohax/Proof-Pudding</a></p><p>Chapters</p><p>0:00 Introduction to the ChAI Chat Podcast<br>2:47 Meeting at DEFCON - A Unique Experience<br>5:46 Ads Dawson's Journey into AI Security<br>8:53 Transition into AI Security<br>11:49 Understanding Cybersecurity and AI Security Intersections<br>14:58 Contributions to OWASP and AI Security Projects<br>17:36 Exploring Vulnerabilities in AI Applications<br>23:15 Understanding OWASP Vulnerabilities in LLM Applications<br>23:53 Exploring the Excessive Agency Vulnerability (LLM08)<br>28:10 Model Theft (LLM10) and Its Implications<br>34:23 Anthropic's Computer Use Feature and Security Risks<br>42:54 Community Engagement and Networking in InfoSec</p>]]>
      </content:encoded>
      <pubDate>Tue, 19 Nov 2024 10:00:00 -0800</pubDate>
      <author>Jomar Gacoscos</author>
      <enclosure url="https://media.transistor.fm/88724d08/458f90c1.mp3" length="43786407" type="audio/mpeg"/>
      <itunes:author>Jomar Gacoscos</itunes:author>
      <itunes:duration>2736</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of the ChAI Chat podcast, host Jomar Gacoscos welcomes Ads Dawson, a Staff AI Security Researcher. They discuss their first meeting at DEFCON and Ads' journey from information security to AI security. Ads shares his insights on his contributions to OWASP and the ethical considerations surrounding AI security, emphasizing the importance of understanding vulnerabilities in AI applications. Ads and Jomar Gacoscos also delve into the complexities of vulnerabilities in LLM applications, focusing on the OWASP Top 10 for LLMs, particularly LLM08: Excessive Agency and LLM10: Model Theft. They discuss the implications of Anthropic's new computer use feature and the associated security risks.</p><p>Resources and Links</p><p>Podcast Guest, AI Security Researcher Ads Dawson (aka GangGreenTemperTatum)</p><p><a href="https://www.linkedin.com/in/adamdawson0/">https://www.linkedin.com/in/adamdawson0/</a></p><p><a href="https://github.com/GangGreenTemperTatum">https://github.com/GangGreenTemperTatum</a></p><p>AI Security Researcher Johann Rehberger (aka Embrace The Red)</p><p><a href="https://embracethered.com/blog/">https://embracethered.com/blog/</a></p><p><a href="https://x.com/wunderwuzzi23">https://x.com/wunderwuzzi23</a></p><p>OWASP Top 10 for LLMs</p><p><a href="https://owasp.org/www-project-top-10-for-large-language-model-applications/">https://owasp.org/www-project-top-10-for-large-language-model-applications/</a></p><p>Proof Pudding (CVE-2019-20634)</p><p><a href="https://avidml.org/database/avid-2023-v009/">https://avidml.org/database/avid-2023-v009/</a></p><p><a href="https://github.com/moohax/Proof-Pudding">https://github.com/moohax/Proof-Pudding</a></p><p>Chapters</p><p>0:00 Introduction to the ChAI Chat Podcast<br>2:47 Meeting at DEFCON - A Unique Experience<br>5:46 Ads Dawson's Journey into AI Security<br>8:53 Transition into AI Security<br>11:49 Understanding Cybersecurity and AI Security Intersections<br>14:58 Contributions to OWASP and AI Security Projects<br>17:36 Exploring Vulnerabilities in AI Applications<br>23:15 Understanding OWASP Vulnerabilities in LLM Applications<br>23:53 Exploring the Excessive Agency Vulnerability (LLM08)<br>28:10 Model Theft (LLM10) and Its Implications<br>34:23 Anthropic's Computer Use Feature and Security Risks<br>42:54 Community Engagement and Networking in InfoSec</p>]]>
      </itunes:summary>
      <itunes:keywords>ChAI Chat, DEFCON, AI Security, Cybersecurity, Networking, Vulnerabilities, Machine Learning, Ethical AI, Podcast, OWASP, LLM applications, excessive agency, model theft, Anthropic, security risks, AI security, community engagement, InfoSec, vulnerabilities</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
  </channel>
</rss>
