<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="/stylesheet.xsl" type="text/xsl"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <atom:link rel="self" type="application/rss+xml" href="https://feeds.transistor.fm/ai-security-ops" title="MP3 Audio"/>
    <atom:link rel="hub" href="https://pubsubhubbub.appspot.com/"/>
    <podcast:podping usesPodping="true"/>
    <title>AI Security Ops</title>
    <generator>Transistor (https://transistor.fm)</generator>
    <itunes:new-feed-url>https://feeds.transistor.fm/ai-security-ops</itunes:new-feed-url>
    <description>A weekly podcast exploring how AI is transforming cybersecurity: emerging threats, tools, and trends, with practical takeaways listeners can apply to secure coding and business risk mitigation.</description>
    <copyright>© 2025 Black Hills Information Security</copyright>
    <podcast:guid>0543e6d6-b875-52ac-bb17-d2b531254cfe</podcast:guid>
    <podcast:podroll>
      <podcast:remoteItem feedGuid="7009cf9d-3ce4-5dc6-8ada-5fc1dabb887b" feedUrl="https://feeds.transistor.fm/talkin-bout-infosec-news"/>
    </podcast:podroll>
    <podcast:locked>yes</podcast:locked>
    <podcast:trailer pubdate="Wed, 24 Dec 2025 08:00:00 -0500" url="https://media.transistor.fm/316653b6/6ebca129.mp3" length="4169873" type="audio/mpeg">AI Security Ops - Why Did We Create This Podcast? | Podcast Trailer</podcast:trailer>
    <language>en-us</language>
    <pubDate>Fri, 27 Mar 2026 15:30:15 -0400</pubDate>
    <lastBuildDate>Sat, 28 Mar 2026 01:04:19 -0400</lastBuildDate>
    <link>https://aisecurityops.transistor.fm</link>
    <image>
      <url>https://img.transistorcdn.com/9ziXpU-Xx8IwFIqNBqmAG5wnRRzkBikBaMGdqXXSw9w/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zYjBm/MzE1MWI2YmE4ZGJh/MDQ3MmJkMTkxZGNl/MjBjNS5wbmc.jpg</url>
      <title>AI Security Ops</title>
      <link>https://aisecurityops.transistor.fm</link>
    </image>
    <itunes:category text="Education"/>
    <itunes:category text="News">
      <itunes:category text="Tech News"/>
    </itunes:category>
    <itunes:type>episodic</itunes:type>
    <itunes:author>Black Hills Information Security</itunes:author>
    <itunes:image href="https://img.transistorcdn.com/9ziXpU-Xx8IwFIqNBqmAG5wnRRzkBikBaMGdqXXSw9w/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zYjBm/MzE1MWI2YmE4ZGJh/MDQ3MmJkMTkxZGNl/MjBjNS5wbmc.jpg"/>
    <itunes:summary>A weekly podcast exploring how AI is transforming cybersecurity: emerging threats, tools, and trends, with practical takeaways listeners can apply to secure coding and business risk mitigation.</itunes:summary>
    <itunes:subtitle>Weekly conversations on how AI is transforming cybersecurity: emerging threats, tools, and practical defenses.</itunes:subtitle>
    <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
    <itunes:owner>
      <itunes:name>Black Hills Information Security</itunes:name>
      <itunes:email>marketing@blackhillsinfosec.com</itunes:email>
    </itunes:owner>
    <itunes:explicit>No</itunes:explicit>
    <item>
      <title>Embedding Space Attacks | Episode 45</title>
      <itunes:episode>45</itunes:episode>
      <podcast:episode>45</podcast:episode>
      <itunes:title>Embedding Space Attacks | Episode 45</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">e1e0a047-8183-4ad1-9a57-44d1f7d8c104</guid>
      <link>https://share.transistor.fm/s/61d8bc24</link>
      <description>
        <![CDATA[<p>In this episode of BHIS Presents: AI Security Ops, the team explores embedding space attacks — a lesser-known but increasingly important threat in modern AI systems — and how attackers can manipulate the mathematical foundations of how models understand data.</p><p>Unlike prompt injection, which targets instructions, embedding attacks operate at a deeper level by influencing how data is represented, retrieved, and interpreted inside vector spaces. By subtly altering embeddings or poisoning data sources, attackers can manipulate AI behavior without ever touching the model directly.</p><p>Through a hands-on walkthrough of a custom notebook with rich visualizations, this episode breaks down how embeddings work, why they are critical to LLM-powered systems like RAG pipelines, and how attackers can exploit them in real-world scenarios.</p><p>We dig into:<br>- What embeddings are and how AI systems convert text into numerical representations<br>- How vector spaces enable similarity search and retrieval in LLM applications<br>- What embedding space attacks are and why they matter for AI security<br>- How small perturbations in data can drastically change model behavior<br>- The risks of poisoned data in RAG and vector databases<br>- How attackers can influence search results and downstream AI outputs<br>- Why these attacks are subtle, hard to detect, and often overlooked<br>- The role of visualization in understanding embedding behavior<br>- Real-world implications for AI-powered applications and workflows<br>- Defensive considerations when building with embeddings and vector stores</p><p>This episode focuses on the foundational layer of AI systems, showing how security risks extend beyond prompts and into the underlying data representations that power modern AI.</p><p>⸻</p><p>📚 Key Concepts Covered</p><p>AI Foundations<br>- Embeddings and vector representations<br>- Similarity search and vector space reasoning</p><p>AI Security Risks<br>- Embedding 
space manipulation<br>- Data poisoning in vector databases<br>- Retrieval manipulation in RAG systems</p><p>Applications &amp; Impact<br>- LLM-powered search and assistants<br>- AI pipelines using embeddings<br>- Risks in production AI systems</p><p>#AISecurity #Embeddings #CyberSecurity #LLMSecurity #AIThreats #BHIS #AIAgents #ArtificialIntelligence #InfoSec</p><p>Join the 5,000+ cybersecurity professionals on our BHIS Discord server to ask questions and share your knowledge about AI Security. <br><a href="https://discord.gg/bhis">https://discord.gg/bhis</a></p><p><strong><ul><li>(00:00) - Intro &amp; Episode Overview</li>
<li>(01:39) - What Are Embeddings? (AI Only Understands Numbers)</li>
<li>(03:44) - The Embedding Process (Text → Vectors)</li>
<li>(07:43) - Similarity, Classification &amp; Vector Math</li>
<li>(09:55) - Visualizing Embedding Space (2D Projection)</li>
<li>(14:29) - Classifiers</li>
<li>(15:39) - Playing Games with Information</li>
<li>(18:06) - Attack Techniques: Synonyms &amp; Context Manipulation</li>
<li>(20:29) - Context Padding</li>
<li>(27:10) - Collision Attacks, Defenses &amp; Final Thoughts</li>
</ul><br><a href="https://www.youtube.com/watch?v=MO08G1z6-II" title="Click here to watch this episode on YouTube.">Click here to watch this episode on YouTube.</a><br>
<br><strong>Creators &amp; Guests</strong>
<ul>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/bronwen-aker">Bronwen Aker</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/derek-banks">Derek Banks</a> - Host</li>
</ul></strong><br>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p><p><strong><a href="https://share.transistor.fm/s/61d8bc24/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
</strong><br></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of BHIS Presents: AI Security Ops, the team explores embedding space attacks — a lesser-known but increasingly important threat in modern AI systems — and how attackers can manipulate the mathematical foundations of how models understand data.</p><p>Unlike prompt injection, which targets instructions, embedding attacks operate at a deeper level by influencing how data is represented, retrieved, and interpreted inside vector spaces. By subtly altering embeddings or poisoning data sources, attackers can manipulate AI behavior without ever touching the model directly.</p><p>Through a hands-on walkthrough of a custom notebook with rich visualizations, this episode breaks down how embeddings work, why they are critical to LLM-powered systems like RAG pipelines, and how attackers can exploit them in real-world scenarios.</p><p>We dig into:<br>- What embeddings are and how AI systems convert text into numerical representations<br>- How vector spaces enable similarity search and retrieval in LLM applications<br>- What embedding space attacks are and why they matter for AI security<br>- How small perturbations in data can drastically change model behavior<br>- The risks of poisoned data in RAG and vector databases<br>- How attackers can influence search results and downstream AI outputs<br>- Why these attacks are subtle, hard to detect, and often overlooked<br>- The role of visualization in understanding embedding behavior<br>- Real-world implications for AI-powered applications and workflows<br>- Defensive considerations when building with embeddings and vector stores</p><p>This episode focuses on the foundational layer of AI systems, showing how security risks extend beyond prompts and into the underlying data representations that power modern AI.</p><p>⸻</p><p>📚 Key Concepts Covered</p><p>AI Foundations<br>- Embeddings and vector representations<br>- Similarity search and vector space reasoning</p><p>AI Security Risks<br>- Embedding 
space manipulation<br>- Data poisoning in vector databases<br>- Retrieval manipulation in RAG systems</p><p>Applications &amp; Impact<br>- LLM-powered search and assistants<br>- AI pipelines using embeddings<br>- Risks in production AI systems</p><p>#AISecurity #Embeddings #CyberSecurity #LLMSecurity #AIThreats #BHIS #AIAgents #ArtificialIntelligence #InfoSec</p><p>Join the 5,000+ cybersecurity professionals on our BHIS Discord server to ask questions and share your knowledge about AI Security. <br><a href="https://discord.gg/bhis">https://discord.gg/bhis</a></p><p><strong><ul><li>(00:00) - Intro &amp; Episode Overview</li>
<li>(01:39) - What Are Embeddings? (AI Only Understands Numbers)</li>
<li>(03:44) - The Embedding Process (Text → Vectors)</li>
<li>(07:43) - Similarity, Classification &amp; Vector Math</li>
<li>(09:55) - Visualizing Embedding Space (2D Projection)</li>
<li>(14:29) - Classifiers</li>
<li>(15:39) - Playing Games with Information</li>
<li>(18:06) - Attack Techniques: Synonyms &amp; Context Manipulation</li>
<li>(20:29) - Context Padding</li>
<li>(27:10) - Collision Attacks, Defenses &amp; Final Thoughts</li>
</ul><br><a href="https://www.youtube.com/watch?v=MO08G1z6-II" title="Click here to watch this episode on YouTube.">Click here to watch this episode on YouTube.</a><br>
<br><strong>Creators &amp; Guests</strong>
<ul>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/bronwen-aker">Bronwen Aker</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/derek-banks">Derek Banks</a> - Host</li>
</ul></strong><br>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p><p><strong><a href="https://share.transistor.fm/s/61d8bc24/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
</strong><br></p>]]>
      </content:encoded>
      <pubDate>Thu, 26 Mar 2026 12:00:00 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/61d8bc24/311b5a4c.mp3" length="33088035" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/xdFNt1Mi2dfJ8haa2BPhMZrCbFWM9T49rV1z1WrAwjM/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS85MjBm/OGQ5YWNhZWM0Yzg0/NzI1NTNjZDNkNDQx/Y2RhMi5wbmc.jpg"/>
      <itunes:duration>1985</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of BHIS Presents: AI Security Ops, the team explores embedding space attacks — a lesser-known but increasingly important threat in modern AI systems — and how attackers can manipulate the mathematical foundations of how models understand data.</p><p>Unlike prompt injection, which targets instructions, embedding attacks operate at a deeper level by influencing how data is represented, retrieved, and interpreted inside vector spaces. By subtly altering embeddings or poisoning data sources, attackers can manipulate AI behavior without ever touching the model directly.</p><p>Through a hands-on walkthrough of a custom notebook with rich visualizations, this episode breaks down how embeddings work, why they are critical to LLM-powered systems like RAG pipelines, and how attackers can exploit them in real-world scenarios.</p><p>We dig into:<br>- What embeddings are and how AI systems convert text into numerical representations<br>- How vector spaces enable similarity search and retrieval in LLM applications<br>- What embedding space attacks are and why they matter for AI security<br>- How small perturbations in data can drastically change model behavior<br>- The risks of poisoned data in RAG and vector databases<br>- How attackers can influence search results and downstream AI outputs<br>- Why these attacks are subtle, hard to detect, and often overlooked<br>- The role of visualization in understanding embedding behavior<br>- Real-world implications for AI-powered applications and workflows<br>- Defensive considerations when building with embeddings and vector stores</p><p>This episode focuses on the foundational layer of AI systems, showing how security risks extend beyond prompts and into the underlying data representations that power modern AI.</p><p>⸻</p><p>📚 Key Concepts Covered</p><p>AI Foundations<br>- Embeddings and vector representations<br>- Similarity search and vector space reasoning</p><p>AI Security Risks<br>- Embedding 
space manipulation<br>- Data poisoning in vector databases<br>- Retrieval manipulation in RAG systems</p><p>Applications &amp; Impact<br>- LLM-powered search and assistants<br>- AI pipelines using embeddings<br>- Risks in production AI systems</p><p>#AISecurity #Embeddings #CyberSecurity #LLMSecurity #AIThreats #BHIS #AIAgents #ArtificialIntelligence #InfoSec</p><p>Join the 5,000+ cybersecurity professionals on our BHIS Discord server to ask questions and share your knowledge about AI Security. <br><a href="https://discord.gg/bhis">https://discord.gg/bhis</a></p><p><strong><ul><li>(00:00) - Intro &amp; Episode Overview</li>
<li>(01:39) - What Are Embeddings? (AI Only Understands Numbers)</li>
<li>(03:44) - The Embedding Process (Text → Vectors)</li>
<li>(07:43) - Similarity, Classification &amp; Vector Math</li>
<li>(09:55) - Visualizing Embedding Space (2D Projection)</li>
<li>(14:29) - Classifiers</li>
<li>(15:39) - Playing Games with Information</li>
<li>(18:06) - Attack Techniques: Synonyms &amp; Context Manipulation</li>
<li>(20:29) - Context Padding</li>
<li>(27:10) - Collision Attacks, Defenses &amp; Final Thoughts</li>
</ul><br><a href="https://www.youtube.com/watch?v=MO08G1z6-II" title="Click here to watch this episode on YouTube.">Click here to watch this episode on YouTube.</a><br>
<br><strong>Creators &amp; Guests</strong>
<ul>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/bronwen-aker">Bronwen Aker</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/derek-banks">Derek Banks</a> - Host</li>
</ul></strong><br>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p><p><strong><a href="https://share.transistor.fm/s/61d8bc24/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
</strong><br></p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/bronwen-aker/" img="https://img.transistorcdn.com/0luJs5_XfVVduE7Lwtr94vu1KDtOHd1UfPgq3R8MwgU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzdh/YmM0ZjU2NGExNGYy/NzM0M2UzODIzMmZh/ZGZlMi5qcGc.jpg">Bronwen Aker</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/61d8bc24/transcription.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/61d8bc24/transcription.srt" type="application/x-subrip" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/61d8bc24/transcription.json" type="application/json" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/61d8bc24/transcription.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/61d8bc24/transcription" type="text/html"/>
      <podcast:chapters url="https://share.transistor.fm/s/61d8bc24/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Indirect Prompt Injection | Episode 44</title>
      <itunes:episode>44</itunes:episode>
      <podcast:episode>44</podcast:episode>
      <itunes:title>Indirect Prompt Injection | Episode 44</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f970a7c3-c0c6-47fa-b902-ca65c24168c1</guid>
      <link>https://share.transistor.fm/s/e2e31bd3</link>
      <description>
        <![CDATA[<p>In this episode of BHIS Presents: AI Security Ops, the team breaks down indirect prompt injection — the #1 risk in the OWASP Top 10 for LLM Applications — and why it represents one of the most dangerous and misunderstood threats in modern AI systems.</p><p>Unlike traditional attacks, indirect prompt injection doesn’t require malware, credentials, or even user interaction. Instead, attackers hide malicious instructions inside everyday content like emails, documents, or web pages — and wait for AI systems to unknowingly execute them.</p><p>From real-world exploits like EchoLeak to in-the-wild attacks observed by Palo Alto Unit 42, this episode explores how attackers are already abusing AI-powered tools in production environments — and why current defenses are struggling to keep up.</p><p>We dig into:<br>• What indirect prompt injection is and how it differs from direct attacks<br>• Why OWASP ranks prompt injection as the #1 LLM security risk<br>• How attackers hide payloads inside emails, documents, and web content<br>• The EchoLeak zero-click exploit against Microsoft 365 Copilot<br>• Web-based prompt injection attacks observed in the wild (Unit 42)<br>• Exploits targeting AI coding tools like Cursor IDE and GitHub Copilot<br>• How RAG systems amplify the risk through poisoned knowledge bases<br>• Why LLM architecture makes this problem fundamentally hard to solve<br>• Research showing modern defenses still fail 50%+ of the time<br>• Practical mitigation strategies: least privilege, human-in-the-loop, and observability</p><p>This episode focuses on the real-world security implications of AI adoption, showing how attackers are already leveraging these techniques — and what defenders need to understand as AI becomes deeply embedded in business workflows.</p><p>⸻</p><p>📚 Key References</p><p>Prompt Injection &amp; LLM Risk<br>• OWASP Top 10 for LLM Applications 2025 — <a href="https://owasp.org">https://owasp.org</a></p><p>Real-World Attacks<br>• 
EchoLeak (CVE-2025-32711) — Aim Security / arXiv<br>• Unit 42 — Web-Based Indirect Prompt Injection in the Wild (March 2026) — <a href="https://unit42.paloaltonetworks.com">https://unit42.paloaltonetworks.com</a></p><p>AI System Vulnerabilities<br>• Cursor IDE (CVE-2025-59944)<br>• GitHub Copilot (CVE-2025-53773)<br>• Lakera — Zero-Click MCP Attack — <a href="https://lakera.ai">https://lakera.ai</a></p><p>Research on Defenses<br>• Zhan et al. — Adaptive Attacks Break Defenses (NAACL 2025)<br>• Anthropic System Card (Feb 2026)<br>• Google Gemini Security Research (2025)</p><p>Standards &amp; Guidance<br>• NIST AI Risk Management Framework — <a href="https://nist.gov">https://nist.gov</a><br>• MITRE ATLAS — <a href="https://atlas.mitre.org">https://atlas.mitre.org</a><br>• ISO/IEC 42001 AI Management Systems</p><p>#AISecurity #PromptInjection #CyberSecurity #LLMSecurity #AIThreats #BHIS #AIAgents #ArtificialIntelligence #infosec </p><p><strong><ul><li>(00:00) - Intro &amp; BHIS / Antisyphon Overview</li>
<li>(01:19) - OWASP Top 10 &amp; Prompt Injection Context</li>
<li>(01:41) - Indirect Prompt Injection Explained (Stored Attack Analogy)</li>
<li>(02:54) - Real-World Attack Scenarios (Calendar &amp; Hidden Payloads)</li>
<li>(05:10) - EchoLeak &amp; Zero-Click Copilot Exploit</li>
<li>(06:10) - Weaponized Excel Prompt Injection PoC</li>
<li>(06:50) - Email Injection &amp; AI Summarization Abuse</li>
<li>(09:07) - Why Detection &amp; Prevention Are So Difficult</li>
<li>(14:02) - Mitigations &amp; Final Thoughts</li>
</ul><br><a href="https://www.youtube.com/watch?v=LaBjZUlyyM0" title="Click here to watch this episode on YouTube.">Click here to watch this episode on YouTube.</a><br>
<br><strong>Creators &amp; Guests</strong>
<ul>
  <li><a href="https://aisecurityops.transistor.fm/people/derek-banks">Derek Banks</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
</ul></strong><br>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p><p><strong><a href="https://share.transistor.fm/s/e2e31bd3/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
</strong><br></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of BHIS Presents: AI Security Ops, the team breaks down indirect prompt injection — the #1 risk in the OWASP Top 10 for LLM Applications — and why it represents one of the most dangerous and misunderstood threats in modern AI systems.</p><p>Unlike traditional attacks, indirect prompt injection doesn’t require malware, credentials, or even user interaction. Instead, attackers hide malicious instructions inside everyday content like emails, documents, or web pages — and wait for AI systems to unknowingly execute them.</p><p>From real-world exploits like EchoLeak to in-the-wild attacks observed by Palo Alto Unit 42, this episode explores how attackers are already abusing AI-powered tools in production environments — and why current defenses are struggling to keep up.</p><p>We dig into:<br>• What indirect prompt injection is and how it differs from direct attacks<br>• Why OWASP ranks prompt injection as the #1 LLM security risk<br>• How attackers hide payloads inside emails, documents, and web content<br>• The EchoLeak zero-click exploit against Microsoft 365 Copilot<br>• Web-based prompt injection attacks observed in the wild (Unit 42)<br>• Exploits targeting AI coding tools like Cursor IDE and GitHub Copilot<br>• How RAG systems amplify the risk through poisoned knowledge bases<br>• Why LLM architecture makes this problem fundamentally hard to solve<br>• Research showing modern defenses still fail 50%+ of the time<br>• Practical mitigation strategies: least privilege, human-in-the-loop, and observability</p><p>This episode focuses on the real-world security implications of AI adoption, showing how attackers are already leveraging these techniques — and what defenders need to understand as AI becomes deeply embedded in business workflows.</p><p>⸻</p><p>📚 Key References</p><p>Prompt Injection &amp; LLM Risk<br>• OWASP Top 10 for LLM Applications 2025 — <a href="https://owasp.org">https://owasp.org</a></p><p>Real-World Attacks<br>• 
EchoLeak (CVE-2025-32711) — Aim Security / arXiv<br>• Unit 42 — Web-Based Indirect Prompt Injection in the Wild (March 2026) — <a href="https://unit42.paloaltonetworks.com">https://unit42.paloaltonetworks.com</a></p><p>AI System Vulnerabilities<br>• Cursor IDE (CVE-2025-59944)<br>• GitHub Copilot (CVE-2025-53773)<br>• Lakera — Zero-Click MCP Attack — <a href="https://lakera.ai">https://lakera.ai</a></p><p>Research on Defenses<br>• Zhan et al. — Adaptive Attacks Break Defenses (NAACL 2025)<br>• Anthropic System Card (Feb 2026)<br>• Google Gemini Security Research (2025)</p><p>Standards &amp; Guidance<br>• NIST AI Risk Management Framework — <a href="https://nist.gov">https://nist.gov</a><br>• MITRE ATLAS — <a href="https://atlas.mitre.org">https://atlas.mitre.org</a><br>• ISO/IEC 42001 AI Management Systems</p><p>#AISecurity #PromptInjection #CyberSecurity #LLMSecurity #AIThreats #BHIS #AIAgents #ArtificialIntelligence #infosec </p><p><strong><ul><li>(00:00) - Intro &amp; BHIS / Antisyphon Overview</li>
<li>(01:19) - OWASP Top 10 &amp; Prompt Injection Context</li>
<li>(01:41) - Indirect Prompt Injection Explained (Stored Attack Analogy)</li>
<li>(02:54) - Real-World Attack Scenarios (Calendar &amp; Hidden Payloads)</li>
<li>(05:10) - EchoLeak &amp; Zero-Click Copilot Exploit</li>
<li>(06:10) - Weaponized Excel Prompt Injection PoC</li>
<li>(06:50) - Email Injection &amp; AI Summarization Abuse</li>
<li>(09:07) - Why Detection &amp; Prevention Are So Difficult</li>
<li>(14:02) - Mitigations &amp; Final Thoughts</li>
</ul><br><a href="https://www.youtube.com/watch?v=LaBjZUlyyM0" title="Click here to watch this episode on YouTube.">Click here to watch this episode on YouTube.</a><br>
<br><strong>Creators &amp; Guests</strong>
<ul>
  <li><a href="https://aisecurityops.transistor.fm/people/derek-banks">Derek Banks</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
</ul></strong><br>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p><p><strong><a href="https://share.transistor.fm/s/e2e31bd3/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
</strong><br></p>]]>
      </content:encoded>
      <pubDate>Thu, 19 Mar 2026 12:00:00 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/e2e31bd3/94d60b01.mp3" length="17119167" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/gdCBhfKGjC6a_IHi_22AEKqFtbcDfFjlT_JIY3Bxw9U/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82YWU0/ZTM5ZGIzOTcyZGIw/ZDBiODhiMTQ4MmQ2/YTlmOS5wbmc.jpg"/>
      <itunes:duration>970</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of BHIS Presents: AI Security Ops, the team breaks down indirect prompt injection — the #1 risk in the OWASP Top 10 for LLM Applications — and why it represents one of the most dangerous and misunderstood threats in modern AI systems.</p><p>Unlike traditional attacks, indirect prompt injection doesn’t require malware, credentials, or even user interaction. Instead, attackers hide malicious instructions inside everyday content like emails, documents, or web pages — and wait for AI systems to unknowingly execute them.</p><p>From real-world exploits like EchoLeak to in-the-wild attacks observed by Palo Alto Unit 42, this episode explores how attackers are already abusing AI-powered tools in production environments — and why current defenses are struggling to keep up.</p><p>We dig into:<br>• What indirect prompt injection is and how it differs from direct attacks<br>• Why OWASP ranks prompt injection as the #1 LLM security risk<br>• How attackers hide payloads inside emails, documents, and web content<br>• The EchoLeak zero-click exploit against Microsoft 365 Copilot<br>• Web-based prompt injection attacks observed in the wild (Unit 42)<br>• Exploits targeting AI coding tools like Cursor IDE and GitHub Copilot<br>• How RAG systems amplify the risk through poisoned knowledge bases<br>• Why LLM architecture makes this problem fundamentally hard to solve<br>• Research showing modern defenses still fail 50%+ of the time<br>• Practical mitigation strategies: least privilege, human-in-the-loop, and observability</p><p>This episode focuses on the real-world security implications of AI adoption, showing how attackers are already leveraging these techniques — and what defenders need to understand as AI becomes deeply embedded in business workflows.</p><p>⸻</p><p>📚 Key References</p><p>Prompt Injection &amp; LLM Risk<br>• OWASP Top 10 for LLM Applications 2025 — <a href="https://owasp.org">https://owasp.org</a></p><p>Real-World Attacks<br>• 
EchoLeak (CVE-2025-32711) — Aim Security / arXiv<br>• Unit 42 — Web-Based Indirect Prompt Injection in the Wild (March 2026) — <a href="https://unit42.paloaltonetworks.com">https://unit42.paloaltonetworks.com</a></p><p>AI System Vulnerabilities<br>• Cursor IDE (CVE-2025-59944)<br>• GitHub Copilot (CVE-2025-53773)<br>• Lakera — Zero-Click MCP Attack — <a href="https://lakera.ai">https://lakera.ai</a></p><p>Research on Defenses<br>• Zhan et al. — Adaptive Attacks Break Defenses (NAACL 2025)<br>• Anthropic System Card (Feb 2026)<br>• Google Gemini Security Research (2025)</p><p>Standards &amp; Guidance<br>• NIST AI Risk Management Framework — <a href="https://nist.gov">https://nist.gov</a><br>• MITRE ATLAS — <a href="https://atlas.mitre.org">https://atlas.mitre.org</a><br>• ISO/IEC 42001 AI Management Systems</p><p>#AISecurity #PromptInjection #CyberSecurity #LLMSecurity #AIThreats #BHIS #AIAgents #ArtificialIntelligence #infosec </p><p><strong><ul><li>(00:00) - Intro &amp; BHIS / Antisyphon Overview</li>
<li>(01:19) - OWASP Top 10 &amp; Prompt Injection Context</li>
<li>(01:41) - Indirect Prompt Injection Explained (Stored Attack Analogy)</li>
<li>(02:54) - Real-World Attack Scenarios (Calendar &amp; Hidden Payloads)</li>
<li>(05:10) - EchoLeak &amp; Zero-Click Copilot Exploit</li>
<li>(06:10) - Weaponized Excel Prompt Injection PoC</li>
<li>(06:50) - Email Injection &amp; AI Summarization Abuse</li>
<li>(09:07) - Why Detection &amp; Prevention Are So Difficult</li>
<li>(14:02) - Mitigations &amp; Final Thoughts</li>
</ul><br><a href="https://www.youtube.com/watch?v=LaBjZUlyyM0" title="Click here to watch this episode on YouTube.">Click here to watch this episode on YouTube.</a><br>
<br><strong>Creators &amp; Guests</strong>
<ul>
  <li><a href="https://aisecurityops.transistor.fm/people/derek-banks">Derek Banks</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
</ul></strong><br>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p><p><strong><a href="https://share.transistor.fm/s/e2e31bd3/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
</strong><br></p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/e2e31bd3/transcription.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/e2e31bd3/transcription.srt" type="application/x-subrip" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/e2e31bd3/transcription.json" type="application/json" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/e2e31bd3/transcription.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/e2e31bd3/transcription" type="text/html"/>
      <podcast:chapters url="https://share.transistor.fm/s/e2e31bd3/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Top AI Security Concerns | Episode 43</title>
      <itunes:episode>43</itunes:episode>
      <podcast:episode>43</podcast:episode>
      <itunes:title>Top AI Security Concerns | Episode 43</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">23424c64-751a-40f9-8ced-a8370fb96228</guid>
      <link>https://share.transistor.fm/s/6889addc</link>
      <description>
        <![CDATA[<p>In this episode of BHIS Presents: AI Security Ops, Bronwen Aker and Dr. Brian Fehrman break down some of the top AI security concerns being discussed by researchers, security firms, and government agencies this year.</p><p>As AI capabilities rapidly expand, so does the attack surface. From agentic AI systems being used by attackers, to deepfakes at industrial scale, to the persistent challenge of prompt injection, security teams are trying to understand what risks are real, what’s hype, and where defenders should focus first.</p><p>We dig into:<br>- Why agentic AI is emerging as a major security concern<br>- How attackers could weaponize autonomous agents to scale operations<br>- The risk of malicious agent skills and AI supply chain attacks<br>- Why overly broad permissions make agent-based systems dangerous<br>- AI-assisted phishing campaigns and social engineering at scale<br>- The rise of deepfakes and corporate fraud driven by generative AI<br>- Why humans still struggle to reliably detect deepfake media<br>- The economics of deepfake fraud and real-world incidents<br>- Prompt injection attacks and why they remain difficult to solve<br>- Whether future models may autonomously discover and exploit jailbreaks</p><p>This episode looks at the practical security implications of today’s AI ecosystem — where the biggest risks are coming from, how attackers may leverage AI systems, and what defenders should be thinking about as these technologies continue to evolve.</p><p><strong>📚 Key References</strong></p><p>Agentic AI Threats<br>- CrowdStrike 2026 Global Threat Report — <a href="https://www.crowdstrike.com">https://www.crowdstrike.com</a><br>- IBM X-Force 2026 Threat Intelligence Index — <a href="https://www.ibm.com/security/x-force">https://www.ibm.com/security/x-force</a><br>- Cisco State of AI Security 2026 — <a 
href="https://www.cisco.com/site/us/en/products/security/state-of-ai-security.html#tabs-9da71fbd27-item-1288c79d71-tab">https://www.cisco.com/site/us/en/products/security/state-of-ai-security.html#tabs-9da71fbd27-item-1288c79d71-tab</a></p><p>Deepfakes &amp; AI-Driven Fraud<br>- WEF Global Cybersecurity Outlook 2026 — <a href="https://www.weforum.org/publications/global-cybersecurity-outlook-2026/">https://www.weforum.org/publications/global-cybersecurity-outlook-2026/</a><br>- International AI Safety Report 2026 — <a href="https://www.internationalaisafetyreport.org">https://www.internationalaisafetyreport.org</a></p><p>AI Security &amp; Infrastructure Risk<br>- CISA Joint Guidance on AI in OT — <a href="https://www.cisa.gov/news-events/news/new-joint-guide-advances-secure-integration-artificial-intelligence-operational-technology">https://www.cisa.gov/news-events/news/new-joint-guide-advances-secure-integration-artificial-intelligence-operational-technology</a></p><p>Prompt Injection &amp; LLM Exploitation<br>- Schneier et al., “The Promptware Kill Chain” — <a href="https://www.lawfaremedia.org/article/the-promptware-kill-chain">https://www.lawfaremedia.org/article/the-promptware-kill-chain</a><br>- Palo Alto Unit 42 — “Fooling AI Agents: Web-Based Indirect Prompt Injection Observed in the Wild”<br><a href="https://unit42.paloaltonetworks.com/indirect-prompt-injection-ai-agents/">https://unit42.paloaltonetworks.com/indirect-prompt-injection-ai-agents/</a></p><p><strong><ul><li>(00:00) - Intro &amp; Episode Overview</li>
<li>(02:18) - Agentic AI as a Security Threat (CrowdStrike 2026 Global Threat Report, IBM X-Force Index)</li>
<li>(03:46) - Malicious Agent Skills &amp; AI Supply Chain Attacks (Cisco State of AI Security)</li>
<li>(04:58) - How Agent Skills Actually Work</li>
<li>(07:47) - Permissions &amp; Guardrails for AI Agents (CISA AI in OT Guidance)</li>
<li>(09:57) - AI-Generated Phishing Campaigns (CrowdStrike / IBM Threat Reports)</li>
<li>(13:58) - Deepfakes at Industrial Scale (WEF Global Cybersecurity Outlook)</li>
<li>(15:38) - Corporate Fraud &amp; Deepfake Incidents (International AI Safety Report)</li>
<li>(17:21) - Why Humans Struggle to Detect Deepfakes</li>
<li>(21:13) - Prompt Injection Attacks Explained (Schneier – Promptware Kill Chain)</li>
<li>(24:35) - AI Models Jailbreaking Other Models (Palo Alto Unit 42 Research)</li>
<li>(28:59) - Final Thoughts &amp; Wrap-Up</li>
</ul><br><a href="https://www.youtube.com/watch?v=TlQeYJUZBO0" title="Click here to watch this episode on YouTube.">Click here to watch this episode on YouTube.</a><br>
<br><strong>Creators &amp; Guests</strong>
<ul>
  <li><a href="https://aisecurityops.transistor.fm/people/bronwen-aker">Bronwen Aker</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
</ul></strong><br>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p><p><strong><a href="https://share.transistor.fm/s/6889addc/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
</strong><br></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of BHIS Presents: AI Security Ops, Bronwen Aker and Dr. Brian Fehrman break down some of the top AI security concerns being discussed by researchers, security firms, and government agencies this year.</p><p>As AI capabilities rapidly expand, so does the attack surface. From agentic AI systems being used by attackers, to deepfakes at industrial scale, to the persistent challenge of prompt injection, security teams are trying to understand what risks are real, what’s hype, and where defenders should focus first.</p><p>We dig into:<br>- Why agentic AI is emerging as a major security concern<br>- How attackers could weaponize autonomous agents to scale operations<br>- The risk of malicious agent skills and AI supply chain attacks<br>- Why overly broad permissions make agent-based systems dangerous<br>- AI-assisted phishing campaigns and social engineering at scale<br>- The rise of deepfakes and corporate fraud driven by generative AI<br>- Why humans still struggle to reliably detect deepfake media<br>- The economics of deepfake fraud and real-world incidents<br>- Prompt injection attacks and why they remain difficult to solve<br>- Whether future models may autonomously discover and exploit jailbreaks</p><p>This episode looks at the practical security implications of today’s AI ecosystem — where the biggest risks are coming from, how attackers may leverage AI systems, and what defenders should be thinking about as these technologies continue to evolve.</p><p><strong>📚 Key References</strong></p><p>Agentic AI Threats<br>- CrowdStrike 2026 Global Threat Report — <a href="https://www.crowdstrike.com">https://www.crowdstrike.com</a><br>- IBM X-Force 2026 Threat Intelligence Index — <a href="https://www.ibm.com/security/x-force">https://www.ibm.com/security/x-force</a><br>- Cisco State of AI Security 2026 — <a 
href="https://www.cisco.com/site/us/en/products/security/state-of-ai-security.html#tabs-9da71fbd27-item-1288c79d71-tab">https://www.cisco.com/site/us/en/products/security/state-of-ai-security.html#tabs-9da71fbd27-item-1288c79d71-tab</a></p><p>Deepfakes &amp; AI-Driven Fraud<br>- WEF Global Cybersecurity Outlook 2026 — <a href="https://www.weforum.org/publications/global-cybersecurity-outlook-2026/">https://www.weforum.org/publications/global-cybersecurity-outlook-2026/</a><br>- International AI Safety Report 2026 — <a href="https://www.internationalaisafetyreport.org">https://www.internationalaisafetyreport.org</a></p><p>AI Security &amp; Infrastructure Risk<br>- CISA Joint Guidance on AI in OT — <a href="https://www.cisa.gov/news-events/news/new-joint-guide-advances-secure-integration-artificial-intelligence-operational-technology">https://www.cisa.gov/news-events/news/new-joint-guide-advances-secure-integration-artificial-intelligence-operational-technology</a></p><p>Prompt Injection &amp; LLM Exploitation<br>- Schneier et al., “The Promptware Kill Chain” — <a href="https://www.lawfaremedia.org/article/the-promptware-kill-chain">https://www.lawfaremedia.org/article/the-promptware-kill-chain</a><br>- Palo Alto Unit 42 — “Fooling AI Agents: Web-Based Indirect Prompt Injection Observed in the Wild”<br><a href="https://unit42.paloaltonetworks.com/indirect-prompt-injection-ai-agents/">https://unit42.paloaltonetworks.com/indirect-prompt-injection-ai-agents/</a></p><p><strong><ul><li>(00:00) - Intro &amp; Episode Overview</li>
<li>(02:18) - Agentic AI as a Security Threat (CrowdStrike 2026 Global Threat Report, IBM X-Force Index)</li>
<li>(03:46) - Malicious Agent Skills &amp; AI Supply Chain Attacks (Cisco State of AI Security)</li>
<li>(04:58) - How Agent Skills Actually Work</li>
<li>(07:47) - Permissions &amp; Guardrails for AI Agents (CISA AI in OT Guidance)</li>
<li>(09:57) - AI-Generated Phishing Campaigns (CrowdStrike / IBM Threat Reports)</li>
<li>(13:58) - Deepfakes at Industrial Scale (WEF Global Cybersecurity Outlook)</li>
<li>(15:38) - Corporate Fraud &amp; Deepfake Incidents (International AI Safety Report)</li>
<li>(17:21) - Why Humans Struggle to Detect Deepfakes</li>
<li>(21:13) - Prompt Injection Attacks Explained (Schneier – Promptware Kill Chain)</li>
<li>(24:35) - AI Models Jailbreaking Other Models (Palo Alto Unit 42 Research)</li>
<li>(28:59) - Final Thoughts &amp; Wrap-Up</li>
</ul><br><a href="https://www.youtube.com/watch?v=TlQeYJUZBO0" title="Click here to watch this episode on YouTube.">Click here to watch this episode on YouTube.</a><br>
<br><strong>Creators &amp; Guests</strong>
<ul>
  <li><a href="https://aisecurityops.transistor.fm/people/bronwen-aker">Bronwen Aker</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
</ul></strong><br>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p><p><strong><a href="https://share.transistor.fm/s/6889addc/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
</strong><br></p>]]>
      </content:encoded>
      <pubDate>Thu, 12 Mar 2026 12:00:00 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/6889addc/7d00a38a.mp3" length="28471056" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/GZbzkS58Ga612e4R7jiJoeupHDlsT5IumEvCv1fFmfA/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzY0/NzQ5YjU0Y2JjNTEx/MjEzNGU3MTU0NTcx/MjlkYS5wbmc.jpg"/>
      <itunes:duration>1751</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of BHIS Presents: AI Security Ops, Bronwen Aker and Dr. Brian Fehrman break down some of the top AI security concerns being discussed by researchers, security firms, and government agencies this year.</p><p>As AI capabilities rapidly expand, so does the attack surface. From agentic AI systems being used by attackers, to deepfakes at industrial scale, to the persistent challenge of prompt injection, security teams are trying to understand what risks are real, what’s hype, and where defenders should focus first.</p><p>We dig into:<br>- Why agentic AI is emerging as a major security concern<br>- How attackers could weaponize autonomous agents to scale operations<br>- The risk of malicious agent skills and AI supply chain attacks<br>- Why overly broad permissions make agent-based systems dangerous<br>- AI-assisted phishing campaigns and social engineering at scale<br>- The rise of deepfakes and corporate fraud driven by generative AI<br>- Why humans still struggle to reliably detect deepfake media<br>- The economics of deepfake fraud and real-world incidents<br>- Prompt injection attacks and why they remain difficult to solve<br>- Whether future models may autonomously discover and exploit jailbreaks</p><p>This episode looks at the practical security implications of today’s AI ecosystem — where the biggest risks are coming from, how attackers may leverage AI systems, and what defenders should be thinking about as these technologies continue to evolve.</p><p><strong>📚 Key References</strong></p><p>Agentic AI Threats<br>- CrowdStrike 2026 Global Threat Report — <a href="https://www.crowdstrike.com">https://www.crowdstrike.com</a><br>- IBM X-Force 2026 Threat Intelligence Index — <a href="https://www.ibm.com/security/x-force">https://www.ibm.com/security/x-force</a><br>- Cisco State of AI Security 2026 — <a 
href="https://www.cisco.com/site/us/en/products/security/state-of-ai-security.html#tabs-9da71fbd27-item-1288c79d71-tab">https://www.cisco.com/site/us/en/products/security/state-of-ai-security.html#tabs-9da71fbd27-item-1288c79d71-tab</a></p><p>Deepfakes &amp; AI-Driven Fraud<br>- WEF Global Cybersecurity Outlook 2026 — <a href="https://www.weforum.org/publications/global-cybersecurity-outlook-2026/">https://www.weforum.org/publications/global-cybersecurity-outlook-2026/</a><br>- International AI Safety Report 2026 — <a href="https://www.internationalaisafetyreport.org">https://www.internationalaisafetyreport.org</a></p><p>AI Security &amp; Infrastructure Risk<br>- CISA Joint Guidance on AI in OT — <a href="https://www.cisa.gov/news-events/news/new-joint-guide-advances-secure-integration-artificial-intelligence-operational-technology">https://www.cisa.gov/news-events/news/new-joint-guide-advances-secure-integration-artificial-intelligence-operational-technology</a></p><p>Prompt Injection &amp; LLM Exploitation<br>- Schneier et al., “The Promptware Kill Chain” — <a href="https://www.lawfaremedia.org/article/the-promptware-kill-chain">https://www.lawfaremedia.org/article/the-promptware-kill-chain</a><br>- Palo Alto Unit 42 — “Fooling AI Agents: Web-Based Indirect Prompt Injection Observed in the Wild”<br><a href="https://unit42.paloaltonetworks.com/indirect-prompt-injection-ai-agents/">https://unit42.paloaltonetworks.com/indirect-prompt-injection-ai-agents/</a></p><p><strong><ul><li>(00:00) - Intro &amp; Episode Overview</li>
<li>(02:18) - Agentic AI as a Security Threat (CrowdStrike 2026 Global Threat Report, IBM X-Force Index)</li>
<li>(03:46) - Malicious Agent Skills &amp; AI Supply Chain Attacks (Cisco State of AI Security)</li>
<li>(04:58) - How Agent Skills Actually Work</li>
<li>(07:47) - Permissions &amp; Guardrails for AI Agents (CISA AI in OT Guidance)</li>
<li>(09:57) - AI-Generated Phishing Campaigns (CrowdStrike / IBM Threat Reports)</li>
<li>(13:58) - Deepfakes at Industrial Scale (WEF Global Cybersecurity Outlook)</li>
<li>(15:38) - Corporate Fraud &amp; Deepfake Incidents (International AI Safety Report)</li>
<li>(17:21) - Why Humans Struggle to Detect Deepfakes</li>
<li>(21:13) - Prompt Injection Attacks Explained (Schneier – Promptware Kill Chain)</li>
<li>(24:35) - AI Models Jailbreaking Other Models (Palo Alto Unit 42 Research)</li>
<li>(28:59) - Final Thoughts &amp; Wrap-Up</li>
</ul><br><a href="https://www.youtube.com/watch?v=TlQeYJUZBO0" title="Click here to watch this episode on YouTube.">Click here to watch this episode on YouTube.</a><br>
<br><strong>Creators &amp; Guests</strong>
<ul>
  <li><a href="https://aisecurityops.transistor.fm/people/bronwen-aker">Bronwen Aker</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
</ul></strong><br>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p><p><strong><a href="https://share.transistor.fm/s/6889addc/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
</strong><br></p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/bronwen-aker/" img="https://img.transistorcdn.com/0luJs5_XfVVduE7Lwtr94vu1KDtOHd1UfPgq3R8MwgU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzdh/YmM0ZjU2NGExNGYy/NzM0M2UzODIzMmZh/ZGZlMi5qcGc.jpg">Bronwen Aker</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/6889addc/transcription.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/6889addc/transcription.srt" type="application/x-subrip" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/6889addc/transcription.json" type="application/json" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/6889addc/transcription.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/6889addc/transcription" type="text/html"/>
      <podcast:chapters url="https://share.transistor.fm/s/6889addc/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Claude Cowork Discussion | Episode 42</title>
      <itunes:episode>42</itunes:episode>
      <podcast:episode>42</podcast:episode>
      <itunes:title>Claude Cowork Discussion | Episode 42</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f9b57e0f-f456-4c71-83d4-74e7926b1b2d</guid>
      <link>https://share.transistor.fm/s/28f31187</link>
      <description>
        <![CDATA[<p>We discuss the meaning of AI life in episode 42 of "BHIS Presents: AI Security Ops." Derek Banks is joined by Bronwen Aker and Brian Fehrman to break down Anthropic’s latest agentic desktop experiment: Claude Cowork.</p><p>Claude Cowork brings large language models directly onto the endpoint — giving Claude the ability to read, write, and organize files on your local machine. It’s designed to make powerful AI workflows accessible to non-technical users… but as with any tool that operates at the OS level, the security implications are significant.</p><p>We explore what happens when AI moves closer to your data, your filesystem, and your browser — and what that means for defenders.</p><p>We dig into:<br>- What Claude Cowork is and how it differs from Claude Code<br>- Agentic desktop tools vs. command-line workflows<br>- Local file access and OS-level interaction risks<br>- Skills, automation, and task iteration<br>- Chrome plugins and expanded attack surface<br>- Overly broad permissions and least-privilege concerns<br>- SaaS disruption and shifting trust boundaries<br>- Endpoint monitoring challenges<br>- The speed of AI releases vs. security review cycles<br>- Balancing innovation with responsible deployment</p><p>This conversation looks at the real-world operational and defensive considerations of agentic AI tools running directly on user systems. If you’re evaluating AI productivity tools inside your organization — or defending environments where they’re already being adopted — this episode will help you think through the risks and tradeoffs.</p><p><strong><ul><li>(00:00) - Intro &amp; Episode Overview</li>
<li>(02:08) - What Is Claude Cowork?</li>
<li>(04:03) - Desktop Agents vs. Command Line Users</li>
<li>(06:12) - Agentic Workflows &amp; Task Automation</li>
<li>(08:08) - Building Fast with Claude (Speed of Development)</li>
<li>(09:29) - Browser Plugins &amp; Expanding Capabilities</li>
<li>(11:06) - Permission Models &amp; “Just Give It Access to Everything”</li>
<li>(12:40) - SaaS Disruption &amp; Enterprise Impact</li>
<li>(14:38) - Overly Broad File Access Risks</li>
<li>(16:27) - Organizational Disruption &amp; Workforce Impact</li>
<li>(18:09) - Security Lag vs. Rapid AI Releases</li>
<li>(19:46) - Final Thoughts &amp; Wrap-Up</li>
</ul><br><a href="https://www.youtube.com/watch?v=NgV72s9UoBg" title="Click here to watch this episode on YouTube.">Click here to watch this episode on YouTube.</a><br>
<br><strong>Creators &amp; Guests</strong>
<ul>
  <li><a href="https://aisecurityops.transistor.fm/people/derek-banks">Derek Banks</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/bronwen-aker">Bronwen Aker</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
</ul></strong><br>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p><p><strong><a href="https://share.transistor.fm/s/28f31187/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
</strong><br></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>We discuss the meaning of AI life in episode 42 of "BHIS Presents: AI Security Ops." Derek Banks is joined by Bronwen Aker and Brian Fehrman to break down Anthropic’s latest agentic desktop experiment: Claude Cowork.</p><p>Claude Cowork brings large language models directly onto the endpoint — giving Claude the ability to read, write, and organize files on your local machine. It’s designed to make powerful AI workflows accessible to non-technical users… but as with any tool that operates at the OS level, the security implications are significant.</p><p>We explore what happens when AI moves closer to your data, your filesystem, and your browser — and what that means for defenders.</p><p>We dig into:<br>- What Claude Cowork is and how it differs from Claude Code<br>- Agentic desktop tools vs. command-line workflows<br>- Local file access and OS-level interaction risks<br>- Skills, automation, and task iteration<br>- Chrome plugins and expanded attack surface<br>- Overly broad permissions and least-privilege concerns<br>- SaaS disruption and shifting trust boundaries<br>- Endpoint monitoring challenges<br>- The speed of AI releases vs. security review cycles<br>- Balancing innovation with responsible deployment</p><p>This conversation looks at the real-world operational and defensive considerations of agentic AI tools running directly on user systems. If you’re evaluating AI productivity tools inside your organization — or defending environments where they’re already being adopted — this episode will help you think through the risks and tradeoffs.</p><p><strong><ul><li>(00:00) - Intro &amp; Episode Overview</li>
<li>(02:08) - What Is Claude Cowork?</li>
<li>(04:03) - Desktop Agents vs. Command Line Users</li>
<li>(06:12) - Agentic Workflows &amp; Task Automation</li>
<li>(08:08) - Building Fast with Claude (Speed of Development)</li>
<li>(09:29) - Browser Plugins &amp; Expanding Capabilities</li>
<li>(11:06) - Permission Models &amp; “Just Give It Access to Everything”</li>
<li>(12:40) - SaaS Disruption &amp; Enterprise Impact</li>
<li>(14:38) - Overly Broad File Access Risks</li>
<li>(16:27) - Organizational Disruption &amp; Workforce Impact</li>
<li>(18:09) - Security Lag vs. Rapid AI Releases</li>
<li>(19:46) - Final Thoughts &amp; Wrap-Up</li>
</ul><br><a href="https://www.youtube.com/watch?v=NgV72s9UoBg" title="Click here to watch this episode on YouTube.">Click here to watch this episode on YouTube.</a><br>
<br><strong>Creators &amp; Guests</strong>
<ul>
  <li><a href="https://aisecurityops.transistor.fm/people/derek-banks">Derek Banks</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/bronwen-aker">Bronwen Aker</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
</ul></strong><br>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p><p><strong><a href="https://share.transistor.fm/s/28f31187/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
</strong><br></p>]]>
      </content:encoded>
      <pubDate>Fri, 06 Mar 2026 16:54:31 -0500</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/28f31187/5fa583f4.mp3" length="21165798" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/4n36oB-CIByDVT3svgENkpKXuIjh1EigQp3bmad_dFI/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xOGMw/MmQxZjFiNGI1Yzdh/NWIwZGM1ZDIyMjEx/YTIyOS5wbmc.jpg"/>
      <itunes:duration>1293</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>We discuss the meaning of AI life in episode 42 of "BHIS Presents: AI Security Ops." Derek Banks is joined by Bronwen Aker and Brian Fehrman to break down Anthropic’s latest agentic desktop experiment: Claude Cowork.</p><p>Claude Cowork brings large language models directly onto the endpoint — giving Claude the ability to read, write, and organize files on your local machine. It’s designed to make powerful AI workflows accessible to non-technical users… but as with any tool that operates at the OS level, the security implications are significant.</p><p>We explore what happens when AI moves closer to your data, your filesystem, and your browser — and what that means for defenders.</p><p>We dig into:<br>- What Claude Cowork is and how it differs from Claude Code<br>- Agentic desktop tools vs. command-line workflows<br>- Local file access and OS-level interaction risks<br>- Skills, automation, and task iteration<br>- Chrome plugins and expanded attack surface<br>- Overly broad permissions and least-privilege concerns<br>- SaaS disruption and shifting trust boundaries<br>- Endpoint monitoring challenges<br>- The speed of AI releases vs. security review cycles<br>- Balancing innovation with responsible deployment</p><p>This conversation looks at the real-world operational and defensive considerations of agentic AI tools running directly on user systems. If you’re evaluating AI productivity tools inside your organization — or defending environments where they’re already being adopted — this episode will help you think through the risks and tradeoffs.</p><p><strong><ul><li>(00:00) - Intro &amp; Episode Overview</li>
<li>(02:08) - What Is Claude Cowork?</li>
<li>(04:03) - Desktop Agents vs. Command Line Users</li>
<li>(06:12) - Agentic Workflows &amp; Task Automation</li>
<li>(08:08) - Building Fast with Claude (Speed of Development)</li>
<li>(09:29) - Browser Plugins &amp; Expanding Capabilities</li>
<li>(11:06) - Permission Models &amp; “Just Give It Access to Everything”</li>
<li>(12:40) - SaaS Disruption &amp; Enterprise Impact</li>
<li>(14:38) - Overly Broad File Access Risks</li>
<li>(16:27) - Organizational Disruption &amp; Workforce Impact</li>
<li>(18:09) - Security Lag vs. Rapid AI Releases</li>
<li>(19:46) - Final Thoughts &amp; Wrap-Up</li>
</ul><br><a href="https://www.youtube.com/watch?v=NgV72s9UoBg" title="Click here to watch this episode on YouTube.">Click here to watch this episode on YouTube.</a><br>
<br><strong>Creators &amp; Guests</strong>
<ul>
  <li><a href="https://aisecurityops.transistor.fm/people/derek-banks">Derek Banks</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/bronwen-aker">Bronwen Aker</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
</ul></strong><br>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p><p><strong><a href="https://share.transistor.fm/s/28f31187/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
</strong><br></p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/bronwen-aker/" img="https://img.transistorcdn.com/0luJs5_XfVVduE7Lwtr94vu1KDtOHd1UfPgq3R8MwgU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzdh/YmM0ZjU2NGExNGYy/NzM0M2UzODIzMmZh/ZGZlMi5qcGc.jpg">Bronwen Aker</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/28f31187/transcription.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/28f31187/transcription.srt" type="application/x-subrip" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/28f31187/transcription.json" type="application/json" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/28f31187/transcription.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/28f31187/transcription" type="text/html"/>
      <podcast:chapters url="https://share.transistor.fm/s/28f31187/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>OpenClaw and Moltbook with Guests Beau Bullock and Hayden Covington | Episode 41</title>
      <itunes:episode>41</itunes:episode>
      <podcast:episode>41</podcast:episode>
      <itunes:title>OpenClaw and Moltbook with Guests Beau Bullock and Hayden Covington | Episode 41</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">fe2b32ee-0c78-4e1e-9a2f-56992086b14d</guid>
      <link>https://share.transistor.fm/s/b50dad42</link>
      <description>
        <![CDATA[<p>In this episode of BHIS Presents: AI Security Ops, we’re joined by Beau Bullock and Hayden Covington to unpack one of the most talked-about AI agent experiments in recent memory: OpenClaw and its companion platform, Moltbook.</p><p>OpenClaw exploded onto the scene as an autonomous AI agent capable of operating Claude Code from the command line — executing tasks, monitoring output, and iterating with minimal human involvement. Shortly after, Moltbook emerged as a social platform designed specifically for AI agents to interact with one another.</p><p>But as with most cutting-edge AI experiments, things moved fast… and broke fast.</p><p>We dig into:</p><ul><li>What OpenClaw actually is and how it works</li><li>AI agents operating other AI systems (Claude Code in the loop)</li><li>The concept of “skills” and extending agent capabilities</li><li>The one-click RCE vulnerability discovered shortly after release</li><li>Moltbook as a social network for AI agents</li><li>API keys, agent-only access, and how humans bypassed it</li><li>Beacons, autonomy, and what “control” really means</li><li>Where the line is between automation and true autonomy</li><li>Short-term workforce impacts vs. long-term AI risk</li></ul><p><br>This conversation moves beyond hype into the practical and security implications of rapidly deployed autonomous agents. If you’re experimenting with AI agents — or defending against them — this episode will give you a grounded perspective on what’s possible today, what’s fragile, and what’s coming next.</p><p><strong><ul><li>(00:00) - Intro &amp; Guest Welcome</li>
<li>(01:38) - AI Agents in the News</li>
<li>(03:23) - From “Moltbot” to OpenClaw</li>
<li>(04:13) - What Is OpenClaw? How It Works</li>
<li>(05:13) - Claude Code + Agent-in-the-Middle Model</li>
<li>(07:36) - Extending OpenClaw with Skills</li>
<li>(08:42) - Release Timeline &amp; Rapid Adoption</li>
<li>(10:16) - One-Click RCE in OpenClaw</li>
<li>(11:45) - Introducing Moltbook (AI Social Network)</li>
<li>(14:03) - How Moltbook Actually Worked</li>
<li>(17:55) - “I Am a Robot” &amp; Agent Authentication</li>
<li>(20:28) - Beaconing &amp; Operational Behavior</li>
<li>(26:44) - Automation vs. True Autonomy</li>
<li>(27:26) - Control, Kill Switches &amp; Agent Boundaries</li>
<li>(30:59) - Workforce Impact &amp; Near-Term Concerns</li>
<li>(35:34) - AI Apocalypse? Final Thoughts &amp; Wrap-Up</li>
</ul><br><a href="https://www.youtube.com/watch?v=pOBOZIrlqAY" title="Click here to watch this episode on YouTube.">Click here to watch this episode on YouTube.</a><br>
<br><strong>Creators &amp; Guests</strong>
<ul>
  <li><a href="https://aisecurityops.transistor.fm/people/beau-bullock">Beau Bullock</a> - Guest</li>
  <li><a href="https://aisecurityops.transistor.fm/people/hayden-covington">Hayden Covington</a> - Guest</li>
  <li><a href="https://aisecurityops.transistor.fm/people/derek-banks">Derek Banks</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/bronwen-aker">Bronwen Aker</a> - Host</li>
</ul></strong><br>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p><p><strong><a href="https://share.transistor.fm/s/b50dad42/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
</strong><br></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of BHIS Presents: AI Security Ops, we’re joined by Beau Bullock and Hayden Covington to unpack one of the most talked-about AI agent experiments in recent memory: OpenClaw and its companion platform, Moltbook.</p><p>OpenClaw exploded onto the scene as an autonomous AI agent capable of operating Claude Code from the command line — executing tasks, monitoring output, and iterating with minimal human involvement. Shortly after, Moltbook emerged as a social platform designed specifically for AI agents to interact with one another.</p><p>But as with most cutting-edge AI experiments, things moved fast… and broke fast.</p><p>We dig into:</p><ul><li>What OpenClaw actually is and how it works</li><li>AI agents operating other AI systems (Claude Code in the loop)</li><li>The concept of “skills” and extending agent capabilities</li><li>The one-click RCE vulnerability discovered shortly after release</li><li>Moltbook as a social network for AI agents</li><li>API keys, agent-only access, and how humans bypassed it</li><li>Beacons, autonomy, and what “control” really means</li><li>Where the line is between automation and true autonomy</li><li>Short-term workforce impacts vs. long-term AI risk</li></ul><p><br>This conversation moves beyond hype into the practical and security implications of rapidly deployed autonomous agents. If you’re experimenting with AI agents — or defending against them — this episode will give you a grounded perspective on what’s possible today, what’s fragile, and what’s coming next.</p><p><strong><ul><li>(00:00) - Intro &amp; Guest Welcome</li>
<li>(01:38) - AI Agents in the News</li>
<li>(03:23) - From “Moltbot” to OpenClaw</li>
<li>(04:13) - What Is OpenClaw? How It Works</li>
<li>(05:13) - Claude Code + Agent-in-the-Middle Model</li>
<li>(07:36) - Extending OpenClaw with Skills</li>
<li>(08:42) - Release Timeline &amp; Rapid Adoption</li>
<li>(10:16) - One-Click RCE in OpenClaw</li>
<li>(11:45) - Introducing Moltbook (AI Social Network)</li>
<li>(14:03) - How Moltbook Actually Worked</li>
<li>(17:55) - “I Am a Robot” &amp; Agent Authentication</li>
<li>(20:28) - Beaconing &amp; Operational Behavior</li>
<li>(26:44) - Automation vs. True Autonomy</li>
<li>(27:26) - Control, Kill Switches &amp; Agent Boundaries</li>
<li>(30:59) - Workforce Impact &amp; Near-Term Concerns</li>
<li>(35:34) - AI Apocalypse? Final Thoughts &amp; Wrap-Up</li>
</ul><br><a href="https://www.youtube.com/watch?v=pOBOZIrlqAY" title="Click here to watch this episode on YouTube.">Click here to watch this episode on YouTube.</a><br>
<br><strong>Creators &amp; Guests</strong>
<ul>
  <li><a href="https://aisecurityops.transistor.fm/people/beau-bullock">Beau Bullock</a> - Guest</li>
  <li><a href="https://aisecurityops.transistor.fm/people/hayden-covington">Hayden Covington</a> - Guest</li>
  <li><a href="https://aisecurityops.transistor.fm/people/derek-banks">Derek Banks</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/bronwen-aker">Bronwen Aker</a> - Host</li>
</ul></strong><br>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p><p><strong><a href="https://share.transistor.fm/s/b50dad42/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
</strong><br></p>]]>
      </content:encoded>
      <pubDate>Thu, 26 Feb 2026 12:00:00 -0500</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/b50dad42/e91ec0e6.mp3" length="34610201" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/2IWUa6HyBq-ErBg7hmgRid9-uuqMpiNT90qNf7Ry-qY/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lOGE2/ZjVlYmUwMDBiM2Rk/ZTFhNGRhN2I2ZGIw/OGE2ZS5wbmc.jpg"/>
      <itunes:duration>2160</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of BHIS Presents: AI Security Ops, we’re joined by Beau Bullock and Hayden Covington to unpack one of the most talked-about AI agent experiments in recent memory: OpenClaw and its companion platform, Moltbook.</p><p>OpenClaw exploded onto the scene as an autonomous AI agent capable of operating Claude Code from the command line — executing tasks, monitoring output, and iterating with minimal human involvement. Shortly after, Moltbook emerged as a social platform designed specifically for AI agents to interact with one another.</p><p>But as with most cutting-edge AI experiments, things moved fast… and broke fast.</p><p>We dig into:</p><ul><li>What OpenClaw actually is and how it works</li><li>AI agents operating other AI systems (Claude Code in the loop)</li><li>The concept of “skills” and extending agent capabilities</li><li>The one-click RCE vulnerability discovered shortly after release</li><li>Moltbook as a social network for AI agents</li><li>API keys, agent-only access, and how humans bypassed it</li><li>Beacons, autonomy, and what “control” really means</li><li>Where the line is between automation and true autonomy</li><li>Short-term workforce impacts vs. long-term AI risk</li></ul><p><br>This conversation moves beyond hype into the practical and security implications of rapidly deployed autonomous agents. If you’re experimenting with AI agents — or defending against them — this episode will give you a grounded perspective on what’s possible today, what’s fragile, and what’s coming next.</p><p><strong><ul><li>(00:00) - Intro &amp; Guest Welcome</li>
<li>(01:38) - AI Agents in the News</li>
<li>(03:23) - From “Moltbot” to OpenClaw</li>
<li>(04:13) - What Is OpenClaw? How It Works</li>
<li>(05:13) - Claude Code + Agent-in-the-Middle Model</li>
<li>(07:36) - Extending OpenClaw with Skills</li>
<li>(08:42) - Release Timeline &amp; Rapid Adoption</li>
<li>(10:16) - One-Click RCE in OpenClaw</li>
<li>(11:45) - Introducing Moltbook (AI Social Network)</li>
<li>(14:03) - How Moltbook Actually Worked</li>
<li>(17:55) - “I Am a Robot” &amp; Agent Authentication</li>
<li>(20:28) - Beaconing &amp; Operational Behavior</li>
<li>(26:44) - Automation vs. True Autonomy</li>
<li>(27:26) - Control, Kill Switches &amp; Agent Boundaries</li>
<li>(30:59) - Workforce Impact &amp; Near-Term Concerns</li>
<li>(35:34) - AI Apocalypse? Final Thoughts &amp; Wrap-Up</li>
</ul><br><a href="https://www.youtube.com/watch?v=pOBOZIrlqAY" title="Click here to watch this episode on YouTube.">Click here to watch this episode on YouTube.</a><br>
<br><strong>Creators &amp; Guests</strong>
<ul>
  <li><a href="https://aisecurityops.transistor.fm/people/beau-bullock">Beau Bullock</a> - Guest</li>
  <li><a href="https://aisecurityops.transistor.fm/people/hayden-covington">Hayden Covington</a> - Guest</li>
  <li><a href="https://aisecurityops.transistor.fm/people/derek-banks">Derek Banks</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/bronwen-aker">Bronwen Aker</a> - Host</li>
</ul></strong><br>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p><p><strong><a href="https://share.transistor.fm/s/b50dad42/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
</strong><br></p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Guest" href="https://aisecurityops.transistor.fm/people/beau-bullock" img="https://img.transistorcdn.com/go6QruAAD-oLSZ3AOmx_slIQKscGGCI7iR2q3JbsvSo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS80ODll/MzkxMDI5MmE5ZDhm/Zjg1NTVmNWFkNzky/NzU4Ni5qcGc.jpg">Beau Bullock</podcast:person>
      <podcast:person role="Guest" href="https://www.blackhillsinfosec.com/team/hayden-covington/" img="https://img.transistorcdn.com/d9STmCQFFzEwTEfio3bG67-OiJ-RJ0UG7VY5mXdG-Cg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS83OGQ0/MzlmNDYyZjU3YTFh/YmZkMjUxNTMxNjI4/OGE0ZC5qcGc.jpg">Hayden Covington</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/bronwen-aker/" img="https://img.transistorcdn.com/0luJs5_XfVVduE7Lwtr94vu1KDtOHd1UfPgq3R8MwgU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzdh/YmM0ZjU2NGExNGYy/NzM0M2UzODIzMmZh/ZGZlMi5qcGc.jpg">Bronwen Aker</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/b50dad42/transcription.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/b50dad42/transcription.srt" type="application/x-subrip" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/b50dad42/transcription.json" type="application/json" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/b50dad42/transcription.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/b50dad42/transcription" type="text/html"/>
      <podcast:chapters url="https://share.transistor.fm/s/b50dad42/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>AI in the SOC: Interview with Hayden Covington and Ethan Robish from the BHIS SOC | Episode 40</title>
      <itunes:episode>40</itunes:episode>
      <podcast:episode>40</podcast:episode>
      <itunes:title>AI in the SOC: Interview with Hayden Covington and Ethan Robish from the BHIS SOC | Episode 40</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">8ba30a6d-c386-45e3-9de0-08575133e9db</guid>
      <link>https://share.transistor.fm/s/b51cf022</link>
      <description>
        <![CDATA[<p>In this episode of BHIS Presents: AI Security Ops, we sit down with Hayden Covington and Ethan Robish from the BHIS Security Operations Center (SOC) to explore how AI is actually being used in modern defensive operations.</p><p>From foundational machine learning techniques like statistical baselining and clustering to large language models assisting with alert triage and reporting, we dig into what works, what doesn’t, and what SOC teams should realistically expect from AI today.</p><p>We break down:</p><p>- How AI helps reduce alert fatigue and improve triage<br>- Practical automation inside a real-world SOC<br>- The difference between traditional ML approaches and LLM-powered workflows<br>- Foundational techniques like K-means, anomaly detection, and behavioral baselining<br>- Using LLMs for enrichment, investigation, and report drafting<br>- Where AI struggles: hallucinations, inconsistency, and edge cases<br>- Risks around over-trusting AI in security operations<br>- How to responsibly integrate AI into analyst workflows</p><p>This episode is grounded in real operational experience—not vendor demos. If you’re running a SOC, building AI tooling, or just trying to separate hype from reality, this conversation will help you think clearly about augmentation vs. automation in defensive security.</p><p><br><strong><ul><li>(00:00) - Intro &amp; Guest Introductions</li>
<li>(04:44) - Alert Triage &amp; SOC Pain Points</li>
<li>(06:04) - Automation Inside the SOC</li>
<li>(09:59) - “Boring AI”: Clustering, Baselining &amp; Statistics</li>
<li>(17:06) - AI-Assisted Reporting &amp; Client Communication</li>
<li>(18:34) - Limitations, Edge Cases &amp; Model Risk</li>
<li>(22:56) - Hallucinations &amp; Inconsistent Outputs</li>
<li>(25:04) - AI Demos vs. Real-World Security Work</li>
<li>(28:35) - Final Thoughts &amp; Closing</li>
</ul><br><a href="https://www.youtube.com/watch?v=EqEJfs7J8Tk" title="Click here to watch this episode on YouTube.">Click here to watch this episode on YouTube.</a><br>
<br><strong>Creators &amp; Guests</strong>
<ul>
  <li><a href="https://aisecurityops.transistor.fm/people/hayden-covington">Hayden Covington</a> - Guest</li>
  <li><a href="https://aisecurityops.transistor.fm/people/ethan-robish">Ethan Robish</a> - Guest</li>
  <li><a href="https://aisecurityops.transistor.fm/people/bronwen-aker">Bronwen Aker</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/derek-banks">Derek Banks</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
</ul></strong><br>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of BHIS Presents: AI Security Ops, we sit down with Hayden Covington and Ethan Robish from the BHIS Security Operations Center (SOC) to explore how AI is actually being used in modern defensive operations.</p><p>From foundational machine learning techniques like statistical baselining and clustering to large language models assisting with alert triage and reporting, we dig into what works, what doesn’t, and what SOC teams should realistically expect from AI today.</p><p>We break down:</p><p>- How AI helps reduce alert fatigue and improve triage<br>- Practical automation inside a real-world SOC<br>- The difference between traditional ML approaches and LLM-powered workflows<br>- Foundational techniques like K-means, anomaly detection, and behavioral baselining<br>- Using LLMs for enrichment, investigation, and report drafting<br>- Where AI struggles: hallucinations, inconsistency, and edge cases<br>- Risks around over-trusting AI in security operations<br>- How to responsibly integrate AI into analyst workflows</p><p>This episode is grounded in real operational experience—not vendor demos. If you’re running a SOC, building AI tooling, or just trying to separate hype from reality, this conversation will help you think clearly about augmentation vs. automation in defensive security.</p><p><br><strong><ul><li>(00:00) - Intro &amp; Guest Introductions</li>
<li>(04:44) - Alert Triage &amp; SOC Pain Points</li>
<li>(06:04) - Automation Inside the SOC</li>
<li>(09:59) - “Boring AI”: Clustering, Baselining &amp; Statistics</li>
<li>(17:06) - AI-Assisted Reporting &amp; Client Communication</li>
<li>(18:34) - Limitations, Edge Cases &amp; Model Risk</li>
<li>(22:56) - Hallucinations &amp; Inconsistent Outputs</li>
<li>(25:04) - AI Demos vs. Real-World Security Work</li>
<li>(28:35) - Final Thoughts &amp; Closing</li>
</ul><br><a href="https://www.youtube.com/watch?v=EqEJfs7J8Tk" title="Click here to watch this episode on YouTube.">Click here to watch this episode on YouTube.</a><br>
<br><strong>Creators &amp; Guests</strong>
<ul>
  <li><a href="https://aisecurityops.transistor.fm/people/hayden-covington">Hayden Covington</a> - Guest</li>
  <li><a href="https://aisecurityops.transistor.fm/people/ethan-robish">Ethan Robish</a> - Guest</li>
  <li><a href="https://aisecurityops.transistor.fm/people/bronwen-aker">Bronwen Aker</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/derek-banks">Derek Banks</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
</ul></strong><br>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p>]]>
      </content:encoded>
      <pubDate>Fri, 20 Feb 2026 09:39:57 -0500</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/b51cf022/cc5b7859.mp3" length="28774435" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/3pMli-szVz_Oys4-UmR8e0o9ZIIIm63mby_SgtmqERU/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9mYjRk/YTg1YzU2ZWFjMDE1/MzA4ZDhhODE0NjVm/NTQ2ZC5wbmc.jpg"/>
      <itunes:duration>1768</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of BHIS Presents: AI Security Ops, we sit down with Hayden Covington and Ethan Robish from the BHIS Security Operations Center (SOC) to explore how AI is actually being used in modern defensive operations.</p><p>From foundational machine learning techniques like statistical baselining and clustering to large language models assisting with alert triage and reporting, we dig into what works, what doesn’t, and what SOC teams should realistically expect from AI today.</p><p>We break down:</p><p>- How AI helps reduce alert fatigue and improve triage<br>- Practical automation inside a real-world SOC<br>- The difference between traditional ML approaches and LLM-powered workflows<br>- Foundational techniques like K-means, anomaly detection, and behavioral baselining<br>- Using LLMs for enrichment, investigation, and report drafting<br>- Where AI struggles: hallucinations, inconsistency, and edge cases<br>- Risks around over-trusting AI in security operations<br>- How to responsibly integrate AI into analyst workflows</p><p>This episode is grounded in real operational experience—not vendor demos. If you’re running a SOC, building AI tooling, or just trying to separate hype from reality, this conversation will help you think clearly about augmentation vs. automation in defensive security.</p><p><br><strong><ul><li>(00:00) - Intro &amp; Guest Introductions</li>
<li>(04:44) - Alert Triage &amp; SOC Pain Points</li>
<li>(06:04) - Automation Inside the SOC</li>
<li>(09:59) - “Boring AI”: Clustering, Baselining &amp; Statistics</li>
<li>(17:06) - AI-Assisted Reporting &amp; Client Communication</li>
<li>(18:34) - Limitations, Edge Cases &amp; Model Risk</li>
<li>(22:56) - Hallucinations &amp; Inconsistent Outputs</li>
<li>(25:04) - AI Demos vs. Real-World Security Work</li>
<li>(28:35) - Final Thoughts &amp; Closing</li>
</ul><br><a href="https://www.youtube.com/watch?v=EqEJfs7J8Tk" title="Click here to watch this episode on YouTube.">Click here to watch this episode on YouTube.</a><br>
<br><strong>Creators &amp; Guests</strong>
<ul>
  <li><a href="https://aisecurityops.transistor.fm/people/hayden-covington">Hayden Covington</a> - Guest</li>
  <li><a href="https://aisecurityops.transistor.fm/people/ethan-robish">Ethan Robish</a> - Guest</li>
  <li><a href="https://aisecurityops.transistor.fm/people/bronwen-aker">Bronwen Aker</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/derek-banks">Derek Banks</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
</ul></strong><br>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Guest" href="https://www.blackhillsinfosec.com/team/hayden-covington/" img="https://img.transistorcdn.com/d9STmCQFFzEwTEfio3bG67-OiJ-RJ0UG7VY5mXdG-Cg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS83OGQ0/MzlmNDYyZjU3YTFh/YmZkMjUxNTMxNjI4/OGE0ZC5qcGc.jpg">Hayden Covington</podcast:person>
      <podcast:person role="Guest" href="https://www.blackhillsinfosec.com/team/ethan-robish/" img="https://img.transistorcdn.com/QuhUiO37j_o04xKLYPo-iH3dl9Kd1mdh6Z-0bnHwGz0/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81N2U3/ZDlmNTkxYTgzMjdl/YWQ0MGJjZWMwZGVj/NzkyMC5qcGc.jpg">Ethan Robish</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/bronwen-aker/" img="https://img.transistorcdn.com/0luJs5_XfVVduE7Lwtr94vu1KDtOHd1UfPgq3R8MwgU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzdh/YmM0ZjU2NGExNGYy/NzM0M2UzODIzMmZh/ZGZlMi5qcGc.jpg">Bronwen Aker</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/b51cf022/transcription.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/b51cf022/transcription.srt" type="application/x-subrip" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/b51cf022/transcription.json" type="application/json" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/b51cf022/transcription.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/b51cf022/transcription" type="text/html"/>
      <podcast:chapters url="https://share.transistor.fm/s/b51cf022/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>AI News | Episode 39</title>
      <itunes:episode>39</itunes:episode>
      <podcast:episode>39</podcast:episode>
      <itunes:title>AI News | Episode 39</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">fd3770b0-95b0-44ac-828b-3e903b9b62d0</guid>
      <link>https://share.transistor.fm/s/139934fb</link>
      <description>
        <![CDATA[<p>AI News | Episode 39</p><p>In this episode of AI Security Ops, we break down the latest developments in AI-driven threats, identity chaos caused by autonomous agents, NIST’s focus on securing AI in critical infrastructure, and new visibility tooling for AI exposure.</p><p>We cover real-world abuse of LLMs for phishing, how AI agents are colliding with IAM governance, and what defenders should be watching right now.</p><p>Chapters:<br>00:00 – Introduction and Sponsors<br>Black Hills Information Security - <a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com/</a><br>Antisyphon Training - <a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p>01:08 – LLM-Generated Phishing JavaScript (Unit 42 / Palo Alto)<br>Discussion begins as the hosts introduce the first story.<br>How LLMs are generating polymorphic malicious JavaScript for phishing pages and evading traditional detection.<br>👉<a href="https://unit42.paloaltonetworks.com/real-time-malicious-javascript-through-llms/"> https://unit42.paloaltonetworks.com/real-time-malicious-javascript-through-llms/</a></p><p>08:49 – AI Agents vs IAM: “Who Approved This Agent?” (Hacker News)<br>Conversation shifts to agent privilege management and governance failures.<br>👉 <a href="https://thehackernews.com/2026/01/who-approved-this-agent-rethinking.html">https://thehackernews.com/2026/01/who-approved-this-agent-rethinking.html</a></p><p>10:07 – NIST Focus on Securing AI Agents in Critical Infrastructure<br>Discussion on federal guidance and why AI agents are being treated as critical infrastructure risk components.<br>👉 <a href="https://www.linkedin.com/pulse/cybersecurity-institute-news-roundup-20-january-2026-entrust-alz7c">https://www.linkedin.com/pulse/cybersecurity-institute-news-roundup-20-january-2026-entrust-alz7c</a></p><p>13:44 – Tenable One AI Exposure<br>Breaking down Tenable’s push into enterprise AI usage visibility and exposure 
management.<br>👉 <a href="https://www.tenable.com/blog/tenable-one-ai-exposure-secure-ai-usage-at-scale">https://www.tenable.com/blog/tenable-one-ai-exposure-secure-ai-usage-at-scale</a></p><p><br>Join the 5,000+ cybersecurity professionals on our BHIS Discord server to ask questions and share your knowledge about AI Security. <br><a href="https://discord.gg/bhis">https://discord.gg/bhis</a></p><p>Chapters<br></p><ul><li>(00:00) - Introduction and Sponsors</li>
<li>(01:08) - LLM-Generated Phishing JavaScript (Unit 42 / Palo Alto)</li>
<li>(08:49) - AI Agents vs IAM: “Who Approved This Agent?” (Hacker News)</li>
<li>(10:07) - NIST Focus on Securing AI Agents in Critical Infrastructure</li>
<li>(13:44) - Tenable One AI Exposure</li>
</ul><br><strong>Creators &amp; Guests</strong>
<ul>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/bronwen-aker">Bronwen Aker</a> - Host</li>
</ul><br><a href="https://www.youtube.com/watch?v=GdNbJZFZV50" title="Click here to watch this episode on YouTube.">Click here to watch this episode on YouTube.</a><br>
<br>----------------------------------------------------------------------------------------------<br>About Joff Thyer - https://www.blackhillsinfosec.com/team/joff-thyer/<br>About Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/<br>About Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/<br>About Bronwen Aker - https://www.blackhillsinfosec.com/team/bronwen-aker/<br>About Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/<p>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p><p><a href="https://share.transistor.fm/s/139934fb/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>AI News | Episode 39</p><p>In this episode of AI Security Ops, we break down the latest developments in AI-driven threats, identity chaos caused by autonomous agents, NIST’s focus on securing AI in critical infrastructure, and new visibility tooling for AI exposure.</p><p>We cover real-world abuse of LLMs for phishing, how AI agents are colliding with IAM governance, and what defenders should be watching right now.</p><p>Chapters:<br>00:00 – Introduction and Sponsors<br>Black Hills Information Security - <a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com/</a><br>Antisyphon Training - <a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p>01:08 – LLM-Generated Phishing JavaScript (Unit 42 / Palo Alto)<br>Discussion begins as the hosts introduce the first story.<br>How LLMs are generating polymorphic malicious JavaScript for phishing pages and evading traditional detection.<br>👉<a href="https://unit42.paloaltonetworks.com/real-time-malicious-javascript-through-llms/"> https://unit42.paloaltonetworks.com/real-time-malicious-javascript-through-llms/</a></p><p>08:49 – AI Agents vs IAM: “Who Approved This Agent?” (Hacker News)<br>Conversation shifts to agent privilege management and governance failures.<br>👉 <a href="https://thehackernews.com/2026/01/who-approved-this-agent-rethinking.html">https://thehackernews.com/2026/01/who-approved-this-agent-rethinking.html</a></p><p>10:07 – NIST Focus on Securing AI Agents in Critical Infrastructure<br>Discussion on federal guidance and why AI agents are being treated as critical infrastructure risk components.<br>👉 <a href="https://www.linkedin.com/pulse/cybersecurity-institute-news-roundup-20-january-2026-entrust-alz7c">https://www.linkedin.com/pulse/cybersecurity-institute-news-roundup-20-january-2026-entrust-alz7c</a></p><p>13:44 – Tenable One AI Exposure<br>Breaking down Tenable’s push into enterprise AI usage visibility and exposure 
management.<br>👉 <a href="https://www.tenable.com/blog/tenable-one-ai-exposure-secure-ai-usage-at-scale">https://www.tenable.com/blog/tenable-one-ai-exposure-secure-ai-usage-at-scale</a></p><p><br>Join the 5,000+ cybersecurity professionals on our BHIS Discord server to ask questions and share your knowledge about AI Security. <br><a href="https://discord.gg/bhis">https://discord.gg/bhis</a></p><p>Chapters<br></p><ul><li>(00:00) - Introduction and Sponsors</li>
<li>(01:08) - LLM-Generated Phishing JavaScript (Unit 42 / Palo Alto)</li>
<li>(08:49) - AI Agents vs IAM: “Who Approved This Agent?” (Hacker News)</li>
<li>(10:07) - NIST Focus on Securing AI Agents in Critical Infrastructure</li>
<li>(13:44) - Tenable One AI Exposure</li>
</ul><br><strong>Creators &amp; Guests</strong>
<ul>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/bronwen-aker">Bronwen Aker</a> - Host</li>
</ul><br><a href="https://www.youtube.com/watch?v=GdNbJZFZV50" title="Click here to watch this episode on YouTube.">Click here to watch this episode on YouTube.</a><br>
<br>----------------------------------------------------------------------------------------------<br>About Joff Thyer - https://www.blackhillsinfosec.com/team/joff-thyer/<br>About Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/<br>About Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/<br>About Bronwen Aker - https://www.blackhillsinfosec.com/team/bronwen-aker/<br>About Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/<p>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p><p><a href="https://share.transistor.fm/s/139934fb/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
</p>]]>
      </content:encoded>
      <pubDate>Thu, 12 Feb 2026 08:00:00 -0500</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/139934fb/802d72dd.mp3" length="17454786" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/NjtVXspFhtAhpFEFZT-DELDwH6hLe9mR53B1bLEQsgM/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS83ZjJh/ZmVjNjE0NmJjNTAw/YWRmN2UzZjMzYzIw/YTM3Zi5wbmc.jpg"/>
      <itunes:duration>1088</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>AI News | Episode 39</p><p>In this episode of AI Security Ops, we break down the latest developments in AI-driven threats, identity chaos caused by autonomous agents, NIST’s focus on securing AI in critical infrastructure, and new visibility tooling for AI exposure.</p><p>We cover real-world abuse of LLMs for phishing, how AI agents are colliding with IAM governance, and what defenders should be watching right now.</p><p>Chapters:<br>00:00 – Introduction and Sponsors<br>Black Hills Information Security - <a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com/</a><br>Antisyphon Training - <a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p>01:08 – LLM-Generated Phishing JavaScript (Unit 42 / Palo Alto)<br>Discussion begins as the hosts introduce the first story.<br>How LLMs are generating polymorphic malicious JavaScript for phishing pages and evading traditional detection.<br>👉<a href="https://unit42.paloaltonetworks.com/real-time-malicious-javascript-through-llms/"> https://unit42.paloaltonetworks.com/real-time-malicious-javascript-through-llms/</a></p><p>08:49 – AI Agents vs IAM: “Who Approved This Agent?” (Hacker News)<br>Conversation shifts to agent privilege management and governance failures.<br>👉 <a href="https://thehackernews.com/2026/01/who-approved-this-agent-rethinking.html">https://thehackernews.com/2026/01/who-approved-this-agent-rethinking.html</a></p><p>10:07 – NIST Focus on Securing AI Agents in Critical Infrastructure<br>Discussion on federal guidance and why AI agents are being treated as critical infrastructure risk components.<br>👉 <a href="https://www.linkedin.com/pulse/cybersecurity-institute-news-roundup-20-january-2026-entrust-alz7c">https://www.linkedin.com/pulse/cybersecurity-institute-news-roundup-20-january-2026-entrust-alz7c</a></p><p>13:44 – Tenable One AI Exposure<br>Breaking down Tenable’s push into enterprise AI usage visibility and exposure 
management.<br>👉 <a href="https://www.tenable.com/blog/tenable-one-ai-exposure-secure-ai-usage-at-scale">https://www.tenable.com/blog/tenable-one-ai-exposure-secure-ai-usage-at-scale</a></p><p><br>Join the 5,000+ cybersecurity professionals on our BHIS Discord server to ask questions and share your knowledge about AI Security. <br><a href="https://discord.gg/bhis">https://discord.gg/bhis</a></p><p>Chapters<br></p><ul><li>(00:00) - Introduction and Sponsors</li>
<li>(01:08) - LLM-Generated Phishing JavaScript (Unit 42 / Palo Alto)</li>
<li>(08:49) - AI Agents vs IAM: “Who Approved This Agent?” (Hacker News)</li>
<li>(10:07) - NIST Focus on Securing AI Agents in Critical Infrastructure</li>
<li>(13:44) - Tenable One AI Exposure</li>
</ul><br><strong>Creators &amp; Guests</strong>
<ul>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/bronwen-aker">Bronwen Aker</a> - Host</li>
</ul><br><a href="https://www.youtube.com/watch?v=GdNbJZFZV50" title="Click here to watch this episode on YouTube.">Click here to watch this episode on YouTube.</a><br>
<br>----------------------------------------------------------------------------------------------<br>About Joff Thyer - https://www.blackhillsinfosec.com/team/joff-thyer/<br>About Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/<br>About Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/<br>About Bronwen Aker - https://www.blackhillsinfosec.com/team/bronwen-aker/<br>About Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/<p>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p><p><a href="https://share.transistor.fm/s/139934fb/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
</p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/bronwen-aker/" img="https://img.transistorcdn.com/0luJs5_XfVVduE7Lwtr94vu1KDtOHd1UfPgq3R8MwgU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzdh/YmM0ZjU2NGExNGYy/NzM0M2UzODIzMmZh/ZGZlMi5qcGc.jpg">Bronwen Aker</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/139934fb/transcription.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/139934fb/transcription.srt" type="application/x-subrip" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/139934fb/transcription.json" type="application/json" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/139934fb/transcription.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/139934fb/transcription" type="text/html"/>
      <podcast:chapters url="https://share.transistor.fm/s/139934fb/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Questions From the Community | Episode 38</title>
      <itunes:episode>38</itunes:episode>
      <podcast:episode>38</podcast:episode>
      <itunes:title>Questions From the Community | Episode 38</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">6f8d619b-e376-4757-9973-0bec369bc811</guid>
      <link>https://share.transistor.fm/s/9c40ade5</link>
      <description>
        <![CDATA[<p><strong><br><a href="https://www.youtube.com/watch?v=-fe-pd9Nc5E" title="Click here to watch this episode on YouTube.">Click here to watch this episode on YouTube.</a><br>
<br><strong>Creators &amp; Guests</strong>
<ul>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/joff-thyer">Joff Thyer</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/derek-banks">Derek Banks</a> - Host</li>
</ul></strong></p><p>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p><p><a href="https://share.transistor.fm/s/9c40ade5/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><strong><br><a href="https://www.youtube.com/watch?v=-fe-pd9Nc5E" title="Click here to watch this episode on YouTube.">Click here to watch this episode on YouTube.</a><br>
<br><strong>Creators &amp; Guests</strong>
<ul>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/joff-thyer">Joff Thyer</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/derek-banks">Derek Banks</a> - Host</li>
</ul></strong></p><p>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p><p><a href="https://share.transistor.fm/s/9c40ade5/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
</p>]]>
      </content:encoded>
      <pubDate>Thu, 05 Feb 2026 06:00:00 -0500</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/9c40ade5/d2aafe2f.mp3" length="16403035" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/trPzJgIosNcupFCiDyeNQtLtQuf12bkUjy2UXScCLas/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZTlk/MjY3MWM3NGIwNDM1/YTM0ZjQ1MzEzNzUz/ZDZiZi5wbmc.jpg"/>
      <itunes:duration>995</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><strong><br><a href="https://www.youtube.com/watch?v=-fe-pd9Nc5E" title="Click here to watch this episode on YouTube.">Click here to watch this episode on YouTube.</a><br>
<br><strong>Creators &amp; Guests</strong>
<ul>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/joff-thyer">Joff Thyer</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/derek-banks">Derek Banks</a> - Host</li>
</ul></strong></p><p>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p><p><a href="https://share.transistor.fm/s/9c40ade5/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
</p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/9c40ade5/transcription.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/9c40ade5/transcription.srt" type="application/x-subrip" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/9c40ade5/transcription.json" type="application/json" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/9c40ade5/transcription.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/9c40ade5/transcription" type="text/html"/>
    </item>
    <item>
      <title>A.I. Frameworks and Databases | Episode 37</title>
      <itunes:episode>37</itunes:episode>
      <podcast:episode>37</podcast:episode>
      <itunes:title>A.I. Frameworks and Databases | Episode 37</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4a233281-6b51-4d3b-a17d-d04eea9758c9</guid>
      <link>https://share.transistor.fm/s/810ce64d</link>
      <description>
        <![CDATA[<p>In Episode 37 of AI Security Ops, the team breaks down the most important AI security frameworks and vulnerability databases used to track risks in machine learning and large language models. The discussion covers emerging AI vulnerability databases, the OWASP Top 10 for LLMs, CVE challenges, and frameworks like MITRE ATLAS, highlighting why standardizing AI threats is still difficult. This episode is a practical guide for security professionals looking to stay ahead of AI vulnerabilities, attack techniques, and defensive resources in a fast-moving landscape.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Episode 37 – AI Frameworks &amp; Databases</li>
<li>(01:39) - A.I. vulnerability tracking is still young</li>
<li>(02:44) - Should A.I. get its own vulnerability database?</li>
<li>(07:33) - The benefit of multiple vulnerability databases</li>
<li>(15:58) - What is the definition of a vulnerability?</li>
<li>(17:54) - Final Thoughts</li>
</ul><br>Brought to you by:<p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In Episode 37 of AI Security Ops, the team breaks down the most important AI security frameworks and vulnerability databases used to track risks in machine learning and large language models. The discussion covers emerging AI vulnerability databases, the OWASP Top 10 for LLMs, CVE challenges, and frameworks like MITRE ATLAS, highlighting why standardizing AI threats is still difficult. This episode is a practical guide for security professionals looking to stay ahead of AI vulnerabilities, attack techniques, and defensive resources in a fast-moving landscape.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Episode 37 – AI Frameworks &amp; Databases</li>
<li>(01:39) - A.I. vulnerability tracking is still young</li>
<li>(02:44) - Should A.I. get its own vulnerability database?</li>
<li>(07:33) - The benefit of multiple vulnerability databases</li>
<li>(15:58) - What is the definition of a vulnerability?</li>
<li>(17:54) - Final Thoughts</li>
</ul><br>Brought to you by:<p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p>]]>
      </content:encoded>
      <pubDate>Fri, 30 Jan 2026 15:04:06 -0500</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/810ce64d/9a83bbc5.mp3" length="18926455" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/Ff2BTSHIsb0lwv9uJsQdudTvu3EAkP3zbrTpQhmTMkg/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ODA0/ZjI5NzA5MTA4NTI2/MTQ4MjYyOGEyMmQ3/NjViNC5wbmc.jpg"/>
      <itunes:duration>1130</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In Episode 37 of AI Security Ops, the team breaks down the most important AI security frameworks and vulnerability databases used to track risks in machine learning and large language models. The discussion covers emerging AI vulnerability databases, the OWASP Top 10 for LLMs, CVE challenges, and frameworks like MITRE ATLAS, highlighting why standardizing AI threats is still difficult. This episode is a practical guide for security professionals looking to stay ahead of AI vulnerabilities, attack techniques, and defensive resources in a fast-moving landscape.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Episode 37 – AI Frameworks &amp; Databases</li>
<li>(01:39) - AI vulnerability tracking is still young</li>
<li>(02:44) - Should AI get its own vulnerability database?</li>
<li>(07:33) - The benefit of multiple vulnerability databases</li>
<li>(15:58) - What is the definition of a vulnerability?</li>
<li>(17:54) - Final Thoughts</li>
</ul><br>Brought to you by:<p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a> </p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/bronwen-aker/" img="https://img.transistorcdn.com/0luJs5_XfVVduE7Lwtr94vu1KDtOHd1UfPgq3R8MwgU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzdh/YmM0ZjU2NGExNGYy/NzM0M2UzODIzMmZh/ZGZlMi5qcGc.jpg">Bronwen Aker</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/810ce64d/transcription.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/810ce64d/transcription.srt" type="application/x-subrip" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/810ce64d/transcription.json" type="application/json" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/810ce64d/transcription.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/810ce64d/transcription" type="text/html"/>
      <podcast:chapters url="https://share.transistor.fm/s/810ce64d/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>AI News Stories | Episode 36</title>
      <itunes:episode>36</itunes:episode>
      <podcast:episode>36</podcast:episode>
      <itunes:title>AI News Stories | Episode 36</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ca293f30-b2ce-4e67-86a4-e017925f6e84</guid>
      <link>https://share.transistor.fm/s/3a816e83</link>
      <description>
        <![CDATA[<p>This week on <strong>AI Security Ops</strong>, the team breaks down how attackers are weaponizing AI and the tools around it: a critical <strong>n8n</strong> zero-day that can lead to unauthenticated remote code execution, <strong>prompt-injection “zombie agent”</strong> risks tied to ChatGPT memory, a <strong>zero-click-style indirect prompt injection </strong>scenario via email/URLs, and <strong>malicious Chrome extensions</strong> caught siphoning ChatGPT/DeepSeek chats at scale. They close with a reminder that the tactics are often “same old security problems,” just amplified by AI—so lock down orchestration, limit browser extensions, and keep sensitive data out of chat tools.</p><p>Key stories discussed</p><p><strong>1) n8n (“n-eight-n”) zero-day → unauthenticated RCE risk</strong></p><ul><li><a href="https://thehackernews.com/2026/01/critical-n8n-vulnerability-cvss-100.html">https://thehackernews.com/2026/01/critical-n8n-vulnerability-cvss-100.html</a></li><li>The hosts discuss a critical flaw in the <strong>n8n</strong> workflow automation platform where a workflow-parsing HTTP endpoint can be abused (via a crafted JSON payload) to achieve <strong>remote code execution</strong> as the n8n service account. Because automation/orchestration platforms often have broad internal access, one compromise can cascade quickly across an organization’s automation layer.</li><li>Practical takeaway: don’t expose orchestration platforms directly to the internet; restrict who/what can talk to them; treat these “glue” systems as high-impact targets and assess them like any other production system.</li></ul><p><strong>2) “Zombie agent” prompt injection via ChatGPT Memory</strong></p><ul><li><a href="https://www.darkreading.com/endpoint-security/chatgpt-memory-feature-prompt-injection">https://www.darkreading.com/endpoint-security/chatgpt-memory-feature-prompt-injection</a></li><li>The team talks about research describing an exploit that <strong>stores malicious instructions in long-term memory</strong>, then later triggers them with a benign prompt—leading to potential data leakage or unsafe tool actions if the model has integrations. The discussion frames this as “stored XSS vibes,” but harder to solve because the “feature” (following instructions/context) is also the root problem.</li><li>User-side mitigation themes: consider disabling memory, keep chats cleaned up, and avoid putting sensitive data into chat tools—especially when agents/tools are involved.</li></ul><p><strong>3) “Zero-click” agentic abuse via crafted email/URL (indirect prompt injection)</strong></p><ul><li><a href="https://www.infosecurity-magazine.com/news/new-zeroclick-attack-chatgpt/">https://www.infosecurity-magazine.com/news/new-zeroclick-attack-chatgpt/</a></li><li>Another story describes a <strong>crafted URL delivered via email</strong> that could trigger an agentic workflow (e.g., email summarization / agent actions) to export chat logs without explicit user interaction. The hosts largely interpret this as <strong>indirect prompt injection</strong>—a pattern they expect to keep seeing as assistants gain more connectivity.</li><li>Key point: even if the exact implementation varies, auto-processing untrusted content (like email) is a persistent risk when the model can take actions or access history.</li></ul><p><strong>4) Malicious Chrome extensions stealing ChatGPT/DeepSeek chats (900k users)</strong></p><ul><li><a href="https://thehackernews.com/2026/01/two-chrome-extensions-caught-stealing.html">https://thehackernews.com/2026/01/two-chrome-extensions-caught-stealing.html</a></li><li>Two Chrome extensions posing as AI productivity tools reportedly <strong>injected JavaScript into AI web UIs</strong>, scraped chat text from the DOM, and exfiltrated it—highlighting ongoing <strong>extension supply-chain</strong> risk and the reality that “approved store” doesn’t mean safe.</li><li>Advice echoed: minimize extensions, separate browsers/profiles for sensitive activities, and treat “AI sidebar” tools with extra skepticism.</li></ul><p><strong>5) APT28 credential phishing updated with AI-written lures</strong></p><ul><li><a href="https://thehackernews.com/2026/01/russian-apt28-runs-credential-stealing.html">https://thehackernews.com/2026/01/russian-apt28-runs-credential-stealing.html</a></li><li>The closing story is a familiar APT pattern—phishing emails with malicious Office docs leading to PowerShell loaders and credential theft—except the lure text is <strong>AI-generated</strong>, making it more consistent/convincing (and harder for users to spot via grammar/tone).</li><li>The conversation stresses that “don’t click links” guidance is oversimplified; verification and layered controls matter (e.g., disabling macros org-wide).</li></ul><p><strong>Chapter Timestamps</strong><br></p><ul><li>(00:00) - Intro &amp; Sponsors</li>
<li>(01:16) - 1) n8n zero-day → unauthenticated RCE</li>
<li>(09:00) - 2) “Zombie agent” prompt injection via ChatGPT Memory</li>
<li>(19:52) - 3) “Zero-click” style agent abuse via crafted email/URL (indirect prompt injection)</li>
<li>(23:41) - 4) Malicious Chrome extensions stealing ChatGPT/DeepSeek chats (~900k users)</li>
<li>(29:59) - 5) APT28 phishing refreshed with AI-written lures</li>
<li>(34:15) - Closing thoughts: “AI genie is out of the bottle” + safety reminders</li>
</ul><p><a href="https://www.youtube.com/watch?v=bfJWxDvQPnE" title="Click here to watch a video of this episode.">Click here to watch a video of this episode.</a><br>
<strong>Creators &amp; Guests</strong>
</p><ul>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/bronwen-aker">Bronwen Aker</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/derek-banks">Derek Banks</a> - Host</li>
</ul><br><p>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com">https://poweredbybhis.com</a> </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week on <strong>AI Security Ops</strong>, the team breaks down how attackers are weaponizing AI and the tools around it: a critical <strong>n8n</strong> zero-day that can lead to unauthenticated remote code execution, <strong>prompt-injection “zombie agent”</strong> risks tied to ChatGPT memory, a <strong>zero-click-style indirect prompt injection </strong>scenario via email/URLs, and <strong>malicious Chrome extensions</strong> caught siphoning ChatGPT/DeepSeek chats at scale. They close with a reminder that the tactics are often “same old security problems,” just amplified by AI—so lock down orchestration, limit browser extensions, and keep sensitive data out of chat tools.</p><p>Key stories discussed</p><p><strong>1) n8n (“n-eight-n”) zero-day → unauthenticated RCE risk</strong></p><ul><li><a href="https://thehackernews.com/2026/01/critical-n8n-vulnerability-cvss-100.html">https://thehackernews.com/2026/01/critical-n8n-vulnerability-cvss-100.html</a></li><li>The hosts discuss a critical flaw in the <strong>n8n</strong> workflow automation platform where a workflow-parsing HTTP endpoint can be abused (via a crafted JSON payload) to achieve <strong>remote code execution</strong> as the n8n service account. Because automation/orchestration platforms often have broad internal access, one compromise can cascade quickly across an organization’s automation layer.</li><li>Practical takeaway: don’t expose orchestration platforms directly to the internet; restrict who/what can talk to them; treat these “glue” systems as high-impact targets and assess them like any other production system.</li></ul><p><strong>2) “Zombie agent” prompt injection via ChatGPT Memory</strong></p><ul><li><a href="https://www.darkreading.com/endpoint-security/chatgpt-memory-feature-prompt-injection">https://www.darkreading.com/endpoint-security/chatgpt-memory-feature-prompt-injection</a></li><li>The team talks about research describing an exploit that <strong>stores malicious instructions in long-term memory</strong>, then later triggers them with a benign prompt—leading to potential data leakage or unsafe tool actions if the model has integrations. The discussion frames this as “stored XSS vibes,” but harder to solve because the “feature” (following instructions/context) is also the root problem.</li><li>User-side mitigation themes: consider disabling memory, keep chats cleaned up, and avoid putting sensitive data into chat tools—especially when agents/tools are involved.</li></ul><p><strong>3) “Zero-click” agentic abuse via crafted email/URL (indirect prompt injection)</strong></p><ul><li><a href="https://www.infosecurity-magazine.com/news/new-zeroclick-attack-chatgpt/">https://www.infosecurity-magazine.com/news/new-zeroclick-attack-chatgpt/</a></li><li>Another story describes a <strong>crafted URL delivered via email</strong> that could trigger an agentic workflow (e.g., email summarization / agent actions) to export chat logs without explicit user interaction. The hosts largely interpret this as <strong>indirect prompt injection</strong>—a pattern they expect to keep seeing as assistants gain more connectivity.</li><li>Key point: even if the exact implementation varies, auto-processing untrusted content (like email) is a persistent risk when the model can take actions or access history.</li></ul><p><strong>4) Malicious Chrome extensions stealing ChatGPT/DeepSeek chats (900k users)</strong></p><ul><li><a href="https://thehackernews.com/2026/01/two-chrome-extensions-caught-stealing.html">https://thehackernews.com/2026/01/two-chrome-extensions-caught-stealing.html</a></li><li>Two Chrome extensions posing as AI productivity tools reportedly <strong>injected JavaScript into AI web UIs</strong>, scraped chat text from the DOM, and exfiltrated it—highlighting ongoing <strong>extension supply-chain</strong> risk and the reality that “approved store” doesn’t mean safe.</li><li>Advice echoed: minimize extensions, separate browsers/profiles for sensitive activities, and treat “AI sidebar” tools with extra skepticism.</li></ul><p><strong>5) APT28 credential phishing updated with AI-written lures</strong></p><ul><li><a href="https://thehackernews.com/2026/01/russian-apt28-runs-credential-stealing.html">https://thehackernews.com/2026/01/russian-apt28-runs-credential-stealing.html</a></li><li>The closing story is a familiar APT pattern—phishing emails with malicious Office docs leading to PowerShell loaders and credential theft—except the lure text is <strong>AI-generated</strong>, making it more consistent/convincing (and harder for users to spot via grammar/tone).</li><li>The conversation stresses that “don’t click links” guidance is oversimplified; verification and layered controls matter (e.g., disabling macros org-wide).</li></ul><p><strong>Chapter Timestamps</strong><br></p><ul><li>(00:00) - Intro &amp; Sponsors</li>
<li>(01:16) - 1) n8n zero-day → unauthenticated RCE</li>
<li>(09:00) - 2) “Zombie agent” prompt injection via ChatGPT Memory</li>
<li>(19:52) - 3) “Zero-click” style agent abuse via crafted email/URL (indirect prompt injection)</li>
<li>(23:41) - 4) Malicious Chrome extensions stealing ChatGPT/DeepSeek chats (~900k users)</li>
<li>(29:59) - 5) APT28 phishing refreshed with AI-written lures</li>
<li>(34:15) - Closing thoughts: “AI genie is out of the bottle” + safety reminders</li>
</ul><p><a href="https://www.youtube.com/watch?v=bfJWxDvQPnE" title="Click here to watch a video of this episode.">Click here to watch a video of this episode.</a><br>
<strong>Creators &amp; Guests</strong>
</p><ul>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/bronwen-aker">Bronwen Aker</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/derek-banks">Derek Banks</a> - Host</li>
</ul><br><p>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com">https://poweredbybhis.com</a> </p>]]>
      </content:encoded>
      <pubDate>Thu, 22 Jan 2026 16:03:10 -0500</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/3a816e83/dc099d41.mp3" length="33899547" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/VFUXTRCizlhR1XlPQPpz-8OplwhI4UZwl9gVP2ZOjsA/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yMjM4/OTE0MTBhZjIzZGVk/Yzk3YzQxZjQ1ODI0/ZDJiYS5wbmc.jpg"/>
      <itunes:duration>2116</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This week on <strong>AI Security Ops</strong>, the team breaks down how attackers are weaponizing AI and the tools around it: a critical <strong>n8n</strong> zero-day that can lead to unauthenticated remote code execution, <strong>prompt-injection “zombie agent”</strong> risks tied to ChatGPT memory, a <strong>zero-click-style indirect prompt injection </strong>scenario via email/URLs, and <strong>malicious Chrome extensions</strong> caught siphoning ChatGPT/DeepSeek chats at scale. They close with a reminder that the tactics are often “same old security problems,” just amplified by AI—so lock down orchestration, limit browser extensions, and keep sensitive data out of chat tools.</p><p>Key stories discussed</p><p><strong>1) n8n (“n-eight-n”) zero-day → unauthenticated RCE risk</strong></p><ul><li><a href="https://thehackernews.com/2026/01/critical-n8n-vulnerability-cvss-100.html">https://thehackernews.com/2026/01/critical-n8n-vulnerability-cvss-100.html</a></li><li>The hosts discuss a critical flaw in the <strong>n8n</strong> workflow automation platform where a workflow-parsing HTTP endpoint can be abused (via a crafted JSON payload) to achieve <strong>remote code execution</strong> as the n8n service account. Because automation/orchestration platforms often have broad internal access, one compromise can cascade quickly across an organization’s automation layer.</li><li>Practical takeaway: don’t expose orchestration platforms directly to the internet; restrict who/what can talk to them; treat these “glue” systems as high-impact targets and assess them like any other production system.</li></ul><p><strong>2) “Zombie agent” prompt injection via ChatGPT Memory</strong></p><ul><li><a href="https://www.darkreading.com/endpoint-security/chatgpt-memory-feature-prompt-injection">https://www.darkreading.com/endpoint-security/chatgpt-memory-feature-prompt-injection</a></li><li>The team talks about research describing an exploit that <strong>stores malicious instructions in long-term memory</strong>, then later triggers them with a benign prompt—leading to potential data leakage or unsafe tool actions if the model has integrations. The discussion frames this as “stored XSS vibes,” but harder to solve because the “feature” (following instructions/context) is also the root problem.</li><li>User-side mitigation themes: consider disabling memory, keep chats cleaned up, and avoid putting sensitive data into chat tools—especially when agents/tools are involved.</li></ul><p><strong>3) “Zero-click” agentic abuse via crafted email/URL (indirect prompt injection)</strong></p><ul><li><a href="https://www.infosecurity-magazine.com/news/new-zeroclick-attack-chatgpt/">https://www.infosecurity-magazine.com/news/new-zeroclick-attack-chatgpt/</a></li><li>Another story describes a <strong>crafted URL delivered via email</strong> that could trigger an agentic workflow (e.g., email summarization / agent actions) to export chat logs without explicit user interaction. The hosts largely interpret this as <strong>indirect prompt injection</strong>—a pattern they expect to keep seeing as assistants gain more connectivity.</li><li>Key point: even if the exact implementation varies, auto-processing untrusted content (like email) is a persistent risk when the model can take actions or access history.</li></ul><p><strong>4) Malicious Chrome extensions stealing ChatGPT/DeepSeek chats (900k users)</strong></p><ul><li><a href="https://thehackernews.com/2026/01/two-chrome-extensions-caught-stealing.html">https://thehackernews.com/2026/01/two-chrome-extensions-caught-stealing.html</a></li><li>Two Chrome extensions posing as AI productivity tools reportedly <strong>injected JavaScript into AI web UIs</strong>, scraped chat text from the DOM, and exfiltrated it—highlighting ongoing <strong>extension supply-chain</strong> risk and the reality that “approved store” doesn’t mean safe.</li><li>Advice echoed: minimize extensions, separate browsers/profiles for sensitive activities, and treat “AI sidebar” tools with extra skepticism.</li></ul><p><strong>5) APT28 credential phishing updated with AI-written lures</strong></p><ul><li><a href="https://thehackernews.com/2026/01/russian-apt28-runs-credential-stealing.html">https://thehackernews.com/2026/01/russian-apt28-runs-credential-stealing.html</a></li><li>The closing story is a familiar APT pattern—phishing emails with malicious Office docs leading to PowerShell loaders and credential theft—except the lure text is <strong>AI-generated</strong>, making it more consistent/convincing (and harder for users to spot via grammar/tone).</li><li>The conversation stresses that “don’t click links” guidance is oversimplified; verification and layered controls matter (e.g., disabling macros org-wide).</li></ul><p><strong>Chapter Timestamps</strong><br></p><ul><li>(00:00) - Intro &amp; Sponsors</li>
<li>(01:16) - 1) n8n zero-day → unauthenticated RCE</li>
<li>(09:00) - 2) “Zombie agent” prompt injection via ChatGPT Memory</li>
<li>(19:52) - 3) “Zero-click” style agent abuse via crafted email/URL (indirect prompt injection)</li>
<li>(23:41) - 4) Malicious Chrome extensions stealing ChatGPT/DeepSeek chats (~900k users)</li>
<li>(29:59) - 5) APT28 phishing refreshed with AI-written lures</li>
<li>(34:15) - Closing thoughts: “AI genie is out of the bottle” + safety reminders</li>
</ul><p><a href="https://www.youtube.com/watch?v=bfJWxDvQPnE" title="Click here to watch a video of this episode.">Click here to watch a video of this episode.</a><br>
<strong>Creators &amp; Guests</strong>
</p><ul>
  <li><a href="https://aisecurityops.transistor.fm/people/brian-fehrman">Brian Fehrman</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/bronwen-aker">Bronwen Aker</a> - Host</li>
  <li><a href="https://aisecurityops.transistor.fm/people/derek-banks">Derek Banks</a> - Host</li>
</ul><br><p>Brought to you by:</p><p><strong>Black Hills Information Security </strong></p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p><strong>Antisyphon Training</strong></p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p><strong>Active Countermeasures</strong></p><p><a href="https://www.activecountermeasures.com">https://www.activecountermeasures.com</a></p><p><br></p><p><strong>Wild West Hackin Fest</strong></p><p><a href="https://wildwesthackinfest.com">https://wildwesthackinfest.com</a></p><p><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits</strong><br><a href="https://poweredbybhis.com">https://poweredbybhis.com</a> </p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/bronwen-aker/" img="https://img.transistorcdn.com/0luJs5_XfVVduE7Lwtr94vu1KDtOHd1UfPgq3R8MwgU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzdh/YmM0ZjU2NGExNGYy/NzM0M2UzODIzMmZh/ZGZlMi5qcGc.jpg">Bronwen Aker</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/3a816e83/transcription.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/3a816e83/transcription.srt" type="application/x-subrip" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/3a816e83/transcription.json" type="application/json" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/3a816e83/transcription.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/3a816e83/transcription" type="text/html"/>
      <podcast:chapters url="https://share.transistor.fm/s/3a816e83/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>2026 Predictions | Episode 35</title>
      <itunes:episode>35</itunes:episode>
      <podcast:episode>35</podcast:episode>
      <itunes:title>2026 Predictions | Episode 35</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d6ed0990-766f-4a5d-8181-94c40f1ad985</guid>
      <link>https://share.transistor.fm/s/9e06d6c0</link>
      <description>
        <![CDATA[<p><strong>AI Security Ops | Episode 35 – 2026 Predictions<br></strong><br>In this episode, the BHIS panel looks into the crystal ball and shares bold predictions for AI in 2026—from energy constraints and drug development breakthroughs to agentic AI risks and cybersecurity threats.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Intro &amp; Sponsor Shoutouts</li>
<li>(01:14) - Prediction: Grid Power Becomes the Bottleneck</li>
<li>(10:27) - Prediction: FDA Qualifies AI Drug Development Tools</li>
<li>(15:45) - Prediction: Nation-State Threat Actors Weaponize AI</li>
<li>(17:33) - Prediction: Agentic AI Dominates App Development</li>
<li>(23:07) - Closing Thoughts: Jobs, Risk &amp; Opportunity</li>
</ul><br>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – <p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p><br></p><p>Brought to you by:</p><p>Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>Antisyphon Training</p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p>Active Countermeasures</p><p><a href="https://www.activecountermeasures.com">https://www.activecountermeasures.com</a></p><p><br></p><p>Wild West Hackin Fest</p><p><a href="https://wildwesthackinfest.com">https://wildwesthackinfest.com</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><strong>AI Security Ops | Episode 35 – 2026 Predictions<br></strong><br>In this episode, the BHIS panel looks into the crystal ball and shares bold predictions for AI in 2026—from energy constraints and drug development breakthroughs to agentic AI risks and cybersecurity threats.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Intro &amp; Sponsor Shoutouts</li>
<li>(01:14) - Prediction: Grid Power Becomes the Bottleneck</li>
<li>(10:27) - Prediction: FDA Qualifies AI Drug Development Tools</li>
<li>(15:45) - Prediction: Nation-State Threat Actors Weaponize AI</li>
<li>(17:33) - Prediction: Agentic AI Dominates App Development</li>
<li>(23:07) - Closing Thoughts: Jobs, Risk &amp; Opportunity</li>
</ul><br>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – <p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p><br></p><p>Brought to you by:</p><p>Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>Antisyphon Training</p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p>Active Countermeasures</p><p><a href="https://www.activecountermeasures.com">https://www.activecountermeasures.com</a></p><p><br></p><p>Wild West Hackin Fest</p><p><a href="https://wildwesthackinfest.com">https://wildwesthackinfest.com</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </content:encoded>
      <pubDate>Thu, 08 Jan 2026 08:00:00 -0500</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/9e06d6c0/f5251835.mp3" length="24316700" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/1gOC1qb8xZxUv7-sVmTPn8r2FxWEy0UxvXySfzQNkBo/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kY2Y0/ZDE0MjFhNGY1M2I1/NzRhZGIwNWY5NTkz/ZDJlMS5wbmc.jpg"/>
      <itunes:duration>1490</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><strong>AI Security Ops | Episode 35 – 2026 Predictions<br></strong><br>In this episode, the BHIS panel looks into the crystal ball and shares bold predictions for AI in 2026—from energy constraints and drug development breakthroughs to agentic AI risks and cybersecurity threats.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Intro &amp; Sponsor Shoutouts</li>
<li>(01:14) - Prediction: Grid Power Becomes the Bottleneck</li>
<li>(10:27) - Prediction: FDA Qualifies AI Drug Development Tools</li>
<li>(15:45) - Prediction: Nation-State Threat Actors Weaponize AI</li>
<li>(17:33) - Prediction: Agentic AI Dominates App Development</li>
<li>(23:07) - Closing Thoughts: Jobs, Risk &amp; Opportunity</li>
</ul><br>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – <p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p><br></p><p>Brought to you by:</p><p>Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>Antisyphon Training</p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p>Active Countermeasures</p><p><a href="https://www.activecountermeasures.com">https://www.activecountermeasures.com</a></p><p><br></p><p>Wild West Hackin Fest</p><p><a href="https://wildwesthackinfest.com">https://wildwesthackinfest.com</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/bronwen-aker/" img="https://img.transistorcdn.com/0luJs5_XfVVduE7Lwtr94vu1KDtOHd1UfPgq3R8MwgU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzdh/YmM0ZjU2NGExNGYy/NzM0M2UzODIzMmZh/ZGZlMi5qcGc.jpg">Bronwen Aker</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/9e06d6c0/transcription.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/9e06d6c0/transcription.srt" type="application/x-subrip" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/9e06d6c0/transcription.json" type="application/json" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/9e06d6c0/transcription.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/9e06d6c0/transcription" type="text/html"/>
      <podcast:chapters url="https://share.transistor.fm/s/9e06d6c0/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>AI Security Ops - Why Did We Create This Podcast? | Podcast Trailer</title>
      <itunes:episode>1</itunes:episode>
      <podcast:episode>1</podcast:episode>
      <itunes:title>AI Security Ops - Why Did We Create This Podcast? | Podcast Trailer</itunes:title>
      <itunes:episodeType>trailer</itunes:episodeType>
      <guid isPermaLink="false">ad31dc7e-a17f-4243-a7eb-618940947ca5</guid>
      <link>https://share.transistor.fm/s/316653b6</link>
      <description>
        <![CDATA[<p>Join the 5,000+ cybersecurity professionals on our BHIS Discord server to ask questions and share your knowledge about AI Security. <br><a href="https://discord.gg/bhis">https://discord.gg/bhis</a></p><p>AI Security Ops | Podcast Trailer – Why Did We Create This Podcast?<br>In this episode, the BHIS team explains the purpose behind AI Security Ops, what you can expect from future episodes, and why this show matters for anyone at the intersection of AI and cybersecurity.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Intro &amp; Welcome</li>
<li>(00:13) - Why We Started AI Security Ops</li>
<li>(00:41) - Our Mission: Stay Informed &amp; Ahead</li>
<li>(00:56) - What We Cover: AI News &amp; Insights</li>
<li>(01:23) - Community Q&amp;A &amp; Real-World Scenarios</li>
<li>(02:18) - Special Guests &amp; Industry Leaders</li>
<li>(02:41) - Demos, How-Tos &amp; Practical Tips</li>
<li>(03:07) - Who Should Listen &amp; Why Subscribe</li>
<li>(03:34) - Join the Conversation &amp; Closing</li>
</ul><br><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </strong><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p><br></p><p><strong>Brought to you by:</strong></p><p>Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>Antisyphon Training</p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p>Active Countermeasures</p><p><a href="https://www.activecountermeasures.com">https://www.activecountermeasures.com</a></p><p><br></p><p>Wild West Hackin Fest</p><p><a href="https://wildwesthackinfest.com">https://wildwesthackinfest.com</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Join the 5,000+ cybersecurity professionals on our BHIS Discord server to ask questions and share your knowledge about AI Security. <br><a href="https://discord.gg/bhis">https://discord.gg/bhis</a></p><p>AI Security Ops | Podcast Trailer – Why Did We Create This Podcast?<br>In this episode, the BHIS team explains the purpose behind AI Security Ops, what you can expect from future episodes, and why this show matters for anyone at the intersection of AI and cybersecurity.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Intro &amp; Welcome</li>
<li>(00:13) - Why We Started AI Security Ops</li>
<li>(00:41) - Our Mission: Stay Informed &amp; Ahead</li>
<li>(00:56) - What We Cover: AI News &amp; Insights</li>
<li>(01:23) - Community Q&amp;A &amp; Real-World Scenarios</li>
<li>(02:18) - Special Guests &amp; Industry Leaders</li>
<li>(02:41) - Demos, How-Tos &amp; Practical Tips</li>
<li>(03:07) - Who Should Listen &amp; Why Subscribe</li>
<li>(03:34) - Join the Conversation &amp; Closing</li>
</ul><br><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </strong><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p><br></p><p><strong>Brought to you by:</strong></p><p>Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>Antisyphon Training</p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p>Active Countermeasures</p><p><a href="https://www.activecountermeasures.com">https://www.activecountermeasures.com</a></p><p><br></p><p>Wild West Hackin Fest</p><p><a href="https://wildwesthackinfest.com">https://wildwesthackinfest.com</a></p>]]>
      </content:encoded>
      <pubDate>Wed, 24 Dec 2025 08:00:00 -0500</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/316653b6/6ebca129.mp3" length="4169873" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/3zFP5WzCxb4sSdvsrxqQkWvXKvGdyW74dFRB29E8RFw/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZjky/MjA0ODJjMDdmZGU1/OGZiODRjZDEzZDMw/Mjg5ZC5wbmc.jpg"/>
      <itunes:duration>233</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Join the 5,000+ cybersecurity professionals on our BHIS Discord server to ask questions and share your knowledge about AI Security. <br><a href="https://discord.gg/bhis">https://discord.gg/bhis</a></p><p>AI Security Ops | Podcast Trailer – Why Did We Create This Podcast?<br>In this episode, the BHIS team explains the purpose behind AI Security Ops, what you can expect from future episodes, and why this show matters for anyone at the intersection of AI and cybersecurity.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Intro &amp; Welcome</li>
<li>(00:13) - Why We Started AI Security Ops</li>
<li>(00:41) - Our Mission: Stay Informed &amp; Ahead</li>
<li>(00:56) - What We Cover: AI News &amp; Insights</li>
<li>(01:23) - Community Q&amp;A &amp; Real-World Scenarios</li>
<li>(02:18) - Special Guests &amp; Industry Leaders</li>
<li>(02:41) - Demos, How-Tos &amp; Practical Tips</li>
<li>(03:07) - Who Should Listen &amp; Why Subscribe</li>
<li>(03:34) - Join the Conversation &amp; Closing</li>
</ul><br><strong>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </strong><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p><br></p><p><strong>Brought to you by:</strong></p><p>Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>Antisyphon Training</p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p>Active Countermeasures</p><p><a href="https://www.activecountermeasures.com">https://www.activecountermeasures.com</a></p><p><br></p><p>Wild West Hackin Fest</p><p><a href="https://wildwesthackinfest.com">https://wildwesthackinfest.com</a></p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/bronwen-aker/" img="https://img.transistorcdn.com/0luJs5_XfVVduE7Lwtr94vu1KDtOHd1UfPgq3R8MwgU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzdh/YmM0ZjU2NGExNGYy/NzM0M2UzODIzMmZh/ZGZlMi5qcGc.jpg">Bronwen Aker</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/316653b6/transcription.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/316653b6/transcription.srt" type="application/x-subrip" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/316653b6/transcription.json" type="application/json" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/316653b6/transcription.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/316653b6/transcription" type="text/html"/>
      <podcast:chapters url="https://share.transistor.fm/s/316653b6/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Community Q&amp;A on AI Security | Episode 34</title>
      <itunes:episode>34</itunes:episode>
      <podcast:episode>34</podcast:episode>
      <itunes:title>Community Q&amp;A on AI Security | Episode 34</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7069db31-d4aa-4209-aee7-55f42137caa2</guid>
      <link>https://share.transistor.fm/s/a1c4b52f</link>
      <description>
        <![CDATA[<p>Community Q&amp;A on AI Security | Episode 34</p><p>In this episode of BHIS Presents: AI Security Ops, our panel tackles real questions from the community about AI, hallucinations, privacy, and practical use cases. From limiting model hallucinations to understanding memory features and explaining AI to non-technical audiences, we dive into the nuances of large language models and their role in cybersecurity.</p><p>We break down:</p><ul><li>Why LLMs sometimes “make stuff up” and how to reduce hallucinations</li><li>The role of prompts, temperature, and RAG databases in accuracy</li><li>Prompting best practices and reasoning modes for better results</li><li>Legal liability: Can you sue ChatGPT for bad advice?</li><li>Memory features, data retention, and privacy trade-offs</li><li>Security paranoia: AI apps, trust, and enterprise vs free accounts</li><li>Practical examples like customizing AI for writing style</li><li>How to explain AI to your mom (or any non-technical audience)</li><li>Why AI isn’t magic—just math and advanced auto-complete</li></ul><p><br>Whether you’re deploying AI tools or just curious about the hype, this episode will help you understand the realities of AI in security and how to use it responsibly.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Welcome &amp; Sponsor Shoutouts</li>
<li>(00:50) - Episode Overview: Community Q&amp;A</li>
<li>(01:19) - Q1: Will ChatGPT Make Stuff Up?</li>
<li>(07:50) - Q2: Can Lawyers Sue ChatGPT for False Cases?</li>
<li>(11:15) - Q3: How Can AI Improve Without Ingesting Everything?</li>
<li>(22:04) - Q4: How Do You Explain AI to Non-Technical People?</li>
<li>(28:00) - Closing Remarks &amp; Training Plug</li>
</ul><br>Brought to you by:<br>Black Hills Information Security <br><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a><p>Antisyphon Training<br><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p>Active Countermeasures<br><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p>Wild West Hackin Fest<br><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – <br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a></p><p>----------------------------------------------------------------------------------------------<br>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a><br>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a><br>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a><br>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a><br>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Community Q&amp;A on AI Security | Episode 34</p><p>In this episode of BHIS Presents: AI Security Ops, our panel tackles real questions from the community about AI, hallucinations, privacy, and practical use cases. From limiting model hallucinations to understanding memory features and explaining AI to non-technical audiences, we dive into the nuances of large language models and their role in cybersecurity.</p><p>We break down:</p><ul><li>Why LLMs sometimes “make stuff up” and how to reduce hallucinations</li><li>The role of prompts, temperature, and RAG databases in accuracy</li><li>Prompting best practices and reasoning modes for better results</li><li>Legal liability: Can you sue ChatGPT for bad advice?</li><li>Memory features, data retention, and privacy trade-offs</li><li>Security paranoia: AI apps, trust, and enterprise vs free accounts</li><li>Practical examples like customizing AI for writing style</li><li>How to explain AI to your mom (or any non-technical audience)</li><li>Why AI isn’t magic—just math and advanced auto-complete</li></ul><p><br>Whether you’re deploying AI tools or just curious about the hype, this episode will help you understand the realities of AI in security and how to use it responsibly.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Welcome &amp; Sponsor Shoutouts</li>
<li>(00:50) - Episode Overview: Community Q&amp;A</li>
<li>(01:19) - Q1: Will ChatGPT Make Stuff Up?</li>
<li>(07:50) - Q2: Can Lawyers Sue ChatGPT for False Cases?</li>
<li>(11:15) - Q3: How Can AI Improve Without Ingesting Everything?</li>
<li>(22:04) - Q4: How Do You Explain AI to Non-Technical People?</li>
<li>(28:00) - Closing Remarks &amp; Training Plug</li>
</ul><br>Brought to you by:<br>Black Hills Information Security <br><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a><p>Antisyphon Training<br><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p>Active Countermeasures<br><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p>Wild West Hackin Fest<br><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – <br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a></p><p>----------------------------------------------------------------------------------------------<br>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a><br>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a><br>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a><br>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a><br>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </content:encoded>
      <pubDate>Thu, 18 Dec 2025 18:19:03 -0500</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/a1c4b52f/4db14e41.mp3" length="27828964" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/1J9LTRO8FbGVPXgZKNpceeWxFMazsqLA5-eCRj1s7_M/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS80MDQ5/Y2I0ZWM1OTY4MDdi/YTBkYTk0ZTAxYjMx/NWZkZS5wbmc.jpg"/>
      <itunes:duration>1708</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Community Q&amp;A on AI Security | Episode 34</p><p>In this episode of BHIS Presents: AI Security Ops, our panel tackles real questions from the community about AI, hallucinations, privacy, and practical use cases. From limiting model hallucinations to understanding memory features and explaining AI to non-technical audiences, we dive into the nuances of large language models and their role in cybersecurity.</p><p>We break down:</p><ul><li>Why LLMs sometimes “make stuff up” and how to reduce hallucinations</li><li>The role of prompts, temperature, and RAG databases in accuracy</li><li>Prompting best practices and reasoning modes for better results</li><li>Legal liability: Can you sue ChatGPT for bad advice?</li><li>Memory features, data retention, and privacy trade-offs</li><li>Security paranoia: AI apps, trust, and enterprise vs free accounts</li><li>Practical examples like customizing AI for writing style</li><li>How to explain AI to your mom (or any non-technical audience)</li><li>Why AI isn’t magic—just math and advanced auto-complete</li></ul><p><br>Whether you’re deploying AI tools or just curious about the hype, this episode will help you understand the realities of AI in security and how to use it responsibly.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Welcome &amp; Sponsor Shoutouts</li>
<li>(00:50) - Episode Overview: Community Q&amp;A</li>
<li>(01:19) - Q1: Will ChatGPT Make Stuff Up?</li>
<li>(07:50) - Q2: Can Lawyers Sue ChatGPT for False Cases?</li>
<li>(11:15) - Q3: How Can AI Improve Without Ingesting Everything?</li>
<li>(22:04) - Q4: How Do You Explain AI to Non-Technical People?</li>
<li>(28:00) - Closing Remarks &amp; Training Plug</li>
</ul><br>Brought to you by:<br>Black Hills Information Security <br><a href="https://www.blackhillsinfosec.com/">https://www.blackhillsinfosec.com</a><p>Antisyphon Training<br><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p>Active Countermeasures<br><a href="https://www.activecountermeasures.com/">https://www.activecountermeasures.com</a></p><p>Wild West Hackin Fest<br><a href="https://wildwesthackinfest.com/">https://wildwesthackinfest.com</a></p><p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – <br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a></p><p>----------------------------------------------------------------------------------------------<br>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a><br>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a><br>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a><br>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a><br>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/bronwen-aker/" img="https://img.transistorcdn.com/0luJs5_XfVVduE7Lwtr94vu1KDtOHd1UfPgq3R8MwgU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzdh/YmM0ZjU2NGExNGYy/NzM0M2UzODIzMmZh/ZGZlMi5qcGc.jpg">Bronwen Aker</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/a1c4b52f/transcription.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/a1c4b52f/transcription.srt" type="application/x-subrip" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/a1c4b52f/transcription.json" type="application/json" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/a1c4b52f/transcription.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/a1c4b52f/transcription" type="text/html"/>
      <podcast:chapters url="https://share.transistor.fm/s/a1c4b52f/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>AI News Stories | Episode 33</title>
      <itunes:episode>33</itunes:episode>
      <podcast:episode>33</podcast:episode>
      <itunes:title>AI News Stories | Episode 33</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d7674048-eb49-4e78-91ec-e3ec7a450498</guid>
      <link>https://share.transistor.fm/s/cdc59387</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p><strong>AI News | Episode 33</strong><br>In this episode of BHIS Presents: AI Security Ops, the panel dives into the latest developments shaping the AI security landscape. From the first documented AI-orchestrated cyber-espionage campaign to polymorphic malware powered by Gemini, we explore how agentic AI, insecure infrastructure, and old-school mistakes are creating a fragile new attack surface.</p><p>We break down:</p><ul><li>AI-driven cyber espionage: Anthropic disrupts a state-sponsored campaign using autonomous AI agents.</li><li>Black-hat LLMs: KawaiiGPT democratizes offensive capabilities for script kiddies.</li><li>Critical RCEs in AI stacks: ShadowMQ vulnerabilities hit Meta, NVIDIA, Microsoft, and more.</li><li>Amazon’s private AI bug bounty: Nova models under the microscope.</li><li>Google Antigravity IDE popped in 24 hours: Persistent code execution flaw.</li><li>PROMPTFLUX malware: Polymorphic VBScript leveraging Gemini for hourly rewrites.</li></ul><p><br>Whether you’re defending enterprise AI deployments or building secure agentic tools, this episode will help you understand the emerging risks and what you can do to stay ahead.</p><p><strong>⏱️ Chapters<br></strong></p><ul><li>(00:00) - Intro &amp; Sponsor Shoutouts</li>
<li>(01:27) - AI-Orchestrated Cyber Espionage (Anthropic)</li>
<li>(08:10) - ShadowMQ: Critical RCE in AI Inference Engines</li>
<li>(09:54) - KawaiiGPT: Free Black-Hat LLM</li>
<li>(22:45) - Amazon Nova: Private AI Bug Bounty</li>
<li>(26:38) - Google Antigravity IDE Hacked in 24 Hours</li>
<li>(31:36) - PROMPTFLUX: Malware Using Gemini for Polymorphism</li>
</ul><br>🔗 <strong>Links<br></strong><a href="https://www.anthropic.com/news/disrupting-AI-espionage">AI-Orchestrated Cyber Espionage (Anthropic)</a><br><a href="https://cybersecuritynews.com/ai-inference-engines-rce-vulnerabilities/">ShadowMQ: Critical RCE in AI Inference Engines</a><br><a href="https://cybersecuritynews.com/kawaiigpt-black-hat-ai/">KawaiiGPT: Free Black-Hat LLM</a><br><a href="https://www.amazon.science/news/amazon-launches-private-ai-bug-bounty-to-strengthen-nova-models">Amazon Nova: Private AI Bug Bounty</a><br><a href="https://www.forbes.com/sites/thomasbrewster/2025/11/26/google-antigravity-ai-hacked/">Google Antigravity IDE Hacked in 24 Hours</a><br><a href="https://thehackernews.com/2025/11/google-uncovers-promptflux-malware-that.html">PROMPTFLUX: Malware Using Gemini for Polymorphism</a><strong><br></strong><br>#AISecurity #Cybersecurity #BHIS #LLMSecurity #AIThreats #AgenticAI #BugBounty #malware<p>Brought to you by Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>Antisyphon Training</p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a 
href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p><strong>AI News | Episode 33</strong><br>In this episode of BHIS Presents: AI Security Ops, the panel dives into the latest developments shaping the AI security landscape. From the first documented AI-orchestrated cyber-espionage campaign to polymorphic malware powered by Gemini, we explore how agentic AI, insecure infrastructure, and old-school mistakes are creating a fragile new attack surface.</p><p>We break down:</p><ul><li>AI-driven cyber espionage: Anthropic disrupts a state-sponsored campaign using autonomous AI agents.</li><li>Black-hat LLMs: KawaiiGPT democratizes offensive capabilities for script kiddies.</li><li>Critical RCEs in AI stacks: ShadowMQ vulnerabilities hit Meta, NVIDIA, Microsoft, and more.</li><li>Amazon’s private AI bug bounty: Nova models under the microscope.</li><li>Google Antigravity IDE popped in 24 hours: Persistent code execution flaw.</li><li>PROMPTFLUX malware: Polymorphic VBScript leveraging Gemini for hourly rewrites.</li></ul><p><br>Whether you’re defending enterprise AI deployments or building secure agentic tools, this episode will help you understand the emerging risks and what you can do to stay ahead.</p><p><strong>⏱️ Chapters<br></strong></p><ul><li>(00:00) - Intro &amp; Sponsor Shoutouts</li>
<li>(01:27) - AI-Orchestrated Cyber Espionage (Anthropic)</li>
<li>(08:10) - ShadowMQ: Critical RCE in AI Inference Engines</li>
<li>(09:54) - KawaiiGPT: Free Black-Hat LLM</li>
<li>(22:45) - Amazon Nova: Private AI Bug Bounty</li>
<li>(26:38) - Google Antigravity IDE Hacked in 24 Hours</li>
<li>(31:36) - PROMPTFLUX: Malware Using Gemini for Polymorphism</li>
</ul><br>🔗 <strong>Links<br></strong><a href="https://www.anthropic.com/news/disrupting-AI-espionage">AI-Orchestrated Cyber Espionage (Anthropic)</a><br><a href="https://cybersecuritynews.com/ai-inference-engines-rce-vulnerabilities/">ShadowMQ: Critical RCE in AI Inference Engines</a><br><a href="https://cybersecuritynews.com/kawaiigpt-black-hat-ai/">KawaiiGPT: Free Black-Hat LLM</a><br><a href="https://www.amazon.science/news/amazon-launches-private-ai-bug-bounty-to-strengthen-nova-models">Amazon Nova: Private AI Bug Bounty</a><br><a href="https://www.forbes.com/sites/thomasbrewster/2025/11/26/google-antigravity-ai-hacked/">Google Antigravity IDE Hacked in 24 Hours</a><br><a href="https://thehackernews.com/2025/11/google-uncovers-promptflux-malware-that.html">PROMPTFLUX: Malware Using Gemini for Polymorphism</a><br><br>#AISecurity #Cybersecurity #BHIS #LLMSecurity #AIThreats #AgenticAI #BugBounty #malware<p>Brought to you by Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>Antisyphon Training</p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a 
href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </content:encoded>
      <pubDate>Thu, 11 Dec 2025 16:31:31 -0500</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/cdc59387/8363a5c3.mp3" length="35852838" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/LZLW8PDnwEzgNKxOgrb2V24OsNsvoNJKgTUjeBoxvp8/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZWUw/M2QwNWZmN2Q5ZjQy/YTFlMDdjNWE4YjQz/NGM5Mi5wbmc.jpg"/>
      <itunes:duration>2233</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p><strong>AI News | Episode 33</strong><br>In this episode of BHIS Presents: AI Security Ops, the panel dives into the latest developments shaping the AI security landscape. From the first documented AI-orchestrated cyber-espionage campaign to polymorphic malware powered by Gemini, we explore how agentic AI, insecure infrastructure, and old-school mistakes are creating a fragile new attack surface.</p><p>We break down:</p><ul><li>AI-driven cyber espionage: Anthropic disrupts a state-sponsored campaign using autonomous AI agents.</li><li>Black-hat LLMs: KawaiiGPT democratizes offensive capabilities for script kiddies.</li><li>Critical RCEs in AI stacks: ShadowMQ vulnerabilities hit Meta, NVIDIA, Microsoft, and more.</li><li>Amazon’s private AI bug bounty: Nova models under the microscope.</li><li>Google Antigravity IDE popped in 24 hours: Persistent code execution flaw.</li><li>PROMPTFLUX malware: Polymorphic VBScript leveraging Gemini for hourly rewrites.</li></ul><p><br>Whether you’re defending enterprise AI deployments or building secure agentic tools, this episode will help you understand the emerging risks and what you can do to stay ahead.</p><p><strong>⏱️ Chapters<br></strong></p><ul><li>(00:00) - Intro &amp; Sponsor Shoutouts</li>
<li>(01:27) - AI-Orchestrated Cyber Espionage (Anthropic)</li>
<li>(08:10) - ShadowMQ: Critical RCE in AI Inference Engines</li>
<li>(09:54) - KawaiiGPT: Free Black-Hat LLM</li>
<li>(22:45) - Amazon Nova: Private AI Bug Bounty</li>
<li>(26:38) - Google Antigravity IDE Hacked in 24 Hours</li>
<li>(31:36) - PROMPTFLUX: Malware Using Gemini for Polymorphism</li>
</ul><br>🔗 <strong>Links<br></strong><a href="https://www.anthropic.com/news/disrupting-AI-espionage">AI-Orchestrated Cyber Espionage (Anthropic)</a><br><a href="https://cybersecuritynews.com/ai-inference-engines-rce-vulnerabilities/">ShadowMQ: Critical RCE in AI Inference Engines</a><br><a href="https://cybersecuritynews.com/kawaiigpt-black-hat-ai/">KawaiiGPT: Free Black-Hat LLM</a><br><a href="https://www.amazon.science/news/amazon-launches-private-ai-bug-bounty-to-strengthen-nova-models">Amazon Nova: Private AI Bug Bounty</a><br><a href="https://www.forbes.com/sites/thomasbrewster/2025/11/26/google-antigravity-ai-hacked/">Google Antigravity IDE Hacked in 24 Hours</a><br><a href="https://thehackernews.com/2025/11/google-uncovers-promptflux-malware-that.html">PROMPTFLUX: Malware Using Gemini for Polymorphism</a><br><br>#AISecurity #Cybersecurity #BHIS #LLMSecurity #AIThreats #AgenticAI #BugBounty #malware<p>Brought to you by Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>Antisyphon Training</p><p><a href="https://www.antisyphontraining.com/">https://www.antisyphontraining.com/</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a 
href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>AISecurity,Cybersecurity,BHIS,LLMSecurity,AIThreats,AgenticAI,BugBounty,malware</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/cdc59387/transcription.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/cdc59387/transcription.srt" type="application/x-subrip" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/cdc59387/transcription.json" type="application/json" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/cdc59387/transcription.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/cdc59387/transcription" type="text/html"/>
      <podcast:chapters url="https://share.transistor.fm/s/cdc59387/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Model Evasion Attacks | Episode 32</title>
      <itunes:episode>32</itunes:episode>
      <podcast:episode>32</podcast:episode>
      <itunes:title>Model Evasion Attacks | Episode 32</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">57e7392f-7a91-4c50-8815-44f198ea5af8</guid>
      <link>https://share.transistor.fm/s/5a99cb37</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p>Model Evasion Attacks | Episode 32<br>In this episode of BHIS Presents: AI Security Ops, the panel explores the stealthy world of model evasion attacks, where adversaries manipulate inputs to trick AI classifiers into misclassifying malicious activity as benign. From image classifiers to malware detection and even LLM-based systems, learn how attackers exploit decision boundaries and why this matters for cybersecurity.</p><p>We break down:<br>- What model evasion attacks are and how they differ from data poisoning<br>- How attackers tweak features to bypass classifiers (images, phishing, malware)<br>- Real-world tactics like model extraction and trial-and-error evasion<br>- Why non-determinism in AI models makes evasion harder to predict<br>- Advanced threats: model theft, ablation, and adversarial AI<br>- Defensive strategies: adversarial training, API throttling, and realistic expectations<br>- Future outlook: regulatory trends, transparency, and the ongoing arms race</p><p>Whether you’re deploying EDR solutions or fine-tuning AI models, this episode will help you understand why evasion is an enduring challenge, and what you can do to defend against it.</p><p><br>#AISecurity #ModelEvasion #Cybersecurity #BHIS #LLMSecurity #AIThreats</p><p><br></p><p>Brought to you by Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a 
href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro &amp; Sponsor Shoutouts</li>
<li>(01:19) - What Are Model Evasion Attacks?</li>
<li>(03:58) - Image Classifiers &amp; Pixel Tweaks</li>
<li>(07:01) - Malware Classification &amp; Decision Boundaries</li>
<li>(10:02) - Model Theft &amp; Extraction Attacks</li>
<li>(13:16) - Non-Determinism &amp; Myth Busting</li>
<li>(16:07) - AI in Offensive Capabilities</li>
<li>(17:36) - Defensive Strategies &amp; Adversarial Training</li>
<li>(20:54) - Vendor Questions &amp; Transparency</li>
<li>(23:22) - Future Outlook &amp; Regulatory Trends</li>
<li>(25:54) - Panel Takeaways &amp; Closing Thoughts</li>
</ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p>Model Evasion Attacks | Episode 32<br>In this episode of BHIS Presents: AI Security Ops, the panel explores the stealthy world of model evasion attacks, where adversaries manipulate inputs to trick AI classifiers into misclassifying malicious activity as benign. From image classifiers to malware detection and even LLM-based systems, learn how attackers exploit decision boundaries and why this matters for cybersecurity.</p><p>We break down:<br>- What model evasion attacks are and how they differ from data poisoning<br>- How attackers tweak features to bypass classifiers (images, phishing, malware)<br>- Real-world tactics like model extraction and trial-and-error evasion<br>- Why non-determinism in AI models makes evasion harder to predict<br>- Advanced threats: model theft, ablation, and adversarial AI<br>- Defensive strategies: adversarial training, API throttling, and realistic expectations<br>- Future outlook: regulatory trends, transparency, and the ongoing arms race</p><p>Whether you’re deploying EDR solutions or fine-tuning AI models, this episode will help you understand why evasion is an enduring challenge, and what you can do to defend against it.</p><p><br>#AISecurity #ModelEvasion #Cybersecurity #BHIS #LLMSecurity #AIThreats</p><p><br></p><p>Brought to you by Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a 
href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro &amp; Sponsor Shoutouts</li>
<li>(01:19) - What Are Model Evasion Attacks?</li>
<li>(03:58) - Image Classifiers &amp; Pixel Tweaks</li>
<li>(07:01) - Malware Classification &amp; Decision Boundaries</li>
<li>(10:02) - Model Theft &amp; Extraction Attacks</li>
<li>(13:16) - Non-Determinism &amp; Myth Busting</li>
<li>(16:07) - AI in Offensive Capabilities</li>
<li>(17:36) - Defensive Strategies &amp; Adversarial Training</li>
<li>(20:54) - Vendor Questions &amp; Transparency</li>
<li>(23:22) - Future Outlook &amp; Regulatory Trends</li>
<li>(25:54) - Panel Takeaways &amp; Closing Thoughts</li>
</ul>]]>
      </content:encoded>
      <pubDate>Thu, 04 Dec 2025 12:00:00 -0500</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/5a99cb37/a1df57eb.mp3" length="27504492" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/lK09NHIjSbIeHVyvmU5u4MxbuHFk-2pfdD7eYIf7zow/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9hN2My/NWE3ZGNiN2ZlYWE1/MzA5ZDZkMDRmZWNk/MGZjNi5wbmc.jpg"/>
      <itunes:duration>1712</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p>Model Evasion Attacks | Episode 32<br>In this episode of BHIS Presents: AI Security Ops, the panel explores the stealthy world of model evasion attacks, where adversaries manipulate inputs to trick AI classifiers into misclassifying malicious activity as benign. From image classifiers to malware detection and even LLM-based systems, learn how attackers exploit decision boundaries and why this matters for cybersecurity.</p><p>We break down:<br>- What model evasion attacks are and how they differ from data poisoning<br>- How attackers tweak features to bypass classifiers (images, phishing, malware)<br>- Real-world tactics like model extraction and trial-and-error evasion<br>- Why non-determinism in AI models makes evasion harder to predict<br>- Advanced threats: model theft, ablation, and adversarial AI<br>- Defensive strategies: adversarial training, API throttling, and realistic expectations<br>- Future outlook: regulatory trends, transparency, and the ongoing arms race</p><p>Whether you’re deploying EDR solutions or fine-tuning AI models, this episode will help you understand why evasion is an enduring challenge, and what you can do to defend against it.</p><p><br>#AISecurity #ModelEvasion #Cybersecurity #BHIS #LLMSecurity #AIThreats</p><p><br></p><p>Brought to you by Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a 
href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro &amp; Sponsor Shoutouts</li>
<li>(01:19) - What Are Model Evasion Attacks?</li>
<li>(03:58) - Image Classifiers &amp; Pixel Tweaks</li>
<li>(07:01) - Malware Classification &amp; Decision Boundaries</li>
<li>(10:02) - Model Theft &amp; Extraction Attacks</li>
<li>(13:16) - Non-Determinism &amp; Myth Busting</li>
<li>(16:07) - AI in Offensive Capabilities</li>
<li>(17:36) - Defensive Strategies &amp; Adversarial Training</li>
<li>(20:54) - Vendor Questions &amp; Transparency</li>
<li>(23:22) - Future Outlook &amp; Regulatory Trends</li>
<li>(25:54) - Panel Takeaways &amp; Closing Thoughts</li>
</ul>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security,Artificial Intelligence,A.I.,AI,infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/5a99cb37/transcription.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/5a99cb37/transcription.srt" type="application/x-subrip" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/5a99cb37/transcription.json" type="application/json" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/5a99cb37/transcription.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/5a99cb37/transcription" type="text/html"/>
      <podcast:chapters url="https://share.transistor.fm/s/5a99cb37/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Data Poisoning | Episode 31</title>
      <itunes:episode>31</itunes:episode>
      <podcast:episode>31</podcast:episode>
      <itunes:title>Data Poisoning | Episode 31</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7242b12a-c380-4d1e-b0e2-1077df9248c1</guid>
      <link>https://share.transistor.fm/s/6ad6a5a8</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br>Data Poisoning Attacks | Episode 31<br>In this episode of BHIS Presents: AI Security Ops, the panel dives into the hidden danger of data poisoning – where attackers corrupt the data that trains your AI models, leading to unpredictable and often harmful behavior. From classifiers to LLMs, discover why poisoned data can undermine security, accuracy, and trust in AI systems.</p><p>We break down:</p><ul><li>What data poisoning is and why it matters</li><li>How attackers inject malicious samples or flip labels in training sets</li><li>The role of open-source repositories like Hugging Face in supply chain risk</li><li>New twists for LLMs: poisoning via reinforcement feedback and RAG</li><li>Real-world concerns like bias in ChatGPT and malicious model uploads</li><li>Defensive strategies: governance, provenance, versioning, and security assessments</li></ul><p><br>Whether you’re building classifiers or fine-tuning LLMs, this episode will help you understand how poisoned data sneaks in, and what you can do to prevent it. 
Treat your AI like a “drunk intern”: verify everything.</p><p><br>#AISecurity #DataPoisoning #Cybersecurity #BHIS #LLMSecurity #AIThreats</p><p><br></p><p>Brought to you by Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro &amp; Sponsor Shoutouts</li>
<li>(01:19) - What Is Data Poisoning?</li>
<li>(03:58) - Poisoning Classifier Models</li>
<li>(08:10) - Risks in Open-Source Data Sets</li>
<li>(12:30) - LLM-Specific Poisoning Vectors</li>
<li>(17:04) - RAG and Context Injection</li>
<li>(21:25) - Realistic Threats &amp; Examples</li>
<li>(25:48) - Defensive Strategies &amp; Governance</li>
<li>(28:27) - Panel Takeaways &amp; Closing Thoughts</li>
</ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br>Data Poisoning Attacks | Episode 31<br>In this episode of BHIS Presents: AI Security Ops, the panel dives into the hidden danger of data poisoning – where attackers corrupt the data that trains your AI models, leading to unpredictable and often harmful behavior. From classifiers to LLMs, discover why poisoned data can undermine security, accuracy, and trust in AI systems.</p><p>We break down:</p><ul><li>What data poisoning is and why it matters</li><li>How attackers inject malicious samples or flip labels in training sets</li><li>The role of open-source repositories like Hugging Face in supply chain risk</li><li>New twists for LLMs: poisoning via reinforcement feedback and RAG</li><li>Real-world concerns like bias in ChatGPT and malicious model uploads</li><li>Defensive strategies: governance, provenance, versioning, and security assessments</li></ul><p><br>Whether you’re building classifiers or fine-tuning LLMs, this episode will help you understand how poisoned data sneaks in, and what you can do to prevent it. 
Treat your AI like a “drunk intern”: verify everything.</p><p><br>#AISecurity #DataPoisoning #Cybersecurity #BHIS #LLMSecurity #AIThreats</p><p><br></p><p>Brought to you by Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro &amp; Sponsor Shoutouts</li>
<li>(01:19) - What Is Data Poisoning?</li>
<li>(03:58) - Poisoning Classifier Models</li>
<li>(08:10) - Risks in Open-Source Data Sets</li>
<li>(12:30) - LLM-Specific Poisoning Vectors</li>
<li>(17:04) - RAG and Context Injection</li>
<li>(21:25) - Realistic Threats &amp; Examples</li>
<li>(25:48) - Defensive Strategies &amp; Governance</li>
<li>(28:27) - Panel Takeaways &amp; Closing Thoughts</li>
</ul>]]>
      </content:encoded>
      <pubDate>Thu, 27 Nov 2025 12:00:00 -0500</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/6ad6a5a8/c3d8341f.mp3" length="30209782" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/LmP2wIDLEp7vMiNqZvuwaZmK0JS2w9hIc4tNlElQCNI/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9hMDk4/OTM3NjZjYThlYjhh/NzBjOWQxYzZmMWE1/ZDc3MS5wbmc.jpg"/>
      <itunes:duration>1880</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br>Data Poisoning Attacks | Episode 31<br>In this episode of BHIS Presents: AI Security Ops, the panel dives into the hidden danger of data poisoning – where attackers corrupt the data that trains your AI models, leading to unpredictable and often harmful behavior. From classifiers to LLMs, discover why poisoned data can undermine security, accuracy, and trust in AI systems.</p><p>We break down:</p><ul><li>What data poisoning is and why it matters</li><li>How attackers inject malicious samples or flip labels in training sets</li><li>The role of open-source repositories like Hugging Face in supply chain risk</li><li>New twists for LLMs: poisoning via reinforcement feedback and RAG</li><li>Real-world concerns like bias in ChatGPT and malicious model uploads</li><li>Defensive strategies: governance, provenance, versioning, and security assessments</li></ul><p><br>Whether you’re building classifiers or fine-tuning LLMs, this episode will help you understand how poisoned data sneaks in, and what you can do to prevent it. 
Treat your AI like a “drunk intern”: verify everything.</p><p><br>#AISecurity #DataPoisoning #Cybersecurity #BHIS #LLMSecurity #AIThreats</p><p><br></p><p>Brought to you by Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro &amp; Sponsor Shoutouts</li>
<li>(01:19) - What Is Data Poisoning?</li>
<li>(03:58) - Poisoning Classifier Models</li>
<li>(08:10) - Risks in Open-Source Data Sets</li>
<li>(12:30) - LLM-Specific Poisoning Vectors</li>
<li>(17:04) - RAG and Context Injection</li>
<li>(21:25) - Realistic Threats &amp; Examples</li>
<li>(25:48) - Defensive Strategies &amp; Governance</li>
<li>(28:27) - Panel Takeaways &amp; Closing Thoughts</li>
</ul>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security,Artificial Intelligence,A.I.,AI,infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/6ad6a5a8/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>AI News Stories | Episode 30</title>
      <itunes:episode>30</itunes:episode>
      <podcast:episode>30</podcast:episode>
      <itunes:title>AI News Stories | Episode 30</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">0ba3e45c-566e-4982-af3f-7069ff7602e1</guid>
      <link>https://share.transistor.fm/s/953ef94e</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>AI News Stories | Episode 30<br>In this episode of BHIS Presents: AI Security Ops, we break down the top AI cybersecurity news and trends from November 2025. Our panel covers rising public awareness of AI, the security risks of local LLMs, emerging AI-driven threats, and what these developments mean for security teams. Whether you work in cybersecurity, AI security, or incident response, this episode helps you stay ahead of evolving AI-powered attacks and defenses.</p><p>Topics Covered:</p><p>Only 5% of Americans are unaware of AI?<br>What Pew Research reveals about AI’s penetration into everyday life and workplace usage.<br>AI’s Shift to the Intimacy Economy – Project Liberty<br><a href="https://email.projectliberty.io/ais-shift-to-the-intimacy-economy-1">https://email.projectliberty.io/ais-shift-to-the-intimacy-economy-1  </a></p><p>Amazon to Cut Jobs and Invest in AI Infrastructure<br>14,000 corporate roles eliminated—are layoffs really about efficiency or something else?<br>Amazon to Cut Jobs &amp; Invest in AI – DW<br><a href="https://www.dw.com/en/amazon-to-cut-14000-corporate-jobs-amid-ai-investment/a-74524365">https://www.dw.com/en/amazon-to-cut-14000-corporate-jobs-amid-ai-investment/a-74524365</a></p><p>Local Models Less Secure than Cloud Providers?<br>Why quantization and lack of guardrails make local LLMs more vulnerable to prompt injection and insecure code.<br>Local LLMs Security Paradox – Quesma<br><a href="https://quesma.com/blog/local-llms-security-paradox">https://quesma.com/blog/local-llms-security-paradox </a></p><p>Whether you're a red teamer, SOC analyst, or just trying to stay ahead of AI threats, this episode delivers sharp insights and practical takeaways.</p><p>Brought to you by Black Hills Information Security </p><p><a 
href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro &amp; Sponsor Shoutouts</li>
<li>(01:07) - AI’s Shift to the Intimacy Economy (Pew Research)</li>
<li>(19:40) - Amazon Layoffs &amp; AI Investment</li>
<li>(27:00) - Local LLM Security Paradox</li>
<li>(36:32) - Wrap-Up &amp; Key Takeaways</li>
</ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>AI News Stories | Episode 30<br>In this episode of BHIS Presents: AI Security Ops, we break down the top AI cybersecurity news and trends from November 2025. Our panel covers rising public awareness of AI, the security risks of local LLMs, emerging AI-driven threats, and what these developments mean for security teams. Whether you work in cybersecurity, AI security, or incident response, this episode helps you stay ahead of evolving AI-powered attacks and defenses.</p><p>Topics Covered:</p><p>Only 5% of Americans are unaware of AI?<br>What Pew Research reveals about AI’s penetration into everyday life and workplace usage.<br>AI’s Shift to the Intimacy Economy – Project Liberty<br><a href="https://email.projectliberty.io/ais-shift-to-the-intimacy-economy-1">https://email.projectliberty.io/ais-shift-to-the-intimacy-economy-1  </a></p><p>Amazon to Cut Jobs and Invest in AI Infrastructure<br>14,000 corporate roles eliminated—are layoffs really about efficiency or something else?<br>Amazon to Cut Jobs &amp; Invest in AI – DW<br><a href="https://www.dw.com/en/amazon-to-cut-14000-corporate-jobs-amid-ai-investment/a-74524365">https://www.dw.com/en/amazon-to-cut-14000-corporate-jobs-amid-ai-investment/a-74524365</a></p><p>Local Models Less Secure than Cloud Providers?<br>Why quantization and lack of guardrails make local LLMs more vulnerable to prompt injection and insecure code.<br>Local LLMs Security Paradox – Quesma<br><a href="https://quesma.com/blog/local-llms-security-paradox">https://quesma.com/blog/local-llms-security-paradox </a></p><p>Whether you're a red teamer, SOC analyst, or just trying to stay ahead of AI threats, this episode delivers sharp insights and practical takeaways.</p><p>Brought to you by Black Hills Information Security </p><p><a 
href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro &amp; Sponsor Shoutouts</li>
<li>(01:07) - AI’s Shift to the Intimacy Economy (Pew Research)</li>
<li>(19:40) - Amazon Layoffs &amp; AI Investment</li>
<li>(27:00) - Local LLM Security Paradox</li>
<li>(36:32) - Wrap-Up &amp; Key Takeaways</li>
</ul>]]>
      </content:encoded>
      <pubDate>Thu, 20 Nov 2025 12:00:00 -0500</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/953ef94e/64331bfa.mp3" length="35726273" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/BDwyZqouIr42WI0dgUKl9jGBuAwP-PjZ2A3koBKkxwA/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iN2M1/YmFiNjUwMjAyNTBl/NDg2N2QzMGJkNGU4/OGQwMy5wbmc.jpg"/>
      <itunes:duration>2225</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>AI News Stories | Episode 30<br>In this episode of BHIS Presents: AI Security Ops, we break down the top AI cybersecurity news and trends from November 2025. Our panel covers rising public awareness of AI, the security risks of local LLMs, emerging AI-driven threats, and what these developments mean for security teams. Whether you work in cybersecurity, AI security, or incident response, this episode helps you stay ahead of evolving AI-powered attacks and defenses.</p><p>Topics Covered:</p><p>Only 5% of Americans are unaware of AI?<br>What Pew Research reveals about AI’s penetration into everyday life and workplace usage.<br>AI’s Shift to the Intimacy Economy – Project Liberty<br><a href="https://email.projectliberty.io/ais-shift-to-the-intimacy-economy-1">https://email.projectliberty.io/ais-shift-to-the-intimacy-economy-1  </a></p><p>Amazon to Cut Jobs and Invest in AI Infrastructure<br>14,000 corporate roles eliminated—are layoffs really about efficiency or something else?<br>Amazon to Cut Jobs &amp; Invest in AI – DW<br><a href="https://www.dw.com/en/amazon-to-cut-14000-corporate-jobs-amid-ai-investment/a-74524365">https://www.dw.com/en/amazon-to-cut-14000-corporate-jobs-amid-ai-investment/a-74524365</a></p><p>Local Models Less Secure than Cloud Providers?<br>Why quantization and lack of guardrails make local LLMs more vulnerable to prompt injection and insecure code.<br>Local LLMs Security Paradox – Quesma<br><a href="https://quesma.com/blog/local-llms-security-paradox">https://quesma.com/blog/local-llms-security-paradox </a></p><p>Whether you're a red teamer, SOC analyst, or just trying to stay ahead of AI threats, this episode delivers sharp insights and practical takeaways.</p><p>Brought to you by Black Hills Information Security </p><p><a 
href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro &amp; Sponsor Shoutouts</li>
<li>(01:07) - AI’s Shift to the Intimacy Economy (Pew Research)</li>
<li>(19:40) - Amazon Layoffs &amp; AI Investment</li>
<li>(27:00) - Local LLM Security Paradox</li>
<li>(36:32) - Wrap-Up &amp; Key Takeaways</li>
</ul>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I.,AI,infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/bronwen-aker/" img="https://img.transistorcdn.com/0luJs5_XfVVduE7Lwtr94vu1KDtOHd1UfPgq3R8MwgU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzdh/YmM0ZjU2NGExNGYy/NzM0M2UzODIzMmZh/ZGZlMi5qcGc.jpg">Bronwen Aker</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/953ef94e/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>A Conversation with Dr. Colin Shea-Blymyer | Episode 29</title>
      <itunes:episode>29</itunes:episode>
      <podcast:episode>29</podcast:episode>
      <itunes:title>A Conversation with Dr. Colin Shea-Blymyer | Episode 29</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">62b223d1-6b96-498c-9e3a-661d241a0668</guid>
      <link>https://share.transistor.fm/s/77b004f8</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p>A Conversation with Dr. Colin Shea-Blymyer  | Episode 29</p><p>In this episode of BHIS Presents: AI Security Ops, the panel welcomes Dr. Colin Shea-Blymyer for a deep dive into the intersection of AI governance, cybersecurity, and red teaming. From the historical roots of neural networks to today’s regulatory patchwork, we explore how policy, security, and innovation collide in the age of AI. Expect candid insights on emerging risks, open models, and why defining your risk appetite matters more than ever.</p><p>Topics Covered:</p><ul><li>AI governance vs. innovation: U.S. vs. EU regulatory approaches</li><li>The evolution of neural networks and lessons from AI history</li><li>AI red teaming: definitions, methodologies, and data-sharing challenges</li><li>Safety vs. security: where they overlap and diverge</li><li>Emerging risks: supply chain vulnerabilities, prompt injection, and poisoned data</li><li>Open weights vs. closed models: implications for research and security</li><li>Practical takeaways for organizations navigating AI uncertainty</li></ul><p><br>About the Panel:<br>Joff Thyer, Dr. Brian Fehrman, Derek Banks<br>Guest Panelist: Dr. 
Colin Shea-Blymyer<br><a href="https://cset.georgetown.edu/staff/colin-shea-blymyer/">https://cset.georgetown.edu/staff/colin-shea-blymyer/</a></p><p>#aisecurity  #aigovernance  #cyberrisk  #AIredteam #OpenModels #aipolicy  #BHIS #AIthreats #aiincybersecurity  #llmsecurity</p><p><br></p><p>Brought to you by Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro &amp; Guest Welcome</li>
<li>(02:14) - Colin’s Journey: From CS to AI Governance</li>
<li>(06:33) - Lessons from AI History &amp; Neural Network Origins</li>
<li>(10:28) - AI Red Teaming: Definitions &amp; Methodologies</li>
<li>(15:11) - Safety vs. Security: Where They Intersect</li>
<li>(22:47) - Regulatory Landscape: U.S. Patchwork vs. EU AI Act</li>
<li>(33:42) - Open Models Debate: Risks &amp; Research Benefits</li>
<li>(38:19) - Emerging Threats &amp; Supply Chain Risks</li>
<li>(44:06) - Practical Takeaways &amp; Closing Thoughts</li>
</ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p>A Conversation with Dr. Colin Shea-Blymyer  | Episode 29</p><p>In this episode of BHIS Presents: AI Security Ops, the panel welcomes Dr. Colin Shea-Blymyer for a deep dive into the intersection of AI governance, cybersecurity, and red teaming. From the historical roots of neural networks to today’s regulatory patchwork, we explore how policy, security, and innovation collide in the age of AI. Expect candid insights on emerging risks, open models, and why defining your risk appetite matters more than ever.</p><p>Topics Covered:</p><ul><li>AI governance vs. innovation: U.S. vs. EU regulatory approaches</li><li>The evolution of neural networks and lessons from AI history</li><li>AI red teaming: definitions, methodologies, and data-sharing challenges</li><li>Safety vs. security: where they overlap and diverge</li><li>Emerging risks: supply chain vulnerabilities, prompt injection, and poisoned data</li><li>Open weights vs. closed models: implications for research and security</li><li>Practical takeaways for organizations navigating AI uncertainty</li></ul><p><br>About the Panel:<br>Joff Thyer, Dr. Brian Fehrman, Derek Banks<br>Guest Panelist: Dr. 
Colin Shea-Blymyer<br><a href="https://cset.georgetown.edu/staff/colin-shea-blymyer/">https://cset.georgetown.edu/staff/colin-shea-blymyer/</a></p><p>#aisecurity  #aigovernance  #cyberrisk  #AIredteam #OpenModels #aipolicy  #BHIS #AIthreats #aiincybersecurity  #llmsecurity</p><p><br></p><p>Brought to you by Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro &amp; Guest Welcome</li>
<li>(02:14) - Colin’s Journey: From CS to AI Governance</li>
<li>(06:33) - Lessons from AI History &amp; Neural Network Origins</li>
<li>(10:28) - AI Red Teaming: Definitions &amp; Methodologies</li>
<li>(15:11) - Safety vs. Security: Where They Intersect</li>
<li>(22:47) - Regulatory Landscape: U.S. Patchwork vs. EU AI Act</li>
<li>(33:42) - Open Models Debate: Risks &amp; Research Benefits</li>
<li>(38:19) - Emerging Threats &amp; Supply Chain Risks</li>
<li>(44:06) - Practical Takeaways &amp; Closing Thoughts</li>
</ul>]]>
      </content:encoded>
      <pubDate>Thu, 13 Nov 2025 12:00:00 -0500</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/77b004f8/69ec0542.mp3" length="45049260" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/DkRXmY7CbdjEoBzLmRSzs76dlDp3j1q7gUr1UIjVwmw/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yZDdj/MzJjNmMyMDZmMGFh/MWZjMWEyNjUyNzQ1/NTgxOC5wbmc.jpg"/>
      <itunes:duration>2807</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p>A Conversation with Dr. Colin Shea-Blymyer  | Episode 29</p><p>In this episode of BHIS Presents: AI Security Ops, the panel welcomes Dr. Colin Shea-Blymyer for a deep dive into the intersection of AI governance, cybersecurity, and red teaming. From the historical roots of neural networks to today’s regulatory patchwork, we explore how policy, security, and innovation collide in the age of AI. Expect candid insights on emerging risks, open models, and why defining your risk appetite matters more than ever.</p><p>Topics Covered:</p><ul><li>AI governance vs. innovation: U.S. vs. EU regulatory approaches</li><li>The evolution of neural networks and lessons from AI history</li><li>AI red teaming: definitions, methodologies, and data-sharing challenges</li><li>Safety vs. security: where they overlap and diverge</li><li>Emerging risks: supply chain vulnerabilities, prompt injection, and poisoned data</li><li>Open weights vs. closed models: implications for research and security</li><li>Practical takeaways for organizations navigating AI uncertainty</li></ul><p><br>About the Panel:<br>Joff Thyer, Dr. Brian Fehrman, Derek Banks<br>Guest Panelist: Dr. 
Colin Shea-Blymyer<br><a href="https://cset.georgetown.edu/staff/colin-shea-blymyer/">https://cset.georgetown.edu/staff/colin-shea-blymyer/</a></p><p>#aisecurity  #aigovernance  #cyberrisk  #AIredteam #OpenModels #aipolicy  #BHIS #AIthreats #aiincybersecurity  #llmsecurity</p><p><br></p><p>Brought to you by Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro &amp; Guest Welcome</li>
<li>(02:14) - Colin’s Journey: From CS to AI Governance</li>
<li>(06:33) - Lessons from AI History &amp; Neural Network Origins</li>
<li>(10:28) - AI Red Teaming: Definitions &amp; Methodologies</li>
<li>(15:11) - Safety vs. Security: Where They Intersect</li>
<li>(22:47) - Regulatory Landscape: U.S. Patchwork vs. EU AI Act</li>
<li>(33:42) - Open Models Debate: Risks &amp; Research Benefits</li>
<li>(38:19) - Emerging Threats &amp; Supply Chain Risks</li>
<li>(44:06) - Practical Takeaways &amp; Closing Thoughts</li>
</ul>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I.,AI,infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Guest" href="https://cset.georgetown.edu/staff/colin-shea-blymyer/" img="https://img.transistorcdn.com/0xt9qXjwQ_xEx5-smeXJxGhTBK9BDHtgcrSniGgTS9U/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9jZjEw/MDU1OWY0ZTkwOTg0/ZmRiY2IzZjhlOTg2/YmU4YS5qcGc.jpg">Dr. Colin Shea-Blymyer</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/77b004f8/transcription.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/77b004f8/transcription.srt" type="application/x-subrip" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/77b004f8/transcription.json" type="application/json" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/77b004f8/transcription.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/77b004f8/transcription" type="text/html"/>
      <podcast:chapters url="https://share.transistor.fm/s/77b004f8/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Questions from the Community | Episode 28</title>
      <itunes:episode>28</itunes:episode>
      <podcast:episode>28</podcast:episode>
      <itunes:title>Questions from the Community | Episode 28</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">54eb5764-5b75-410f-8b00-969890ade0c2</guid>
      <link>https://share.transistor.fm/s/accbd8c0</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br>AI News Stories | Episode 28 – Questions from the Community<br>In this episode of BHIS Presents: AI Security Ops, the panel tackles real questions from the community, diving deep into the practical, ethical, and technical challenges of AI in cybersecurity. From red teaming tools to prompt privacy, this Q&amp;A session delivers candid insights and actionable advice for professionals navigating the AI-infused threat landscape.</p><p>🧠 Topics Covered:</p><ul><li>Open-source tools for LLM red teaming</li><li>Threat modeling AI systems (STRIDE methodology)</li><li>Hallucination rates in frontier vs. local models</li><li>Prompt privacy: what’s stored, what’s shared</li><li>Should red teamers disclose AI usage?</li><li>Human-in-the-loop: AI-generated deliverables</li><li>Whether you're a pentester, SOC analyst, or just curious about how AI is reshaping offensive security, this episode is packed with expert perspectives and practical takeaways.</li></ul><p><br>About the Panel:<br>Brian Fehrman, Derek Banks, Joff Thyer</p><p><br></p><p>Brought to you by Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a 
href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro &amp; Sponsor Shoutouts</li>
<li>(01:14) - Recommended Tools for LLM Red Teaming</li>
<li>(06:12) - Threat Modeling AI Systems</li>
<li>(09:58) - Which Models Hallucinate Most?</li>
<li>(17:13) - Prompt Privacy: What You Should Know</li>
<li>(22:54) - Should Red Teamers Disclose AI Usage?</li>
<li>(27:01) - Final Thoughts &amp; Wrap-Up</li>
</ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br>AI News Stories | Episode 28 – Questions from the Community<br>In this episode of BHIS Presents: AI Security Ops, the panel tackles real questions from the community, diving deep into the practical, ethical, and technical challenges of AI in cybersecurity. From red teaming tools to prompt privacy, this Q&amp;A session delivers candid insights and actionable advice for professionals navigating the AI-infused threat landscape.</p><p>🧠 Topics Covered:</p><ul><li>Open-source tools for LLM red teaming</li><li>Threat modeling AI systems (STRIDE methodology)</li><li>Hallucination rates in frontier vs. local models</li><li>Prompt privacy: what’s stored, what’s shared</li><li>Should red teamers disclose AI usage?</li><li>Human-in-the-loop: AI-generated deliverables</li><li>Whether you're a pentester, SOC analyst, or just curious about how AI is reshaping offensive security, this episode is packed with expert perspectives and practical takeaways.</li></ul><p><br>About the Panel:<br>Brian Fehrman, Derek Banks, Joff Thyer</p><p><br></p><p>Brought to you by Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a 
href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro &amp; Sponsor Shoutouts</li>
<li>(01:14) - Recommended Tools for LLM Red Teaming</li>
<li>(06:12) - Threat Modeling AI Systems</li>
<li>(09:58) - Which Models Hallucinate Most?</li>
<li>(17:13) - Prompt Privacy: What You Should Know</li>
<li>(22:54) - Should Red Teamers Disclose AI Usage?</li>
<li>(27:01) - Final Thoughts &amp; Wrap-Up</li>
</ul>]]>
      </content:encoded>
      <pubDate>Thu, 06 Nov 2025 12:00:00 -0500</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/accbd8c0/40c43e83.mp3" length="27425745" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/BKoBrhV_aGsm5cXWaRuq3EJ0FNQDvbyyBc8LpZGqlQw/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9jZjg5/YWRjZTQ4NTVjMDZl/ZDBjZmM0ZTI4YjFk/NmRkOC5wbmc.jpg"/>
      <itunes:duration>1706</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br>AI News Stories | Episode 28 – Questions from the Community<br>In this episode of BHIS Presents: AI Security Ops, the panel tackles real questions from the community, diving deep into the practical, ethical, and technical challenges of AI in cybersecurity. From red teaming tools to prompt privacy, this Q&amp;A session delivers candid insights and actionable advice for professionals navigating the AI-infused threat landscape.</p><p>🧠 Topics Covered:</p><ul><li>Open-source tools for LLM red teaming</li><li>Threat modeling AI systems (STRIDE methodology)</li><li>Hallucination rates in frontier vs. local models</li><li>Prompt privacy: what’s stored, what’s shared</li><li>Should red teamers disclose AI usage?</li><li>Human-in-the-loop: AI-generated deliverables</li><li>Whether you're a pentester, SOC analyst, or just curious about how AI is reshaping offensive security, this episode is packed with expert perspectives and practical takeaways.</li></ul><p><br>About the Panel:<br>Brian Fehrman, Derek Banks, Joff Thyer</p><p><br></p><p>Brought to you by Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a 
href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro &amp; Sponsor Shoutouts</li>
<li>(01:14) - Recommended Tools for LLM Red Teaming</li>
<li>(06:12) - Threat Modeling AI Systems</li>
<li>(09:58) - Which Models Hallucinate Most?</li>
<li>(17:13) - Prompt Privacy: What You Should Know</li>
<li>(22:54) - Should Red Teamers Disclose AI Usage?</li>
<li>(27:01) - Final Thoughts &amp; Wrap-Up</li>
</ul>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I.,AI,infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/accbd8c0/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Azure AI Foundry Guardrails | Episode 27</title>
      <itunes:episode>27</itunes:episode>
      <podcast:episode>27</podcast:episode>
      <itunes:title>Azure AI Foundry Guardrails | Episode 27</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">501a4b40-8b94-45c5-b208-8ac11f340df5</guid>
      <link>https://share.transistor.fm/s/635044c0</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br>Azure AI Foundry Guardrails | Episode 27</p><p>In this episode of BHIS Presents: AI Security Ops, we explore how to configure content filters for AI models using the Azure AI Foundry guardrails and controls interface. Whether you're building secure demos or deploying models in production, this walkthrough shows how to block unwanted content, enforce policy, and maintain compliance.</p><p>Topics Covered:</p><ul><li>Changing default filters for demo compliance</li><li>Setting up a system prompt and understanding its role</li><li>Adding regex terms to block specific content</li><li>Creating and configuring a custom filter: “tech demo guardrails”</li><li>Input-side filtering: inspecting user text before model access</li><li>Safety vs. security categories in filtering</li><li>Enabling prompt shields for indirect jailbreak detection</li></ul><p><br>This video is ideal for developers, security engineers, and anyone working with AI systems who needs to implement layered defenses and ensure responsible model behavior.</p><p><br>Why This Matters<br>By implementing layered security—block lists, input and output filters—you protect sensitive data, comply with policy, and maintain a safe user experience.</p><p>#AIsecurity #GuardrailsAndControls #ContentFiltering #PromptSecurity #RegexFiltering #BHIS #AIModelSafety #SystemPromptSecurity</p><p>Brought to you by Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a 
href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Introduction &amp; Overview</li>
<li>(01:17) - Changing the Default Content Filter for Demo Compliance</li>
<li>(02:00) - Setting Up a System Prompt and Its Purpose</li>
<li>(04:26) - Adding a New Term (“dogs”) to the Content Filter (Regex Example)</li>
<li>(05:04) - Creating and Configuring a Content Filter Named “Tech Demo Guardrails”</li>
<li>(05:35) - How Input-Side Filters Inspect and Block Unwanted Content</li>
<li>(06:01) - Overview of Safety Categories vs. Security Categories</li>
<li>(07:15) - Enabling Prompt Shields for Indirect Jailbreak Detection (Not Used in Demo)</li>
<li>(08:30) - Summary &amp; Next Steps</li>
</ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br>Azure AI Foundry Guardrails | Episode 27</p><p>In this episode of BHIS Presents: AI Security Ops, we explore how to configure content filters for AI models using the Azure AI Foundry guardrails and controls interface. Whether you're building secure demos or deploying models in production, this walkthrough shows how to block unwanted content, enforce policy, and maintain compliance.</p><p>Topics Covered:</p><ul><li>Changing default filters for demo compliance</li><li>Setting up a system prompt and understanding its role</li><li>Adding regex terms to block specific content</li><li>Creating and configuring a custom filter: “tech demo guardrails”</li><li>Input-side filtering: inspecting user text before model access</li><li>Safety vs. security categories in filtering</li><li>Enabling prompt shields for indirect jailbreak detection</li></ul><p><br>This video is ideal for developers, security engineers, and anyone working with AI systems who needs to implement layered defenses and ensure responsible model behavior.</p><p><br>Why This Matters<br>By implementing layered security—block lists, input and output filters—you protect sensitive data, comply with policy, and maintain a safe user experience.</p><p>#AIsecurity #GuardrailsAndControls #ContentFiltering #PromptSecurity #RegexFiltering #BHIS #AIModelSafety #SystemPromptSecurity</p><p>Brought to you by Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a 
href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Introduction &amp; Overview</li>
<li>(01:17) - Changing the Default Content Filter for Demo Compliance</li>
<li>(02:00) - Setting Up a System Prompt and Its Purpose</li>
<li>(04:26) - Adding a New Term (“dogs”) to the Content Filter (Regex Example)</li>
<li>(05:04) - Creating and Configuring a Content Filter Named “Tech Demo Guardrails”</li>
<li>(05:35) - How Input-Side Filters Inspect and Block Unwanted Content</li>
<li>(06:01) - Overview of Safety Categories vs. Security Categories</li>
<li>(07:15) - Enabling Prompt Shields for Indirect Jailbreak Detection (Not Used in Demo)</li>
<li>(08:30) - Summary &amp; Next Steps</li>
</ul>]]>
      </content:encoded>
      <pubDate>Thu, 30 Oct 2025 12:00:00 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/635044c0/c6476a84.mp3" length="14883456" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/ATOa4VGbVew2WzvsMpIF8LQOCF7keV9qC_v56RO8o-8/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84NTU3/MTI3MzI2ODQxOTFk/OTRiYzc0ZWUzYjAw/NjBiMi5wbmc.jpg"/>
      <itunes:duration>922</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br>Azure AI Foundry Guardrails | Episode 27</p><p>In this episode of BHIS Presents: AI Security Ops, we explore how to configure content filters for AI models using the Azure AI Foundry guardrails and controls interface. Whether you're building secure demos or deploying models in production, this walkthrough shows how to block unwanted content, enforce policy, and maintain compliance.</p><p>Topics Covered:</p><ul><li>Changing default filters for demo compliance</li><li>Setting up a system prompt and understanding its role</li><li>Adding regex terms to block specific content</li><li>Creating and configuring a custom filter: “tech demo guardrails”</li><li>Input-side filtering: inspecting user text before model access</li><li>Safety vs. security categories in filtering</li><li>Enabling prompt shields for indirect jailbreak detection</li></ul><p><br>This video is ideal for developers, security engineers, and anyone working with AI systems who needs to implement layered defenses and ensure responsible model behavior.</p><p><br>Why This Matters<br>By implementing layered security—block lists, input and output filters—you protect sensitive data, comply with policy, and maintain a safe user experience.</p><p>#AIsecurity #GuardrailsAndControls #ContentFiltering #PromptSecurity #RegexFiltering #BHIS #AIModelSafety #SystemPromptSecurity</p><p>Brought to you by Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a 
href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Introduction &amp; Overview</li>
<li>(01:17) - Changing the Default Content Filter for Demo Compliance</li>
<li>(02:00) - Setting Up a System Prompt and Its Purpose</li>
<li>(04:26) - Adding a New Term (“dogs”) to the Content Filter (Regex Example)</li>
<li>(05:04) - Creating and Configuring a Content Filter Named “Tech Demo Guardrails”</li>
<li>(05:35) - How Input-Side Filters Inspect and Block Unwanted Content</li>
<li>(06:01) - Overview of Safety Categories vs. Security Categories</li>
<li>(07:15) - Enabling Prompt Shields for Indirect Jailbreak Detection (Not Used in Demo)</li>
<li>(08:30) - Summary &amp; Next Steps</li>
</ul>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/635044c0/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Questions from the Community | Episode 26</title>
      <itunes:episode>26</itunes:episode>
      <podcast:episode>26</podcast:episode>
      <itunes:title>Questions from the Community | Episode 26</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">98f2d43e-246a-46dc-a1d6-3b1522e54874</guid>
      <link>https://share.transistor.fm/s/2079ee77</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>Questions from the Community | Episode 26<br>In this community-driven episode of BHIS Presents: AI Security Ops, the panel answers real questions from viewers about AI security, privacy, and risk. Featuring Brian Fehrman, Bronwen Aker, Jack Verrier, and Joff Thyer, the team dives into everything from guardrails and hallucinations to GDPR, agentic AI, and how to stay safe in an AI-saturated world.</p><p>💬 Topics include:</p><ul><li>Are guardrails enough to protect sensitive prompts?</li><li>What’s the difference between hallucination and confabulation?</li><li>How does AI intersect with GDPR and the right to be forgotten?</li><li>What does it mean to “stay safe” when using AI?</li><li>How is securing AI different from traditional software?</li></ul><p><br>Whether you're a red teamer, SOC analyst, or just trying to navigate the AI landscape, this episode offers practical insights and thoughtful perspectives from seasoned security professionals.</p><p>Panelists:<br>🔹 Brian Fehrman<br>🔹 Bronwen Aker<br>🔹 Jack Verrier<br>🔹 Joff Thyer<br>#AIsecurity #Cybersecurity #PromptInjection #LLMs #BHIS #AIprivacy #AgenticAI #AIandGDPR</p><p>Brought to you by Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - 
<a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro &amp; Panel Welcome</li>
<li>(01:22) - Are Guardrails Enough to Protect System Prompts?</li>
<li>(09:54) - Explaining Hallucination vs. Confabulation</li>
<li>(20:09) - AI and GDPR: The Right to Be Forgotten?</li>
<li>(23:49) - How Do We Stay Safe Using AI?</li>
<li>(32:26) - Securing AI vs. Traditional Software</li>
<li>(37:18) - Final Thoughts &amp; Wrap-Up</li>
</ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>Questions from the Community | Episode 26<br>In this community-driven episode of BHIS Presents: AI Security Ops, the panel answers real questions from viewers about AI security, privacy, and risk. Featuring Brian Fehrman, Bronwen Aker, Jack Verrier, and Joff Thyer, the team dives into everything from guardrails and hallucinations to GDPR, agentic AI, and how to stay safe in an AI-saturated world.</p><p>💬 Topics include:</p><ul><li>Are guardrails enough to protect sensitive prompts?</li><li>What’s the difference between hallucination and confabulation?</li><li>How does AI intersect with GDPR and the right to be forgotten?</li><li>What does it mean to “stay safe” when using AI?</li><li>How is securing AI different from traditional software?</li></ul><p><br>Whether you're a red teamer, SOC analyst, or just trying to navigate the AI landscape, this episode offers practical insights and thoughtful perspectives from seasoned security professionals.</p><p>Panelists:<br>🔹 Brian Fehrman<br>🔹 Bronwen Aker<br>🔹 Jack Verrier<br>🔹 Joff Thyer<br>#AIsecurity #Cybersecurity #PromptInjection #LLMs #BHIS #AIprivacy #AgenticAI #AIandGDPR</p><p>Brought to you by Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - 
<a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro &amp; Panel Welcome</li>
<li>(01:22) - Are Guardrails Enough to Protect System Prompts?</li>
<li>(09:54) - Explaining Hallucination vs. Confabulation</li>
<li>(20:09) - AI and GDPR: The Right to Be Forgotten?</li>
<li>(23:49) - How Do We Stay Safe Using AI?</li>
<li>(32:26) - Securing AI vs. Traditional Software</li>
<li>(37:18) - Final Thoughts &amp; Wrap-Up</li>
</ul>]]>
      </content:encoded>
      <pubDate>Thu, 23 Oct 2025 12:00:00 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/2079ee77/3af54501.mp3" length="36395151" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/VKg7J7ANNxzfwI31Yf7whvlwAomSg8lcwJ0vV9yaH2o/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yZGVi/NTM1MGIwMzQzN2Q3/NTFkODYzYTIyMDY2/NzE5OC5wbmc.jpg"/>
      <itunes:duration>2267</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>Questions from the Community | Episode 26<br>In this community-driven episode of BHIS Presents: AI Security Ops, the panel answers real questions from viewers about AI security, privacy, and risk. Featuring Brian Fehrman, Bronwen Aker, Jack Verrier, and Joff Thyer, the team dives into everything from guardrails and hallucinations to GDPR, agentic AI, and how to stay safe in an AI-saturated world.</p><p>💬 Topics include:</p><ul><li>Are guardrails enough to protect sensitive prompts?</li><li>What’s the difference between hallucination and confabulation?</li><li>How does AI intersect with GDPR and the right to be forgotten?</li><li>What does it mean to “stay safe” when using AI?</li><li>How is securing AI different from traditional software?</li></ul><p><br>Whether you're a red teamer, SOC analyst, or just trying to navigate the AI landscape, this episode offers practical insights and thoughtful perspectives from seasoned security professionals.</p><p>Panelists:<br>🔹 Brian Fehrman<br>🔹 Bronwen Aker<br>🔹 Jack Verrier<br>🔹 Joff Thyer<br>#AIsecurity #Cybersecurity #PromptInjection #LLMs #BHIS #AIprivacy #AgenticAI #AIandGDPR</p><p>Brought to you by Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - 
<a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro &amp; Panel Welcome</li>
<li>(01:22) - Are Guardrails Enough to Protect System Prompts?</li>
<li>(09:54) - Explaining Hallucination vs. Confabulation</li>
<li>(20:09) - AI and GDPR: The Right to Be Forgotten?</li>
<li>(23:49) - How Do We Stay Safe Using AI?</li>
<li>(32:26) - Securing AI vs. Traditional Software</li>
<li>(37:18) - Final Thoughts &amp; Wrap-Up</li>
</ul>]]>
      </itunes:summary>
      <itunes:keywords>AI security, Cybersecurity, Prompt Injection, LLMs, BHIS, AI privacy, Agentic AI, AI and GDPR</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/bronwen-aker/" img="https://img.transistorcdn.com/0luJs5_XfVVduE7Lwtr94vu1KDtOHd1UfPgq3R8MwgU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzdh/YmM0ZjU2NGExNGYy/NzM0M2UzODIzMmZh/ZGZlMi5qcGc.jpg">Bronwen Aker</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/jack-verrier/" img="https://img.transistorcdn.com/DFiCEW-KF0gwVEee9Vwff4gGYHLI6knq0RtArJIht74/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84MDYx/MmYyNTcwYzZiZDIy/OTU1MWFhNTIxYTg4/OWU2Mi5qcGc.jpg">Jack Verrier</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/2079ee77/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>AI News Stories | Episode 25</title>
      <itunes:episode>25</itunes:episode>
      <podcast:episode>25</podcast:episode>
      <itunes:title>AI News Stories | Episode 25</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">5254f0ca-48dd-41a6-b9bd-233b43990873</guid>
      <link>https://share.transistor.fm/s/83cb0b41</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br>AI News Stories | Episode 25<br>In this episode of BHIS Presents: AI Security Ops, the panel dives into the biggest AI cybersecurity headlines from late September 2025. From government regulation to zero-click exploits, we unpack the risks, trends, and implications for security professionals navigating the AI-powered future.</p><p>🧠 Topics Covered:</p><ul><li>Government oversight of advanced AI systems</li><li>Accenture’s massive layoffs amid AI pivot</li><li>ShadowLeak: zero-click vulnerability in ChatGPT agents</li><li>Malicious MCP server stealing emails</li><li>AI in the SOC: benefits and risks</li><li>Attackers using AI to scale ransomware and social engineering</li></ul><p><br>Whether you're a red teamer, SOC analyst, or just trying to stay ahead of AI threats, this episode delivers sharp insights and practical takeaways.</p><p><br></p><p>Brought to you by Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro &amp; Sponsor Shoutouts</li>
<li>(00:45) - Senators Introduce AI Risk Evaluation Act</li>
<li>(09:48) - Accenture Layoffs &amp; AI Restructuring</li>
<li>(16:17) - ShadowLeak: Zero-Click Vulnerability in ChatGPT</li>
<li>(20:07) - Malicious MCP Server &amp; Supply Chain Risks</li>
<li>(26:27) - AI in the SOC: Alert Triage &amp; Analyst Burnout</li>
<li>(30:10) - Final Thoughts: AI’s Role in Security Operations</li>
</ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br>AI News Stories | Episode 25<br>In this episode of BHIS Presents: AI Security Ops, the panel dives into the biggest AI cybersecurity headlines from late September 2025. From government regulation to zero-click exploits, we unpack the risks, trends, and implications for security professionals navigating the AI-powered future.</p><p>🧠 Topics Covered:</p><ul><li>Government oversight of advanced AI systems</li><li>Accenture’s massive layoffs amid AI pivot</li><li>ShadowLeak: zero-click vulnerability in ChatGPT agents</li><li>Malicious MCP server stealing emails</li><li>AI in the SOC: benefits and risks</li><li>Attackers using AI to scale ransomware and social engineering</li></ul><p><br>Whether you're a red teamer, SOC analyst, or just trying to stay ahead of AI threats, this episode delivers sharp insights and practical takeaways.</p><p><br></p><p>Brought to you by Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro &amp; Sponsor Shoutouts</li>
<li>(00:45) - Senators Introduce AI Risk Evaluation Act</li>
<li>(09:48) - Accenture Layoffs &amp; AI Restructuring</li>
<li>(16:17) - ShadowLeak: Zero-Click Vulnerability in ChatGPT</li>
<li>(20:07) - Malicious MCP Server &amp; Supply Chain Risks</li>
<li>(26:27) - AI in the SOC: Alert Triage &amp; Analyst Burnout</li>
<li>(30:10) - Final Thoughts: AI’s Role in Security Operations</li>
</ul>]]>
      </content:encoded>
      <pubDate>Thu, 16 Oct 2025 12:00:00 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/83cb0b41/d8f4a1d0.mp3" length="30553101" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/1GOCp97V_RaJW9naQsnK9iQIlKZNSVoO3eLX0B8iWc0/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYmVm/NWFhYjdjMjRhOGE3/ZGI3MTBiNDNlNWYw/MDEwNy5wbmc.jpg"/>
      <itunes:duration>1902</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br>AI News Stories | Episode 25<br>In this episode of BHIS Presents: AI Security Ops, the panel dives into the biggest AI cybersecurity headlines from late September 2025. From government regulation to zero-click exploits, we unpack the risks, trends, and implications for security professionals navigating the AI-powered future.</p><p>🧠 Topics Covered:</p><ul><li>Government oversight of advanced AI systems</li><li>Accenture’s massive layoffs amid AI pivot</li><li>ShadowLeak: zero-click vulnerability in ChatGPT agents</li><li>Malicious MCP server stealing emails</li><li>AI in the SOC: benefits and risks</li><li>Attackers using AI to scale ransomware and social engineering</li></ul><p><br>Whether you're a red teamer, SOC analyst, or just trying to stay ahead of AI threats, this episode delivers sharp insights and practical takeaways.</p><p><br></p><p>Brought to you by Black Hills Information Security </p><p><a href="https://www.blackhillsinfosec.com">https://www.blackhillsinfosec.com</a></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro &amp; Sponsor Shoutouts</li>
<li>(00:45) - Senators Introduce AI Risk Evaluation Act</li>
<li>(09:48) - Accenture Layoffs &amp; AI Restructuring</li>
<li>(16:17) - ShadowLeak: Zero-Click Vulnerability in ChatGPT</li>
<li>(20:07) - Malicious MCP Server &amp; Supply Chain Risks</li>
<li>(26:27) - AI in the SOC: Alert Triage &amp; Analyst Burnout</li>
<li>(30:10) - Final Thoughts: AI’s Role in Security Operations</li>
</ul>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/bronwen-aker/" img="https://img.transistorcdn.com/0luJs5_XfVVduE7Lwtr94vu1KDtOHd1UfPgq3R8MwgU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzdh/YmM0ZjU2NGExNGYy/NzM0M2UzODIzMmZh/ZGZlMi5qcGc.jpg">Bronwen Aker</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/83cb0b41/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Model Extraction Attacks | Episode 24</title>
      <itunes:episode>24</itunes:episode>
      <podcast:episode>24</podcast:episode>
      <itunes:title>Model Extraction Attacks | Episode 24</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">37728fa2-1486-4248-b483-2f9f73ef367e</guid>
      <link>https://share.transistor.fm/s/e3bc86d6</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>Model Extraction Attacks | Episode 24<br>In this solo episode of BHIS Presents: AI Security Ops, Brian Fehrman explores the stealthy world of Model Extraction Attacks—where hackers clone your AI model without ever touching your code. Learn how adversaries can reverse-engineer your multimillion-dollar model simply by querying its API, and why this threat is more than just academic.</p><p>We break down:<br>- What model extraction is and how it works<br>- Real-world examples like DeepSeek’s alleged distillation of OpenAI models<br>- The risks to intellectual property, security, and sensitive data<br>- Defensive strategies including API throttling, output limiting, watermarking, and honeypots<br>- Legal and ethical questions around benchmarking vs. theft</p><p>Whether you're deploying LLMs or classification models, this episode will help you understand how attackers replicate model behavior—and what you can do to stop them.<br>If your AI is accessible, someone’s probably trying to copy it.</p><p><br>#AIsecurity #ModelExtractionAttacks #Cybersecurity #BHIS #LLMsecurity #AIthreats</p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a 
href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro &amp; Sponsor Shoutouts</li>
<li>(01:19) - What Is a Model Extraction Attack?</li>
<li>(02:45) - Why Training a Model Is So Expensive</li>
<li>(05:42) - How Model Extraction Works</li>
<li>(07:11) - Why It Matters: IP, Security &amp; Data Risks</li>
<li>(10:25) - What Makes Extraction Easier or Harder</li>
<li>(12:54) - Defenses: Monitoring, Watermarking &amp; Privacy</li>
<li>(16:04) - What to Do If You Suspect an Attack</li>
<li>(16:29) - Legal &amp; Ethical Questions Around Model Theft</li>
<li>(19:30) - Final Thoughts &amp; Takeaways</li>
</ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>Model Extraction Attacks | Episode 24<br>In this solo episode of BHIS Presents: AI Security Ops, Brian Fehrman explores the stealthy world of Model Extraction Attacks—where hackers clone your AI model without ever touching your code. Learn how adversaries can reverse-engineer your multimillion-dollar model simply by querying its API, and why this threat is more than just academic.</p><p>We break down:<br>- What model extraction is and how it works<br>- Real-world examples like DeepSeek’s alleged distillation of OpenAI models<br>- The risks to intellectual property, security, and sensitive data<br>- Defensive strategies including API throttling, output limiting, watermarking, and honeypots<br>- Legal and ethical questions around benchmarking vs. theft</p><p>Whether you're deploying LLMs or classification models, this episode will help you understand how attackers replicate model behavior—and what you can do to stop them.<br>If your AI is accessible, someone’s probably trying to copy it.</p><p><br>#AIsecurity #ModelExtractionAttacks #Cybersecurity #BHIS #LLMsecurity #AIthreats</p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a 
href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro &amp; Sponsor Shoutouts</li>
<li>(01:19) - What Is a Model Extraction Attack?</li>
<li>(02:45) - Why Training a Model Is So Expensive</li>
<li>(05:42) - How Model Extraction Works</li>
<li>(07:11) - Why It Matters: IP, Security &amp; Data Risks</li>
<li>(10:25) - What Makes Extraction Easier or Harder</li>
<li>(12:54) - Defenses: Monitoring, Watermarking &amp; Privacy</li>
<li>(16:04) - What to Do If You Suspect an Attack</li>
<li>(16:29) - Legal &amp; Ethical Questions Around Model Theft</li>
<li>(19:30) - Final Thoughts &amp; Takeaways</li>
</ul>]]>
      </content:encoded>
      <pubDate>Sat, 11 Oct 2025 12:00:00 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/e3bc86d6/69f83950.mp3" length="19281440" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/opG111ouz_1jOX0PwNYL3CJCTZhppRUETqlZ58yq1aM/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zM2Q1/ZGRjN2RhMTIzNTNh/M2M0NzE5ODcxZmM3/MDFlYi5wbmc.jpg"/>
      <itunes:duration>1198</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>Model Extraction Attacks | Episode 24<br>In this solo episode of BHIS Presents: AI Security Ops, Brian Fehrman explores the stealthy world of Model Extraction Attacks—where hackers clone your AI model without ever touching your code. Learn how adversaries can reverse-engineer your multimillion-dollar model simply by querying its API, and why this threat is more than just academic.</p><p>We break down:<br>- What model extraction is and how it works<br>- Real-world examples like DeepSeek’s alleged distillation of OpenAI models<br>- The risks to intellectual property, security, and sensitive data<br>- Defensive strategies including API throttling, output limiting, watermarking, and honeypots<br>- Legal and ethical questions around benchmarking vs. theft</p><p>Whether you're deploying LLMs or classification models, this episode will help you understand how attackers replicate model behavior—and what you can do to stop them.<br>If your AI is accessible, someone’s probably trying to copy it.</p><p><br>#AIsecurity #ModelExtractionAttacks #Cybersecurity #BHIS #LLMsecurity #AIthreats</p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a 
href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro &amp; Sponsor Shoutouts</li>
<li>(01:19) - What Is a Model Extraction Attack?</li>
<li>(02:45) - Why Training a Model Is So Expensive</li>
<li>(05:42) - How Model Extraction Works</li>
<li>(07:11) - Why It Matters: IP, Security &amp; Data Risks</li>
<li>(10:25) - What Makes Extraction Easier or Harder</li>
<li>(12:54) - Defenses: Monitoring, Watermarking &amp; Privacy</li>
<li>(16:04) - What to Do If You Suspect an Attack</li>
<li>(16:29) - Legal &amp; Ethical Questions Around Model Theft</li>
<li>(19:30) - Final Thoughts &amp; Takeaways</li>
</ul>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/e3bc86d6/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>News of the Month | Episode 23</title>
      <itunes:episode>23</itunes:episode>
      <podcast:episode>23</podcast:episode>
      <itunes:title>News of the Month | Episode 23</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">59a23c45-3115-4825-a74c-9ee1578f1764</guid>
      <link>https://share.transistor.fm/s/103f2eff</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a></p><p><br></p><p><br>In this episode of AI Security Ops, Brian Fehrman and Joff Thyer dive into the latest AI news of the month, exploring how rapidly evolving technologies are reshaping cybersecurity.<br>Topics covered include:<br> - How AI is changing cybersecurity monitoring<br> - Expanding from email to Slack, Teams, and other chat platforms<br> - Addressing insider threats and phishing campaigns in new channels<br> - The rapid pace of AI innovation and industry trends<br> - Why organizations should prioritize AI security assessments<br> - Real-world risks and opportunities in the AI landscape</p><p>Stay ahead in the AI race with Black Hills Information Security as we cover real-world risks, opportunities, and the latest developments in the AI landscape.</p><p><br>///News Stories This Episode:</p><p>1. AI-Powered Villager Pen Testing Tool Hits 11,000 PyPI Downloads Amid Abuse Concerns<br><a href="https://thehackernews.com/2025/09/ai-powered-villager-pen-testing-tool.html">https://thehackernews.com/2025/09/ai-powered-villager-pen-testing-tool.html</a></p><p>2. CrowdStrike and Meta Just Made Evaluating AI Security Tools Easier<br><a href="https://www.zdnet.com/article/crowdstrike-and-meta-just-made-evaluating-ai-security-tools-easier/">https://www.zdnet.com/article/crowdstrike-and-meta-just-made-evaluating-ai-security-tools-easier/</a></p><p>3. Check Point Acquires Lakera to Deliver End-to-End AI Security for Enterprises<br><a href="https://www.checkpoint.com/press-releases/check-point-acquires-lakera-to-deliver-end-to-end-ai-security-for-enterprises/">https://www.checkpoint.com/press-releases/check-point-acquires-lakera-to-deliver-end-to-end-ai-security-for-enterprises/</a></p><p>4. 
Proofpoint Offers AI Agents to Monitor Human-Based Communications<br><a href="https://www.msspalert.com/news/proofpoint-offers-ai-agents-to-monitor-human-based-communications">https://www.msspalert.com/news/proofpoint-offers-ai-agents-to-monitor-human-based-communications</a></p><p>5. EvilAI Malware Campaign Exploits AI-Generated Code to Breach Global Critical Sectors<br><a href="https://industrialcyber.co/ransomware/evilai-malware-campaign-exploits-ai-generated-code-to-breach-global-critical-sectors/">https://industrialcyber.co/ransomware/evilai-malware-campaign-exploits-ai-generated-code-to-breach-global-critical-sectors/</a><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a></p><p><br></p><p><br>In this episode of AI Security Ops, Brian Fehrman and Joff Thyer dive into the latest AI news of the month, exploring how rapidly evolving technologies are reshaping cybersecurity.<br>Topics covered include:<br> - How AI is changing cybersecurity monitoring<br> - Expanding from email to Slack, Teams, and other chat platforms<br> - Addressing insider threats and phishing campaigns in new channels<br> - The rapid pace of AI innovation and industry trends<br> - Why organizations should prioritize AI security assessments<br> - Real-world risks and opportunities in the AI landscape</p><p>Stay ahead in the AI race with Black Hills Information Security as we cover real-world risks, opportunities, and the latest developments in the AI landscape.</p><p><br>///News Stories This Episode:</p><p>1. AI-Powered Villager Pen Testing Tool Hits 11,000 PyPI Downloads Amid Abuse Concerns<br><a href="https://thehackernews.com/2025/09/ai-powered-villager-pen-testing-tool.html">https://thehackernews.com/2025/09/ai-powered-villager-pen-testing-tool.html</a></p><p>2. CrowdStrike and Meta Just Made Evaluating AI Security Tools Easier<br><a href="https://www.zdnet.com/article/crowdstrike-and-meta-just-made-evaluating-ai-security-tools-easier/">https://www.zdnet.com/article/crowdstrike-and-meta-just-made-evaluating-ai-security-tools-easier/</a></p><p>3. Check Point Acquires Lakera to Deliver End-to-End AI Security for Enterprises<br><a href="https://www.checkpoint.com/press-releases/check-point-acquires-lakera-to-deliver-end-to-end-ai-security-for-enterprises/">https://www.checkpoint.com/press-releases/check-point-acquires-lakera-to-deliver-end-to-end-ai-security-for-enterprises/</a></p><p>4. 
Proofpoint Offers AI Agents to Monitor Human-Based Communications<br><a href="https://www.msspalert.com/news/proofpoint-offers-ai-agents-to-monitor-human-based-communications">https://www.msspalert.com/news/proofpoint-offers-ai-agents-to-monitor-human-based-communications</a></p><p>5. EvilAI Malware Campaign Exploits AI-Generated Code to Breach Global Critical Sectors<br><a href="https://industrialcyber.co/ransomware/evilai-malware-campaign-exploits-ai-generated-code-to-breach-global-critical-sectors/">https://industrialcyber.co/ransomware/evilai-malware-campaign-exploits-ai-generated-code-to-breach-global-critical-sectors/</a><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </content:encoded>
      <pubDate>Thu, 02 Oct 2025 12:00:00 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/103f2eff/6dab8740.mp3" length="33115872" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/Rk__hO7Kb0t27KNrm_TOvJYs2D3qB-AGwTXzDbD-mFU/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS85NGEw/NmMzOWY4YjYwYjNm/Y2RhNTEwZDFjOGE4/ODhiNS5wbmc.jpg"/>
      <itunes:duration>2062</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a></p><p><br></p><p><br>In this episode of AI Security Ops, Brian Fehrman and Joff Thyer dive into the latest AI news of the month, exploring how rapidly evolving technologies are reshaping cybersecurity.<br>Topics covered include:<br> - How AI is changing cybersecurity monitoring<br> - Expanding from email to Slack, Teams, and other chat platforms<br> - Addressing insider threats and phishing campaigns in new channels<br> - The rapid pace of AI innovation and industry trends<br> - Why organizations should prioritize AI security assessments<br> - Real-world risks and opportunities in the AI landscape</p><p>Stay ahead in the AI race with Black Hills Information Security as we cover real-world risks, opportunities, and the latest developments in the AI landscape.</p><p><br>///News Stories This Episode:</p><p>1. AI-Powered Villager Pen Testing Tool Hits 11,000 PyPI Downloads Amid Abuse Concerns<br><a href="https://thehackernews.com/2025/09/ai-powered-villager-pen-testing-tool.html">https://thehackernews.com/2025/09/ai-powered-villager-pen-testing-tool.html</a></p><p>2. CrowdStrike and Meta Just Made Evaluating AI Security Tools Easier<br><a href="https://www.zdnet.com/article/crowdstrike-and-meta-just-made-evaluating-ai-security-tools-easier/">https://www.zdnet.com/article/crowdstrike-and-meta-just-made-evaluating-ai-security-tools-easier/</a></p><p>3. Check Point Acquires Lakera to Deliver End-to-End AI Security for Enterprises<br><a href="https://www.checkpoint.com/press-releases/check-point-acquires-lakera-to-deliver-end-to-end-ai-security-for-enterprises/">https://www.checkpoint.com/press-releases/check-point-acquires-lakera-to-deliver-end-to-end-ai-security-for-enterprises/</a></p><p>4. 
Proofpoint Offers AI Agents to Monitor Human-Based Communications<br><a href="https://www.msspalert.com/news/proofpoint-offers-ai-agents-to-monitor-human-based-communications">https://www.msspalert.com/news/proofpoint-offers-ai-agents-to-monitor-human-based-communications</a></p><p>5. EvilAI Malware Campaign Exploits AI-Generated Code to Breach Global Critical Sectors<br><a href="https://industrialcyber.co/ransomware/evilai-malware-campaign-exploits-ai-generated-code-to-breach-global-critical-sectors/">https://industrialcyber.co/ransomware/evilai-malware-campaign-exploits-ai-generated-code-to-breach-global-critical-sectors/</a><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
    </item>
    <item>
      <title>Insider Threat 2.0 - Prompt Leaks &amp; Shadow AI | Episode 22</title>
      <itunes:episode>22</itunes:episode>
      <podcast:episode>22</podcast:episode>
      <itunes:title>Insider Threat 2.0 - Prompt Leaks &amp; Shadow AI | Episode 22</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">66b9dc5d-877e-4841-9e4a-a83f91f2a087</guid>
      <link>https://share.transistor.fm/s/7ad599c6</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a></p><p><br></p><p>Insider Threat 2.0 -  Prompt Leaks &amp; Shadow AI | Episode 22</p><p>In this episode of BHIS Presents AI Security Ops, we dive into Insider Threat 2.0: Prompt Leaks &amp; Shadow AI. The panel explores the hidden risks of employees pasting sensitive data into public AI tools, the rise of unauthorized “Shadow AI” in organizations, and how policies—or lack thereof—can expose critical information. Learn why free AI services often make you the product, how prompt history creates data leakage risks, and why companies must establish clear AI usage guidelines. We also cover practical defenses, from enterprise AI accounts to cultural awareness training, and draw parallels to past IT challenges like Shadow IT and rogue wireless.<br>If you’re concerned about AI security, data leakage, or safe adoption of large language models, this discussion will help you navigate the risks and protect your organization.</p><p>#AIsecurity #PromptInjection #ShadowAI #Cybersecurity #BHIS</p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a></p><p><br></p><p>Insider Threat 2.0 -  Prompt Leaks &amp; Shadow AI | Episode 22</p><p>In this episode of BHIS Presents AI Security Ops, we dive into Insider Threat 2.0: Prompt Leaks &amp; Shadow AI. The panel explores the hidden risks of employees pasting sensitive data into public AI tools, the rise of unauthorized “Shadow AI” in organizations, and how policies—or lack thereof—can expose critical information. Learn why free AI services often make you the product, how prompt history creates data leakage risks, and why companies must establish clear AI usage guidelines. We also cover practical defenses, from enterprise AI accounts to cultural awareness training, and draw parallels to past IT challenges like Shadow IT and rogue wireless.<br>If you’re concerned about AI security, data leakage, or safe adoption of large language models, this discussion will help you navigate the risks and protect your organization.</p><p>#AIsecurity #PromptInjection #ShadowAI #Cybersecurity #BHIS</p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </content:encoded>
      <pubDate>Thu, 25 Sep 2025 12:00:00 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/7ad599c6/be04d8bc.mp3" length="24971806" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/DQRvtY2H3a07uLO_jGrs9W6aYnP394gRuObKjb8sfHk/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS85NmI2/MjQyYTM5OTk3ZTlh/MGVhZTczZDI5MzI5/ZWZhNS5wbmc.jpg"/>
      <itunes:duration>1558</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a></p><p><br></p><p>Insider Threat 2.0 -  Prompt Leaks &amp; Shadow AI | Episode 22</p><p>In this episode of BHIS Presents AI Security Ops, we dive into Insider Threat 2.0: Prompt Leaks &amp; Shadow AI. The panel explores the hidden risks of employees pasting sensitive data into public AI tools, the rise of unauthorized “Shadow AI” in organizations, and how policies—or lack thereof—can expose critical information. Learn why free AI services often make you the product, how prompt history creates data leakage risks, and why companies must establish clear AI usage guidelines. We also cover practical defenses, from enterprise AI accounts to cultural awareness training, and draw parallels to past IT challenges like Shadow IT and rogue wireless.<br>If you’re concerned about AI security, data leakage, or safe adoption of large language models, this discussion will help you navigate the risks and protect your organization.</p><p>#AIsecurity #PromptInjection #ShadowAI #Cybersecurity #BHIS</p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
    </item>
    <item>
      <title>Deepfakes and Fraudulent Interviews In Remote Hiring | Episode 21</title>
      <itunes:episode>21</itunes:episode>
      <podcast:episode>21</podcast:episode>
      <itunes:title>Deepfakes and Fraudulent Interviews In Remote Hiring | Episode 21</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f59bb414-d6ed-4662-bf1a-2a04bc2bed46</guid>
      <link>https://share.transistor.fm/s/fbc46b5c</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>Episode 21 - Deepfakes And Fraudulent Interviews In Remote Hiring</p><p><br></p><p>In this episode of AI Security Ops by Black Hills Information Security, the crew explores the alarming rise of deepfakes and fraudulent interviews in remote hiring. As virtual work expands, cybercriminals are using AI-driven impersonation tactics to pose as job candidates, deceive recruiters, and gain unauthorized access to organizations. Joff, Bronwen Aker, Brian Fehrman, and Derek Banks break down real-world cases, explain the challenges of spotting deepfake job scams, and share actionable strategies to secure hiring processes. Discover the red flags to watch for in virtual interviews, how attackers exploit trust, and why companies must adapt their security awareness in the age of AI.</p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>Episode 21 - Deepfakes And Fraudulent Interviews In Remote Hiring</p><p><br></p><p>In this episode of AI Security Ops by Black Hills Information Security, the crew explores the alarming rise of deepfakes and fraudulent interviews in remote hiring. As virtual work expands, cybercriminals are using AI-driven impersonation tactics to pose as job candidates, deceive recruiters, and gain unauthorized access to organizations. Joff, Bronwen Aker, Brian Fehrman, and Derek Banks break down real-world cases, explain the challenges of spotting deepfake job scams, and share actionable strategies to secure hiring processes. Discover the red flags to watch for in virtual interviews, how attackers exploit trust, and why companies must adapt their security awareness in the age of AI.</p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </content:encoded>
      <pubDate>Thu, 18 Sep 2025 12:00:00 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/fbc46b5c/7e1d46a4.mp3" length="27091244" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/3ryL6f1meQZYeJjjXH-4v03mLfzOZz0M7dBkQHb-D3A/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9jM2U1/OTdmZGJlOWI3ODdj/OTZjMWZhNmY1NjZi/NGQxYS5wbmc.jpg"/>
      <itunes:duration>1686</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>Episode 21 - Deepfakes And Fraudulent Interviews In Remote Hiring</p><p><br></p><p>In this episode of AI Security Ops by Black Hills Information Security, the crew explores the alarming rise of deepfakes and fraudulent interviews in remote hiring. As virtual work expands, cybercriminals are using AI-driven impersonation tactics to pose as job candidates, deceive recruiters, and gain unauthorized access to organizations. Joff, Bronwen Aker, Brian Fehrman, and Derek Banks break down real-world cases, explain the challenges of spotting deepfake job scams, and share actionable strategies to secure hiring processes. Discover the red flags to watch for in virtual interviews, how attackers exploit trust, and why companies must adapt their security awareness in the age of AI.</p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I.,AI,infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/bronwen-aker/" img="https://img.transistorcdn.com/0luJs5_XfVVduE7Lwtr94vu1KDtOHd1UfPgq3R8MwgU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzdh/YmM0ZjU2NGExNGYy/NzM0M2UzODIzMmZh/ZGZlMi5qcGc.jpg">Bronwen Aker</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
    </item>
    <item>
      <title>The Hallucination Problem | Episode 20</title>
      <itunes:episode>20</itunes:episode>
      <podcast:episode>20</podcast:episode>
      <itunes:title>The Hallucination Problem | Episode 20</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">90ad3b22-8182-421c-8800-bc61dc1a0f84</guid>
      <link>https://share.transistor.fm/s/09f2b7be</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>Episode 20 - The Hallucination Problem</p><p><br></p><p>In this episode of AI Security Ops, Joff Thyer and Brian Fehrman from Black Hills Information Security dive into the hallucination problem in AI large language models and generative AI. </p><p><br></p><p>They explain what hallucinations are, why they happen, and the risks they create in real-world AI deployments. The discussion covers security implications, practical examples, and strategies organizations can use to mitigate these issues through stronger design, monitoring, and testing. </p><p><br></p><p>A must-watch for cybersecurity professionals, AI researchers, and anyone curious about the limitations and challenges of modern AI systems.</p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>Episode 20 - The Hallucination Problem</p><p><br></p><p>In this episode of AI Security Ops, Joff Thyer and Brian Fehrman from Black Hills Information Security dive into the hallucination problem in AI large language models and generative AI. </p><p><br></p><p>They explain what hallucinations are, why they happen, and the risks they create in real-world AI deployments. The discussion covers security implications, practical examples, and strategies organizations can use to mitigate these issues through stronger design, monitoring, and testing. </p><p><br></p><p>A must-watch for cybersecurity professionals, AI researchers, and anyone curious about the limitations and challenges of modern AI systems.</p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </content:encoded>
      <pubDate>Thu, 11 Sep 2025 12:00:00 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/09f2b7be/a2c6b848.mp3" length="25961160" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/E_YILikAlP8VqBntyBH3Rgf-YCRQn76iTy_73rSpdCc/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82MWYz/MjcxYjFmZTRjNTBk/Y2VlYjExNmI0ZWRi/ZDc3NS5wbmc.jpg"/>
      <itunes:duration>1615</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>Episode 20 - The Hallucination Problem</p><p><br></p><p>In this episode of AI Security Ops, Joff Thyer and Brian Fehrman from Black Hills Information Security dive into the hallucination problem in AI large language models and generative AI. </p><p><br></p><p>They explain what hallucinations are, why they happen, and the risks they create in real-world AI deployments. The discussion covers security implications, practical examples, and strategies organizations can use to mitigate these issues through stronger design, monitoring, and testing. </p><p><br></p><p>A must-watch for cybersecurity professionals, AI researchers, and anyone curious about the limitations and challenges of modern AI systems.</p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I.,AI,infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
    </item>
    <item>
      <title>News of the Month | Episode 19</title>
      <itunes:episode>19</itunes:episode>
      <podcast:episode>19</podcast:episode>
      <itunes:title>News of the Month | Episode 19</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">bc7a768b-208d-4c65-bde2-489164c0371d</guid>
      <link>https://share.transistor.fm/s/a7955d77</link>
      <description>
        <![CDATA[<p>Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – <br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a></p><p>AI News of the Month | Episode 19</p><p>In Episode 19, Brian and Derek cover a zero-click indirect prompt injection attack against ChatGPT connectors and seemingly innocent Google Calendar events that hijack smart homes via Gemini, with possible consequences for the power grid.</p><p>They'll discuss the impact of Microsoft patching a critical Azure OpenAI SSRF vulnerability and go over new NIST AI security standards, IBM’s study on shadow AI and breach costs, OpenAI’s response to chat indexing leaks, and a malicious VS Code extension that stole $500K in cryptocurrency. </p><p>#AI #CyberSecurity #PromptInjection #Malware #InfoSec #AIThreats #Hacking #GenerativeAI #Deepfakes #LLM #ShadowAI</p><ul><li>“Poisoned doc” exfiltrates data via ChatGPT Connectors (AgentFlayer) — Aug 6, 2025<ul><li>Primary: <a href="https://www.wired.com/story/poisoned-document-could-leak-secret-data-chatgpt/">https://www.wired.com/story/poisoned-document-could-leak-secret-data-chatgpt/</a></li><li>Tech write-up: <a href="https://labs.zenity.io/p/agentflayer-chatgpt-connectors-0click-attack-5b41">https://labs.zenity.io/p/agentflayer-chatgpt-connectors-0click-attack-5b41</a></li></ul></li></ul><p><br></p><ul><li>Poisoned Google Calendar invite hijacks Gemini to control a smart home — Aug 6–10, 2025<ul><li>Primary: <a href="https://www.wired.com/story/google-gemini-calendar-invite-hijack-smart-home/">https://www.wired.com/story/google-gemini-calendar-invite-hijack-smart-home/</a></li><li>Bug/patch coverage: <a href="https://www.bleepingcomputer.com/news/security/google-calendar-invites-let-researchers-hijack-gemini-to-leak-user-data/">https://www.bleepingcomputer.com/news/security/google-calendar-invites-let-researchers-hijack-gemini-to-leak-user-data/</a></li></ul></li></ul><p><br></p><ul><li>Microsoft August Patch Tuesday adds AI-surface 
fixes; critical Azure OpenAI vuln (CVE-2025-53767) — Aug 12–13, 2025<ul><li>Release coverage: <a href="https://www.techradar.com/pro/security/microsofts-latest-major-patch-fixes-a-serious-zero-day-flaw-and-a-host-of-other-issues-so-update-now">https://www.techradar.com/pro/security/microsofts-latest-major-patch-fixes-a-serious-zero-day-flaw-and-a-host-of-other-issues-so-update-now</a></li><li>CVE entry: <a href="https://nvd.nist.gov/vuln/detail/CVE-2025-53767">https://nvd.nist.gov/vuln/detail/CVE-2025-53767</a> (NVD)</li><li>Overview: <a href="https://www.tenable.com/blog/microsofts-august-2025-patch-tuesday-addresses-107-cves-cve-2025-53779">https://www.tenable.com/blog/microsofts-august-2025-patch-tuesday-addresses-107-cves-cve-2025-53779</a> (Tenable®)</li></ul></li></ul><p><br></p><ul><li>NIST proposes SP 800-53 “Control Overlays for Securing AI Systems” — Aug 14, 2025<ul><li>Announcement: <a href="https://www.nist.gov/news-events/news/2025/08/nist-releases-control-overlays-securing-ai-systems-concept-paper">https://www.nist.gov/news-events/news/2025/08/nist-releases-control-overlays-securing-ai-systems-concept-paper</a></li><li>Concept paper (PDF): <a href="https://csrc.nist.gov/csrc/media/Projects/cosais/documents/NIST-Overlays-SecuringAI-concept-paper.pdf">https://csrc.nist.gov/csrc/media/Projects/cosais/documents/NIST-Overlays-SecuringAI-concept-paper.pdf</a></li></ul></li></ul><p><br></p><ul><li>IBM 2025 “Cost of a Data Breach”: AI is both breach vector and defender — Jul 30, 2025<ul><li>Press release: <a href="https://newsroom.ibm.com/2025-07-30-ibm-report-13-of-organizations-reported-breaches-of-ai-models-or-applications%2C-97-of-which-reported-lacking-proper-ai-access-controls">https://newsroom.ibm.com/2025-07-30-ibm-report-13-of-organizations-reported-breaches-of-ai-models-or-applications%2C-97-of-which-reported-lacking-proper-ai-access-controls</a></li><li>Report: <a href="https://www.ibm.com/reports/data-breach">https://www.ibm.com/reports/data-breach</a></li><li>Analysis: <a 
href="https://venturebeat.com/security/ibm-shadow-ai-breaches-cost-670k-more-97-of-firms-lack-controls/">https://venturebeat.com/security/ibm-shadow-ai-breaches-cost-670k-more-97-of-firms-lack-controls/</a> (VentureBeat)</li></ul></li></ul><p><br></p><ul><li>OpenAI considers encrypting Temporary Chats; privacy clean-ups after search-indexing scare — Aug 18, 2025<ul><li>Interview: <a href="https://www.axios.com/2025/08/18/altman-openai-chatgpt-encrypted-chats">https://www.axios.com/2025/08/18/altman-openai-chatgpt-encrypted-chats</a></li><li>Context: <a href="https://arstechnica.com/tech-policy/2025/08/chatgpt-users-shocked-to-learn-their-chats-were-in-google-search-results/">https://arstechnica.com/tech-policy/2025/08/chatgpt-users-shocked-to-learn-their-chats-were-in-google-search-results/</a></li><li>Help center (retention): <a href="https://help.openai.com/en/articles/8914046-temporary-chat-faq">https://help.openai.com/en/articles/8914046-temporary-chat-faq</a></li></ul></li></ul><p><br></p><ul><li>Fake VS Code extension for Cursor leads to $500K crypto theft — July 11, 2025<ul><li>Primary: <a href="https://www.scworld.com/news/fake-visual-studio-code-extension-for-cursor-led-to-500k-theft">https://www.scworld.com/news/fake-visual-studio-code-extension-for-cursor-led-to-500k-theft</a> (SC Media)</li><li>Research write-up: <a href="https://securelist.com/open-source-package-for-cursor-ai-turned-into-a-crypto-heist/116908/">https://securelist.com/open-source-package-for-cursor-ai-turned-into-a-crypto-heist/116908/</a> (Securelist)</li><li>Coverage: <a href="https://www.bleepingcomputer.com/news/security/malicious-vscode-extension-in-cursor-ide-led-to-500k-crypto-theft/">https://www.bleepingcomputer.com/news/security/malicious-vscode-extension-in-cursor-ide-led-to-500k-crypto-theft/</a></li></ul></li></ul><p><br>----------------------------------------------------------------------------------------------<br>Joff Thyer - <a 
href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a><br>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a><br>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a><br>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a><br>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro</li>
<li>(00:31) - “Poisoned doc” exfiltrates data via ChatGPT Connectors (AgentFlayer)</li>
<li>(01:15) - A zero-click prompt injection</li>
<li>(02:12) - url_safe bypassed using URLs from Microsoft’s Azure Blob cloud storage</li>
<li>(07:08) - Poisoned Google Calendar invite hijacks Gemini to control a smart home</li>
<li>(08:35) - The intersection of AI and IoT</li>
<li>(09:53) - Be careful what you hook AI up to</li>
<li>(10:23) - Derek warns of threat to power grid</li>
<li>(11:54) - Mitigations - restrict permissions, sanitize calendar content</li>
<li>(13:56) - Patch Tuesday - AI-surface fixes; critical Azure OpenAI vuln</li>
<li>(15:49) - NIST proposes SP 800-53 “Control Overlays for Securing AI Systems”</li>
<li>(18:43) - IBM “Cost of a Data Breach”: AI is both breach vector and defender</li>
<li>(19:16) - Shadow AI</li>
<li>(21:49) - “The AI adoption curve is outpacing controls”</li>
<li>(23:02) - OpenAI considers encrypting Temporary Chats</li>
<li>(26:39) - Data storage and logging LLM interactions</li>
<li>(29:59) - Fake VS Code extension for Cursor leads to $500K crypto theft</li>
<li>(30:37) - Danger of using pip install as root on a server</li>
</ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – <br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a></p><p>AI News of the Month | Episode 19</p><p>In Episode 19, Brian and Derek cover a zero-click indirect prompt injection attack against ChatGPT connectors and seemingly innocent Google Calendar events that hijack smart homes via Gemini, with possible consequences for the power grid.</p><p>They'll discuss the impact of Microsoft patching a critical Azure OpenAI SSRF vulnerability and go over new NIST AI security standards, IBM’s study on shadow AI and breach costs, OpenAI’s response to chat indexing leaks, and a malicious VS Code extension that stole $500K in cryptocurrency. </p><p>#AI #CyberSecurity #PromptInjection #Malware #InfoSec #AIThreats #Hacking #GenerativeAI #Deepfakes #LLM #ShadowAI</p><ul><li>“Poisoned doc” exfiltrates data via ChatGPT Connectors (AgentFlayer) — Aug 6, 2025<ul><li>Primary: <a href="https://www.wired.com/story/poisoned-document-could-leak-secret-data-chatgpt/">https://www.wired.com/story/poisoned-document-could-leak-secret-data-chatgpt/</a></li><li>Tech write-up: <a href="https://labs.zenity.io/p/agentflayer-chatgpt-connectors-0click-attack-5b41">https://labs.zenity.io/p/agentflayer-chatgpt-connectors-0click-attack-5b41</a></li></ul></li></ul><p><br></p><ul><li>Poisoned Google Calendar invite hijacks Gemini to control a smart home — Aug 6–10, 2025<ul><li>Primary: <a href="https://www.wired.com/story/google-gemini-calendar-invite-hijack-smart-home/">https://www.wired.com/story/google-gemini-calendar-invite-hijack-smart-home/</a></li><li>Bug/patch coverage: <a href="https://www.bleepingcomputer.com/news/security/google-calendar-invites-let-researchers-hijack-gemini-to-leak-user-data/">https://www.bleepingcomputer.com/news/security/google-calendar-invites-let-researchers-hijack-gemini-to-leak-user-data/</a></li></ul></li></ul><p><br></p><ul><li>Microsoft August Patch Tuesday adds AI-surface 
fixes; critical Azure OpenAI vuln (CVE-2025-53767) — Aug 12–13, 2025<ul><li>Release coverage: <a href="https://www.techradar.com/pro/security/microsofts-latest-major-patch-fixes-a-serious-zero-day-flaw-and-a-host-of-other-issues-so-update-now">https://www.techradar.com/pro/security/microsofts-latest-major-patch-fixes-a-serious-zero-day-flaw-and-a-host-of-other-issues-so-update-now</a></li><li>CVE entry: <a href="https://nvd.nist.gov/vuln/detail/CVE-2025-53767">https://nvd.nist.gov/vuln/detail/CVE-2025-53767</a> (NVD)</li><li>Overview: <a href="https://www.tenable.com/blog/microsofts-august-2025-patch-tuesday-addresses-107-cves-cve-2025-53779">https://www.tenable.com/blog/microsofts-august-2025-patch-tuesday-addresses-107-cves-cve-2025-53779</a> (Tenable®)</li></ul></li></ul><p><br></p><ul><li>NIST proposes SP 800-53 “Control Overlays for Securing AI Systems” — Aug 14, 2025<ul><li>Announcement: <a href="https://www.nist.gov/news-events/news/2025/08/nist-releases-control-overlays-securing-ai-systems-concept-paper">https://www.nist.gov/news-events/news/2025/08/nist-releases-control-overlays-securing-ai-systems-concept-paper</a></li><li>Concept paper (PDF): <a href="https://csrc.nist.gov/csrc/media/Projects/cosais/documents/NIST-Overlays-SecuringAI-concept-paper.pdf">https://csrc.nist.gov/csrc/media/Projects/cosais/documents/NIST-Overlays-SecuringAI-concept-paper.pdf</a></li></ul></li></ul><p><br></p><ul><li>IBM 2025 “Cost of a Data Breach”: AI is both breach vector and defender — Jul 30, 2025<ul><li>Press release: <a href="https://newsroom.ibm.com/2025-07-30-ibm-report-13-of-organizations-reported-breaches-of-ai-models-or-applications%2C-97-of-which-reported-lacking-proper-ai-access-controls">https://newsroom.ibm.com/2025-07-30-ibm-report-13-of-organizations-reported-breaches-of-ai-models-or-applications%2C-97-of-which-reported-lacking-proper-ai-access-controls</a></li><li>Report: <a href="https://www.ibm.com/reports/data-breach">https://www.ibm.com/reports/data-breach</a></li><li>Analysis: <a 
href="https://venturebeat.com/security/ibm-shadow-ai-breaches-cost-670k-more-97-of-firms-lack-controls/">https://venturebeat.com/security/ibm-shadow-ai-breaches-cost-670k-more-97-of-firms-lack-controls/</a> (VentureBeat)</li></ul></li></ul><p><br></p><ul><li>OpenAI considers encrypting Temporary Chats; privacy clean-ups after search-indexing scare — Aug 18, 2025<ul><li>Interview: <a href="https://www.axios.com/2025/08/18/altman-openai-chatgpt-encrypted-chats">https://www.axios.com/2025/08/18/altman-openai-chatgpt-encrypted-chats</a></li><li>Context: <a href="https://arstechnica.com/tech-policy/2025/08/chatgpt-users-shocked-to-learn-their-chats-were-in-google-search-results/">https://arstechnica.com/tech-policy/2025/08/chatgpt-users-shocked-to-learn-their-chats-were-in-google-search-results/</a></li><li>Help center (retention): <a href="https://help.openai.com/en/articles/8914046-temporary-chat-faq">https://help.openai.com/en/articles/8914046-temporary-chat-faq</a></li></ul></li></ul><p><br></p><ul><li>Fake VS Code extension for Cursor leads to $500K crypto theft — July 11, 2025<ul><li>Primary: <a href="https://www.scworld.com/news/fake-visual-studio-code-extension-for-cursor-led-to-500k-theft">https://www.scworld.com/news/fake-visual-studio-code-extension-for-cursor-led-to-500k-theft</a> (SC Media)</li><li>Research write-up: <a href="https://securelist.com/open-source-package-for-cursor-ai-turned-into-a-crypto-heist/116908/">https://securelist.com/open-source-package-for-cursor-ai-turned-into-a-crypto-heist/116908/</a> (Securelist)</li><li>Coverage: <a href="https://www.bleepingcomputer.com/news/security/malicious-vscode-extension-in-cursor-ide-led-to-500k-crypto-theft/">https://www.bleepingcomputer.com/news/security/malicious-vscode-extension-in-cursor-ide-led-to-500k-crypto-theft/</a></li></ul></li></ul><p><br>----------------------------------------------------------------------------------------------<br>Joff Thyer - <a 
href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a><br>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a><br>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a><br>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a><br>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro</li>
<li>(00:31) - “Poisoned doc” exfiltrates data via ChatGPT Connectors (AgentFlayer)</li>
<li>(01:15) - A zero-click prompt injection</li>
<li>(02:12) - url_safe bypassed using URLs from Microsoft’s Azure Blob cloud storage</li>
<li>(07:08) - Poisoned Google Calendar invite hijacks Gemini to control a smart home</li>
<li>(08:35) - The intersection of AI and IoT</li>
<li>(09:53) - Be careful what you hook AI up to</li>
<li>(10:23) - Derek warns of threat to power grid</li>
<li>(11:54) - Mitigations - restrict permissions, sanitize calendar content</li>
<li>(13:56) - Patch Tuesday - AI-surface fixes; critical Azure OpenAI vuln</li>
<li>(15:49) - NIST proposes SP 800-53 “Control Overlays for Securing AI Systems”</li>
<li>(18:43) - IBM “Cost of a Data Breach”: AI is both breach vector and defender</li>
<li>(19:16) - Shadow AI</li>
<li>(21:49) - “The AI adoption curve is outpacing controls”</li>
<li>(23:02) - OpenAI considers encrypting Temporary Chats</li>
<li>(26:39) - Data storage and logging LLM interactions</li>
<li>(29:59) - Fake VS Code extension for Cursor leads to $500K crypto theft</li>
<li>(30:37) - Danger of using pip install as root on a server</li>
</ul>]]>
      </content:encoded>
      <pubDate>Thu, 04 Sep 2025 12:00:00 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/a7955d77/97319cb4.mp3" length="35907641" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/lSItytjL8CeUGTpywpgvMpY0WTB4vk3Rx-LzDuiihgo/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kMWU4/MGU5NTIwMGU4MGU5/NmYyNTVkZTRiYWIx/MDkxZC5wbmc.jpg"/>
      <itunes:duration>2237</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – <br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a></p><p>AI News of the Month | Episode 19</p><p>In Episode 19,Brianand Derek cover a zero-click indirect prompt injection attack against ChatGPT connectors and seemingly innocent Google Calendar events that hijack smart homes via Gemini, with possible consequences for the power grid.</p><p>They'll discuss the impact of Microsoft patching a critical Azure OpenAI SSRF vulnerability and go over new NIST AI security standards, IBM’s study on shadow AI and breach costs, OpenAI’s response to chat indexing leaks, and a malicious VS Code extension that stole $500K in cryptocurrency. </p><p>#AI #CyberSecurity #PromptInjection #Malware #InfoSec #AIThreats #Hacking #GenerativeAI #Deepfakes #LLM #ShadowAI</p><ul><li>“Poisoned doc” exfiltrates data via ChatGPT Connectors (AgentFlayer) — Aug 6, 2025<ul><li>Primary: <a href="https://www.wired.com/story/poisoned-document-could-leak-secret-data-chatgpt/">https://www.wired.com/story/poisoned-document-could-leak-secret-data-chatgpt/</a></li><li>Tech write-up: <a href="https://labs.zenity.io/p/agentflayer-chatgpt-connectors-0click-attack-5b41">https://labs.zenity.io/p/agentflayer-chatgpt-connectors-0click-attack-5b41</a></li></ul></li></ul><p><br></p><ul><li>Poisoned Google Calendar invite hijacks Gemini to control a smart home — Aug 6–10, 2025<ul><li>Primary: <a href="https://www.wired.com/story/google-gemini-calendar-invite-hijack-smart-home/">https://www.wired.com/story/google-gemini-calendar-invite-hijack-smart-home/</a></li><li>Bug/patch coverage: <a href="https://www.bleepingcomputer.com/news/security/google-calendar-invites-let-researchers-hijack-gemini-to-leak-user-data/">https://www.bleepingcomputer.com/news/security/google-calendar-invites-let-researchers-hijack-gemini-to-leak-user-data/</a></li></ul></li></ul><p><br></p><ul><li>Microsoft August Patch Tuesday adds AI-surface 
fixes; critical Azure OpenAI vuln (CVE-2025-53767) — Aug 12–13, 2025<ul><li>Release coverage: <a href="https://www.techradar.com/pro/security/microsofts-latest-major-patch-fixes-a-serious-zero-day-flaw-and-a-host-of-other-issues-so-update-now">https://www.techradar.com/pro/security/microsofts-latest-major-patch-fixes-a-serious-zero-day-flaw-and-a-host-of-other-issues-so-update-now</a></li><li>CVE entry: <a href="https://nvd.nist.gov/vuln/detail/CVE-2025-53767">https://nvd.nist.gov/vuln/detail/CVE-2025-53767</a> (NVD)</li><li>Overview: <a href="https://www.tenable.com/blog/microsofts-august-2025-patch-tuesday-addresses-107-cves-cve-2025-53779">https://www.tenable.com/blog/microsofts-august-2025-patch-tuesday-addresses-107-cves-cve-2025-53779</a> (Tenable®)</li></ul></li></ul><p><br></p><ul><li>NIST proposes SP 800-53 “Control Overlays for Securing AI Systems” — Aug 14, 2025<ul><li>Announcement: <a href="https://www.nist.gov/news-events/news/2025/08/nist-releases-control-overlays-securing-ai-systems-concept-paper">https://www.nist.gov/news-events/news/2025/08/nist-releases-control-overlays-securing-ai-systems-concept-paper</a></li><li>Concept paper (PDF): <a href="https://csrc.nist.gov/csrc/media/Projects/cosais/documents/NIST-Overlays-SecuringAI-concept-paper.pdf">https://csrc.nist.gov/csrc/media/Projects/cosais/documents/NIST-Overlays-SecuringAI-concept-paper.pdf</a></li></ul></li></ul><p><br></p><ul><li>IBM 2025 “Cost of a Data Breach”: AI is both breach vector and defender — Jul 30, 2025<ul><li>Press release: <a href="https://newsroom.ibm.com/2025-07-30-ibm-report-13-of-organizations-reported-breaches-of-ai-models-or-applications%2C-97-of-which-reported-lacking-proper-ai-access-controls">https://newsroom.ibm.com/2025-07-30-ibm-report-13-of-organizations-reported-breaches-of-ai-models-or-applications%2C-97-of-which-reported-lacking-proper-ai-access-controls</a></li><li>Report: https://www.ibm.com/reports/data-breach</li><li>Analysis: <a 
href="https://venturebeat.com/security/ibm-shadow-ai-breaches-cost-670k-more-97-of-firms-lack-controls/">https://venturebeat.com/security/ibm-shadow-ai-breaches-cost-670k-more-97-of-firms-lack-controls/</a> (VentureBeat)</li></ul></li></ul><p><br></p><ul><li>OpenAI considers encrypting Temporary Chats; privacy clean-ups after search-indexing scare — Aug 18, 2025<ul><li>Interview: <a href="https://www.axios.com/2025/08/18/altman-openai-chatgpt-encrypted-chats">https://www.axios.com/2025/08/18/altman-openai-chatgpt-encrypted-chats</a></li><li>Context: <a href="https://arstechnica.com/tech-policy/2025/08/chatgpt-users-shocked-to-learn-their-chats-were-in-google-search-results/">https://arstechnica.com/tech-policy/2025/08/chatgpt-users-shocked-to-learn-their-chats-were-in-google-search-results/</a></li><li>Help center (retention): <a href="https://help.openai.com/en/articles/8914046-temporary-chat-faq">https://help.openai.com/en/articles/8914046-temporary-chat-faq</a></li></ul></li></ul><p><br></p><ul><li>Fake VS Code extension for Cursor leads to $500K crypto theft — July 11, 2025<ul><li>Primary: <a href="https://www.scworld.com/news/fake-visual-studio-code-extension-for-cursor-led-to-500k-theft">https://www.scworld.com/news/fake-visual-studio-code-extension-for-cursor-led-to-500k-theft</a> (SC Media)</li><li>Research write-up: <a href="https://securelist.com/open-source-package-for-cursor-ai-turned-into-a-crypto-heist/116908/">https://securelist.com/open-source-package-for-cursor-ai-turned-into-a-crypto-heist/116908/</a> (Securelist)</li><li>Coverage: <a href="https://www.bleepingcomputer.com/news/security/malicious-vscode-extension-in-cursor-ide-led-to-500k-crypto-theft/">https://www.bleepingcomputer.com/news/security/malicious-vscode-extension-in-cursor-ide-led-to-500k-crypto-theft/</a></li></ul></li></ul><p><br>----------------------------------------------------------------------------------------------<br>Joff Thyer - <a 
href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a><br>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a><br>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a><br>Bronwen Aker -<a href="http://blackhillsinfosec.com/team/bronwen-aker/"> http://blackhillsinfosec.com/team/bronwen-aker/</a><br>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro</li>
<li>(00:31) - “Poisoned doc” exfiltrates data via ChatGPT Connectors (AgentFlayer)</li>
<li>(01:15) - A zero-click prompt injection</li>
<li>(02:12) - url_safe bypassed using URLs from Microsoft’s Azure Blob cloud storage</li>
<li>(07:08) - Poisoned Google Calendar invite hijacks Gemini to control a smart home</li>
<li>(08:35) - The intersection of AI and IoT</li>
<li>(09:53) - Be careful what you hook AI up to</li>
<li>(10:23) - Derek warns of threat to power grid</li>
<li>(11:54) - Mitigations - restrict permissions, sanitize calendar content</li>
<li>(13:56) - Patch Tuesday - AI-surface fixes; critical Azure OpenAI vuln</li>
<li>(15:49) - NIST proposes SP 800-53 “Control Overlays for Securing AI Systems”</li>
<li>(18:43) - IBM “Cost of a Data Breach”: AI is both breach vector and defender</li>
<li>(19:16) - Shadow AI</li>
<li>(21:49) - “The AI adoption curve is outpacing controls”</li>
<li>(23:02) - OpenAI considers encrypting Temporary Chats</li>
<li>(26:39) - Data storage and logging LLM interactions</li>
<li>(29:59) - Fake VS Code extension for Cursor leads to $500K crypto theft</li>
<li>(30:37) - Danger of using pip install as root on a server</li>
</ul>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I.,AI,infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/a7955d77/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Malware in the Age of AI | EP 18</title>
      <itunes:episode>18</itunes:episode>
      <podcast:episode>18</podcast:episode>
      <itunes:title>Malware in the Age of AI | EP 18</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a99b82d5-912f-486f-9115-871908aec7cb</guid>
      <link>https://share.transistor.fm/s/76bfb37a</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a></p><p><br>Malware in the Age of AI | Episode 18</p><p>In Episode 18, hosts Joff Thyer, Derek Banks and Brian Fehrman discuss the rise of AI-powered malware. From polymorphic keyloggers like Black Mamba to the use of ChatGPT, WormGPT, and fine-tuned LLMs for cyberattacks, the team will explain how generative AI is reshaping the security landscape.</p><p>They'll break down the real risks vs. hype, including prompt injection, jailbreaking, deepfakes, and AI-driven fraud, while also sharing strategies defenders can use to fight back.</p><p>The discussion highlights both the ethical implications and the critical need for defense-in-depth as threat actors use AI to accelerate their attacks.</p><p><br>#AI #Cybersecurity #Malware #AIThreats #Deepfakes #LLM #InfoSec #AIinSecurity #GenerativeAI #Hacking</p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker -<a href="http://blackhillsinfosec.com/team/bronwen-aker/"> http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro</li>
<li>(01:15) - Black Mamba polymorphic AI keylogger</li>
<li>(02:47) - Can ChatGPT-5 generate malware for us?</li>
<li>(03:42) - Guardrail circumvention technique #1</li>
<li>(04:16) - Guardrail circumvention technique #2</li>
<li>(05:30) - Guardrail circumvention technique #3</li>
<li>(05:59) - Guardrail circumvention technique #4</li>
<li>(06:30) - Using an Abliterated Model</li>
<li>(08:32) - AI models have democratized software creation</li>
<li>(11:20) - Polymorphic keyloggers are not new</li>
<li>(12:03) - AI makes it faster to iterate polymorphic malware</li>
<li>(12:33) - AI is able to analyze source code and find more vulnerabilities</li>
<li>(15:16) - How scared should we be? (hype vs reality)</li>
<li>(16:10) - Knowing enough to ask the right questions is important</li>
<li>(17:41) - Significant risks of AI fraud and social engineering</li>
<li>(19:32) - Business email compromise</li>
<li>(21:10) - How defenders can use AI</li>
<li>(24:28) - Audio deepfakes have become easier to create</li>
<li>(25:06) - Ethical concerns for pentesters using AI</li>
<li>(29:26) - In one sentence, how will AI change malware production in the near future?</li>
</ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a></p><p><br>Malware in the Age of AI | Episode 18</p><p>In Episode 18, hosts Joff Thyer, Derek Banks and Brian Fehrman discuss the rise of AI-powered malware. From polymorphic keyloggers like Black Mamba to the use of ChatGPT, WormGPT, and fine-tuned LLMs for cyberattacks, the team will explain how generative AI is reshaping the security landscape.</p><p>They'll break down the real risks vs. hype, including prompt injection, jailbreaking, deepfakes, and AI-driven fraud, while also sharing strategies defenders can use to fight back.</p><p>The discussion highlights both the ethical implications and the critical need for defense-in-depth as threat actors use AI to accelerate their attacks.</p><p><br>#AI #Cybersecurity #Malware #AIThreats #Deepfakes #LLM #InfoSec #AIinSecurity #GenerativeAI #Hacking</p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker -<a href="http://blackhillsinfosec.com/team/bronwen-aker/"> http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro</li>
<li>(01:15) - Black Mamba polymorphic AI keylogger</li>
<li>(02:47) - Can ChatGPT-5 generate malware for us?</li>
<li>(03:42) - Guardrail circumvention technique #1</li>
<li>(04:16) - Guardrail circumvention technique #2</li>
<li>(05:30) - Guardrail circumvention technique #3</li>
<li>(05:59) - Guardrail circumvention technique #4</li>
<li>(06:30) - Using an Abliterated Model</li>
<li>(08:32) - AI models have democratized software creation</li>
<li>(11:20) - Polymorphic keyloggers are not new</li>
<li>(12:03) - AI makes it faster to iterate polymorphic malware</li>
<li>(12:33) - AI is able to analyze source code and find more vulnerabilities</li>
<li>(15:16) - How scared should we be? (hype vs reality)</li>
<li>(16:10) - Knowing enough to ask the right questions is important</li>
<li>(17:41) - Significant risks of AI fraud and social engineering</li>
<li>(19:32) - Business email compromise</li>
<li>(21:10) - How defenders can use AI</li>
<li>(24:28) - Audio deepfakes have become easier to create</li>
<li>(25:06) - Ethical concerns for pentesters using AI</li>
<li>(29:26) - In one sentence, how will AI change malware production in the near future?</li>
</ul>]]>
      </content:encoded>
      <pubDate>Thu, 28 Aug 2025 12:00:00 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/76bfb37a/f58af94e.mp3" length="31514000" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/sjQm1GrfUYymnl01hE8Rjnw7fv3BV8tEd-a30jXi8R0/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yZmZk/YmM2Yjg1MTc3MDg3/MWNjN2E4YThmZjc3/ZGI1OC5wbmc.jpg"/>
      <itunes:duration>1962</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a></p><p><br>Malware in the Age of AI | Episode 18</p><p>In Episode 18, hosts Joff Thyer, Derek Banks and Brian Fehrman discuss the rise of AI-powered malware. From polymorphic keyloggers like Black Mamba to the use of ChatGPT, WormGPT, and fine-tuned LLMs for cyberattacks, the team will explain how generative AI is reshaping the security landscape.</p><p>They'll break down the real risks vs. hype, including prompt injection, jailbreaking, deepfakes, and AI-driven fraud, while also sharing strategies defenders can use to fight back.</p><p>The discussion highlights both the ethical implications and the critical need for defense-in-depth as threat actors use AI to accelerate their attacks.</p><p><br>#AI #Cybersecurity #Malware #AIThreats #Deepfakes #LLM #InfoSec #AIinSecurity #GenerativeAI #Hacking</p><p><br></p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker -<a href="http://blackhillsinfosec.com/team/bronwen-aker/"> http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro</li>
<li>(01:15) - Black Mamba polymorphic AI keylogger</li>
<li>(02:47) - Can ChatGPT-5 generate malware for us?</li>
<li>(03:42) - Guardrail circumvention technique #1</li>
<li>(04:16) - Guardrail circumvention technique #2</li>
<li>(05:30) - Guardrail circumvention technique #3</li>
<li>(05:59) - Guardrail circumvention technique #4</li>
<li>(06:30) - Using an Abliterated Model</li>
<li>(08:32) - AI models have democratized software creation</li>
<li>(11:20) - Polymorphic keyloggers are not new</li>
<li>(12:03) - AI makes it faster to iterate polymorphic malware</li>
<li>(12:33) - AI is able to analyze source code and find more vulnerabilities</li>
<li>(15:16) - How scared should we be? (hype vs reality)</li>
<li>(16:10) - Knowing enough to ask the right questions is important</li>
<li>(17:41) - Significant risks of AI fraud and social engineering</li>
<li>(19:32) - Business email compromise</li>
<li>(21:10) - How defenders can use AI</li>
<li>(24:28) - Audio deepfakes have become easier to create</li>
<li>(25:06) - Ethical concerns for pentesters using AI</li>
<li>(29:26) - In one sentence, how will AI change malware production in the near future?</li>
</ul>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I.,AI,infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/76bfb37a/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Community Q&amp;A | Episode 17</title>
      <itunes:episode>17</itunes:episode>
      <podcast:episode>17</podcast:episode>
      <itunes:title>Community Q&amp;A | Episode 17</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">6c48964b-bcdc-4620-9721-27c3c3870468</guid>
      <link>https://share.transistor.fm/s/4b24e2a4</link>
      <description>
        <![CDATA[<p>Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – <br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a></p><p>Community Q&amp;A | Episode 17</p><p>In episode 17 of the AI Security Ops Podcast, hosts Joff Thyer, Derek Banks, Brian Fehrman and Bronwen Aker answer viewer-submitted questions about system prompts, prompt injection risks, AI hallucinations, deep fakes, and when (and when not) to use AI in cybersecurity. </p><p>They'll discuss the difference between system and user prompts, how temperature settings impact LLM outputs, and the biggest mistakes companies make when deploying AI models. </p><p>They'll also explain how to reduce hallucinations, and approach AI responsibly in security workflows. Derek explains his method for detecting audio deep fakes.</p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker -<a href="http://blackhillsinfosec.com/team/bronwen-aker/"> http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro</li>
<li>(01:10) - What is a system prompt? How is it different from a user prompt?</li>
<li>(03:35) - What are some common system prompt mistakes?</li>
<li>(06:54) - Does repeating a prompt give different responses? (non-deterministic)</li>
<li>(07:56) - The temperature knob effect</li>
<li>(12:18) - When should I use AI? When should I not?</li>
<li>(16:47) - What are best practices to reduce hallucinations?</li>
<li>(20:29) - End-user temperature knob work-around</li>
<li>(22:55) - AI bots that rewrite their code to avoid shutdown commands</li>
<li>(26:53) - NCSL.org - Updates on legislation affecting AI</li>
<li>(29:44) - How do we detect AI deep fakes?</li>
<li>(30:00) - Derek’s deepfake demo video</li>
<li>(30:38) - DISCLAIMER - Do Not use AI deep fakes to break the law!</li>
<li>(31:29) - F5-tts.org - Deep fake website</li>
<li>(35:02) - Derek pranks his family using AI</li>
</ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – <br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a></p><p>Community Q&amp;A | Episode 17</p><p>In episode 17 of the AI Security Ops Podcast, hosts Joff Thyer, Derek Banks, Brian Fehrman and Bronwen Aker answer viewer-submitted questions about system prompts, prompt injection risks, AI hallucinations, deep fakes, and when (and when not) to use AI in cybersecurity. </p><p>They'll discuss the difference between system and user prompts, how temperature settings impact LLM outputs, and the biggest mistakes companies make when deploying AI models. </p><p>They'll also explain how to reduce hallucinations, and approach AI responsibly in security workflows. Derek explains his method for detecting audio deep fakes.</p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker -<a href="http://blackhillsinfosec.com/team/bronwen-aker/"> http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro</li>
<li>(01:10) - What is a system prompt? How is it different from a user prompt?</li>
<li>(03:35) - What are some common system prompt mistakes?</li>
<li>(06:54) - Does repeating a prompt give different responses? (non-deterministic)</li>
<li>(07:56) - The temperature knob effect</li>
<li>(12:18) - When should I use AI? When should I not?</li>
<li>(16:47) - What are best practices to reduce hallucinations?</li>
<li>(20:29) - End-user temperature knob work-around</li>
<li>(22:55) - AI bots that rewrite their code to avoid shutdown commands</li>
<li>(26:53) - NCSL.org - Updates on legislation affecting AI</li>
<li>(29:44) - How do we detect AI deep fakes?</li>
<li>(30:00) - Derek’s deepfake demo video</li>
<li>(30:38) - DISCLAIMER - Do Not use AI deep fakes to break the law!</li>
<li>(31:29) - F5-tts.org - Deep fake website</li>
<li>(35:02) - Derek pranks his family using AI</li>
</ul>]]>
      </content:encoded>
      <pubDate>Thu, 21 Aug 2025 15:04:00 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/4b24e2a4/fcdc3a65.mp3" length="35921802" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/ZAI9OPxiVA9bVrdeCVCreoIqf9f2vQuZ5V7ebPVkfg4/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS80MzA3/ODdiZmU1MjI2YmIz/ZDc3YzIzZWJhNmRl/YmRmOS5wbmc.jpg"/>
      <itunes:duration>2238</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – <br><a href="https://poweredbybhis.com/">https://poweredbybhis.com</a></p><p>Community Q&amp;A | Episode 17</p><p>In episode 17 of the AI Security Ops Podcast, hosts Joff Thyer, Derek Banks, Brian Fehrman and Bronwen Aker answer viewer-submitted questions about system prompts, prompt injection risks, AI hallucinations, deep fakes, and when (and when not) to use AI in cybersecurity. </p><p>They'll discuss the difference between system and user prompts, how temperature settings impact LLM outputs, and the biggest mistakes companies make when deploying AI models. </p><p>They'll also explain how to reduce hallucinations, and approach AI responsibly in security workflows. Derek explains his method for detecting audio deep fakes.</p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker -<a href="http://blackhillsinfosec.com/team/bronwen-aker/"> http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>
<ul><li>(00:00) - Intro</li>
<li>(01:10) - What is a system prompt? How is it different from a user prompt?</li>
<li>(03:35) - What are some common system prompt mistakes?</li>
<li>(06:54) - Does repeating a prompt give different responses? (non-deterministic)</li>
<li>(07:56) - The temperature knob effect</li>
<li>(12:18) - When should I use AI? When should I not?</li>
<li>(16:47) - What are best practices to reduce hallucinations?</li>
<li>(20:29) - End-user temperature knob work-around</li>
<li>(22:55) - AI bots that rewrite their code to avoid shutdown commands</li>
<li>(26:53) - NCSL.org - Updates on legislation affecting AI</li>
<li>(29:44) - How do we detect AI deep fakes?</li>
<li>(30:00) - Derek’s deepfake demo video</li>
<li>(30:38) - DISCLAIMER - Do Not use AI deep fakes to break the law!</li>
<li>(31:29) - F5-tts.org - Deep fake website</li>
<li>(35:02) - Derek pranks his family using AI</li>
</ul>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I.,AI,infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/bronwen-aker/" img="https://img.transistorcdn.com/0luJs5_XfVVduE7Lwtr94vu1KDtOHd1UfPgq3R8MwgU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzdh/YmM0ZjU2NGExNGYy/NzM0M2UzODIzMmZh/ZGZlMi5qcGc.jpg">Bronwen Aker</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/4b24e2a4/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>A Conversation with Daniel Miessler | Episode 16</title>
      <itunes:episode>16</itunes:episode>
      <podcast:episode>16</podcast:episode>
      <itunes:title>A Conversation with Daniel Miessler | Episode 16</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f759ef18-bff0-476f-aa55-f848dd8bf50c</guid>
      <link>https://share.transistor.fm/s/cb6b4b8f</link>
      <description>
        <![CDATA[<p>A Conversation with Daniel Miessler</p><p>In Episode 16, Joff and the team welcome human-centric AI innovator Daniel Miessler, creator of Fabric, an AI framework for solving real-world problems from a human perspective.</p><p>The conversation covers AI’s role in cybersecurity, the importance of clarity in “intent engineering” over prompt tricks, and the risks and opportunities of deploying large language models. They explore the shift from “vibe coding” to “spec coding,” the rise of AI scaffolding over raw model improvements, and what AI advancements including GPT-5 mean for the future of knowledge work.</p><p><br>"Introducing Fabric — A Human AI Augmentation Framework"<br><a href="https://www.youtube.com/watch?v=wPEyyigh10g">https://www.youtube.com/watch?v=wPEyyigh10g</a></p><p>Daniel's GitHub repository:<br><a href="https://github.com/danielmiessler/Fabric">https://github.com/danielmiessler/Fabric</a></p><p><br>#AI #CyberSecurity #AgenticAI #SecurityOps #PromptEngineering</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>A Conversation with Daniel Miessler</p><p>In Episode 16, Joff and the team welcome human-centric AI innovator Daniel Miessler, creator of Fabric, an AI framework for solving real-world problems from a human perspective.</p><p>The conversation covers AI’s role in cybersecurity, the importance of clarity in “intent engineering” over prompt tricks, and the risks and opportunities of deploying large language models. They explore the shift from “vibe coding” to “spec coding,” the rise of AI scaffolding over raw model improvements, and what AI advancements including GPT-5 mean for the future of knowledge work.</p><p><br>"Introducing Fabric — A Human AI Augmentation Framework"<br><a href="https://www.youtube.com/watch?v=wPEyyigh10g">https://www.youtube.com/watch?v=wPEyyigh10g</a></p><p>Daniel's GitHub repository:<br><a href="https://github.com/danielmiessler/Fabric">https://github.com/danielmiessler/Fabric</a></p><p><br>#AI #CyberSecurity #AgenticAI #SecurityOps #PromptEngineering</p>]]>
      </content:encoded>
      <pubDate>Thu, 14 Aug 2025 12:00:00 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/cb6b4b8f/7ccf9e8d.mp3" length="43241921" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/i-fo4OQKUQEU-CE_FnyhIR3cyhTQZC5rzZ7PfqsqKOk/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS83MDQ5/MDQxOTgwZTMyZDll/ZTk4YzBlNjdhMGE2/NTM2OC5wbmc.jpg"/>
      <itunes:duration>2695</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>A Conversation with Daniel Miessler</p><p>In Episode 16, Joff and the team welcome human-centric AI innovator Daniel Miessler, creator of Fabric, an AI framework for solving real-world problems from a human perspective.</p><p>The conversation covers AI’s role in cybersecurity, the importance of clarity in “intent engineering” over prompt tricks, and the risks and opportunities of deploying large language models. They explore the shift from “vibe coding” to “spec coding,” the rise of AI scaffolding over raw model improvements, and what AI advancements including GPT-5 mean for the future of knowledge work.</p><p><br>"Introducing Fabric — A Human AI Augmentation Framework"<br><a href="https://www.youtube.com/watch?v=wPEyyigh10g">https://www.youtube.com/watch?v=wPEyyigh10g</a></p><p>Daniel's GitHub repository:<br><a href="https://github.com/danielmiessler/Fabric">https://github.com/danielmiessler/Fabric</a></p><p><br>#AI #CyberSecurity #AgenticAI #SecurityOps #PromptEngineering</p>]]>
      </itunes:summary>
      <itunes:keywords>AI,Fabric,Daniel Miessler</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/bronwen-aker/" img="https://img.transistorcdn.com/0luJs5_XfVVduE7Lwtr94vu1KDtOHd1UfPgq3R8MwgU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzdh/YmM0ZjU2NGExNGYy/NzM0M2UzODIzMmZh/ZGZlMi5qcGc.jpg">Bronwen Aker</podcast:person>
      <podcast:person role="Guest" href="https://danielmiessler.com" img="https://img.transistorcdn.com/33wYrhZoEpm1o4QAYD2a0oB_xqaa4Nb_6Shdy8X5S7M/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNTM1/MmUwZGEwNjZiMTBl/MDJhNTAxZmM3ZmJk/Yzk0OS5wbmc.jpg">Daniel Miessler</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/cb6b4b8f/transcription.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/cb6b4b8f/transcription.srt" type="application/x-subrip" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/cb6b4b8f/transcription.json" type="application/json" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/cb6b4b8f/transcription.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/cb6b4b8f/transcription" type="text/html"/>
    </item>
    <item>
      <title>News of the Month – Episode 15</title>
      <itunes:episode>15</itunes:episode>
      <podcast:episode>15</podcast:episode>
      <itunes:title>News of the Month – Episode 15</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">8fe014d4-e540-49fd-b8ad-498a192619b0</guid>
      <link>https://share.transistor.fm/s/7e837981</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>In this episode, we'll discuss Palo Alto Networks’ acquisition of Protect AI, the rise of “Shadow AI” in enterprises, alarming AI-driven data leaks, and vibe coding gone wrong. We'll dive into critical issues like AI hallucinations and the growing need for "human in the loop" oversight. We'll wrap up with a discussion of Proton’s Lumo AI chatbot, disappearing medical disclaimers in AI chatbots and data poisoning in Amazon's AI coding agent.</p><p><br></p><p>#AI #Cybersecurity #LLM #AInews #AISecurityOps #BlackHillsInfosec #LLMGuard #ShadowAI #DataLeak #AgenticAI #PrivacyTech #VibeCoding #ProtectAI</p><p><br></p><p><br></p><p><br></p><p>00:00 - Welcome, Intro</p><p>00:58 - Palo Alto Networks Completes Acquisition of Protect AI</p><p><a href="https://www.paloaltonetworks.com/company/press/2025/palo-alto-networks-completes-acquisition-of-protect-ai">https://www.paloaltonetworks.com/company/press/2025/palo-alto-networks-completes-acquisition-of-protect-ai</a></p><p>04:53 - Metomic Finds AI Data Leaks Impact 68% of Organizations, But Only 23% Have Proper AI Data Security Policies </p><p><a href="https://www.metomic.io/resource-centre/metomic-finds-ai-data-leaks-impact-68-of-organizations-but-only-23-have-proper-ai-data-security-policies">https://www.metomic.io/resource-centre/metomic-finds-ai-data-leaks-impact-68-of-organizations-but-only-23-have-proper-ai-data-security-policies</a></p><p>09:46 - S&amp;P 500’s AI adoption may invite data breaches, new research shows</p><p><a href="https://cybernews.com/security/sp-500-companies-ai-security-risks-report/">https://cybernews.com/security/sp-500-companies-ai-security-risks-report/</a></p><p>12:53 - Vibe Coding Fiasco: AI Agent Goes Rogue, Deletes Company's Entire Database</p><p><a 
href="https://www.pcmag.com/news/vibe-coding-fiasco-replite-ai-agent-goes-rogue-deletes-company-database">https://www.pcmag.com/news/vibe-coding-fiasco-replite-ai-agent-goes-rogue-deletes-company-database</a></p><p>18:47 - A major AI training data set contains millions of examples of personal data</p><p><a href="https://www.technologyreview.com/2025/07/18/1120466/a-major-ai-training-data-set-contains-millions-of-examples-of-personal-data/">https://www.technologyreview.com/2025/07/18/1120466/a-major-ai-training-data-set-contains-millions-of-examples-of-personal-data/</a></p><p>23:34 - Introducing Lumo, the AI where every conversation is confidential</p><p><a href="https://proton.me/blog/lumo-ai">https://proton.me/blog/lumo-ai</a></p><p>28:56 - AI companies have stopped warning you that their chatbots aren’t doctors</p><p><a href="https://www.technologyreview.com/2025/07/21/1120522/ai-companies-have-stopped-warning-you-that-their-chatbots-arent-doctors/">https://www.technologyreview.com/2025/07/21/1120522/ai-companies-have-stopped-warning-you-that-their-chatbots-arent-doctors/</a></p><p>36:53 - Hacker Plants Computer 'Wiping' Commands in Amazon's AI Coding Agent</p><p><a href="https://www.404media.co/hacker-plants-computer-wiping-commands-in-amazons-ai-coding-agent/">https://www.404media.co/hacker-plants-computer-wiping-commands-in-amazons-ai-coding-agent/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>In this episode, we'll discuss Palo Alto Networks’ acquisition of Protect AI, the rise of “Shadow AI” in enterprises, alarming AI-driven data leaks, and vibe coding gone wrong. We'll dive into critical issues like AI hallucinations and the growing need for "human in the loop" oversight. We'll wrap up with a discussion of Proton’s Lumo AI chatbot, disappearing medical disclaimers in AI chatbots and data poisoning in Amazon's AI coding agent.</p><p><br></p><p>#AI #Cybersecurity #LLM #AInews #AISecurityOps #BlackHillsInfosec #LLMGuard #ShadowAI #DataLeak #AgenticAI #PrivacyTech #VibeCoding #ProtectAI</p><p><br></p><p><br></p><p><br></p><p>00:00 - Welcome, Intro</p><p>00:58 - Palo Alto Networks Completes Acquisition of Protect AI</p><p><a href="https://www.paloaltonetworks.com/company/press/2025/palo-alto-networks-completes-acquisition-of-protect-ai">https://www.paloaltonetworks.com/company/press/2025/palo-alto-networks-completes-acquisition-of-protect-ai</a></p><p>04:53 - Metomic Finds AI Data Leaks Impact 68% of Organizations, But Only 23% Have Proper AI Data Security Policies </p><p><a href="https://www.metomic.io/resource-centre/metomic-finds-ai-data-leaks-impact-68-of-organizations-but-only-23-have-proper-ai-data-security-policies">https://www.metomic.io/resource-centre/metomic-finds-ai-data-leaks-impact-68-of-organizations-but-only-23-have-proper-ai-data-security-policies</a></p><p>09:46 - S&amp;P 500’s AI adoption may invite data breaches, new research shows</p><p><a href="https://cybernews.com/security/sp-500-companies-ai-security-risks-report/">https://cybernews.com/security/sp-500-companies-ai-security-risks-report/</a></p><p>12:53 - Vibe Coding Fiasco: AI Agent Goes Rogue, Deletes Company's Entire Database</p><p><a 
href="https://www.pcmag.com/news/vibe-coding-fiasco-replite-ai-agent-goes-rogue-deletes-company-database">https://www.pcmag.com/news/vibe-coding-fiasco-replite-ai-agent-goes-rogue-deletes-company-database</a></p><p>18:47 - A major AI training data set contains millions of examples of personal data</p><p><a href="https://www.technologyreview.com/2025/07/18/1120466/a-major-ai-training-data-set-contains-millions-of-examples-of-personal-data/">https://www.technologyreview.com/2025/07/18/1120466/a-major-ai-training-data-set-contains-millions-of-examples-of-personal-data/</a></p><p>23:34 - Introducing Lumo, the AI where every conversation is confidential</p><p><a href="https://proton.me/blog/lumo-ai">https://proton.me/blog/lumo-ai</a></p><p>28:56 - AI companies have stopped warning you that their chatbots aren’t doctors</p><p><a href="https://www.technologyreview.com/2025/07/21/1120522/ai-companies-have-stopped-warning-you-that-their-chatbots-arent-doctors/">https://www.technologyreview.com/2025/07/21/1120522/ai-companies-have-stopped-warning-you-that-their-chatbots-arent-doctors/</a></p><p>36:53 - Hacker Plants Computer 'Wiping' Commands in Amazon's AI Coding Agent</p><p><a href="https://www.404media.co/hacker-plants-computer-wiping-commands-in-amazons-ai-coding-agent/">https://www.404media.co/hacker-plants-computer-wiping-commands-in-amazons-ai-coding-agent/</a></p>]]>
      </content:encoded>
      <pubDate>Thu, 07 Aug 2025 12:00:00 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/7e837981/83005399.mp3" length="37892897" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/MpqulyA0tXQSrxHmB69XBEUsnkhuFjea7chQfHatSPM/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81MGRl/MTIzMmYyMTFkNjdk/M2Q4NTZlYmQ3YjY0/YjZkMi5wbmc.jpg"/>
      <itunes:duration>2360</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>In this episode, we'll discuss Palo Alto Networks’ acquisition of Protect AI, the rise of “Shadow AI” in enterprises, alarming AI-driven data leaks, and vibe coding gone wrong. We'll dive into critical issues like AI hallucinations and the growing need for "human in the loop" oversight. We'll wrap up with a discussion of Proton’s Lumo AI chatbot, disappearing medical disclaimers in AI chatbots and data poisoning in Amazon's AI coding agent.</p><p><br></p><p>#AI #Cybersecurity #LLM #AInews #AISecurityOps #BlackHillsInfosec #LLMGuard #ShadowAI #DataLeak #AgenticAI #PrivacyTech #VibeCoding #ProtectAI</p><p><br></p><p><br></p><p><br></p><p>00:00 - Welcome, Intro</p><p>00:58 - Palo Alto Networks Completes Acquisition of Protect AI</p><p><a href="https://www.paloaltonetworks.com/company/press/2025/palo-alto-networks-completes-acquisition-of-protect-ai">https://www.paloaltonetworks.com/company/press/2025/palo-alto-networks-completes-acquisition-of-protect-ai</a></p><p>04:53 - Metomic Finds AI Data Leaks Impact 68% of Organizations, But Only 23% Have Proper AI Data Security Policies </p><p><a href="https://www.metomic.io/resource-centre/metomic-finds-ai-data-leaks-impact-68-of-organizations-but-only-23-have-proper-ai-data-security-policies">https://www.metomic.io/resource-centre/metomic-finds-ai-data-leaks-impact-68-of-organizations-but-only-23-have-proper-ai-data-security-policies</a></p><p>09:46 - S&amp;P 500’s AI adoption may invite data breaches, new research shows</p><p><a href="https://cybernews.com/security/sp-500-companies-ai-security-risks-report/">https://cybernews.com/security/sp-500-companies-ai-security-risks-report/</a></p><p>12:53 - Vibe Coding Fiasco: AI Agent Goes Rogue, Deletes Company's Entire Database</p><p><a 
href="https://www.pcmag.com/news/vibe-coding-fiasco-replite-ai-agent-goes-rogue-deletes-company-database">https://www.pcmag.com/news/vibe-coding-fiasco-replite-ai-agent-goes-rogue-deletes-company-database</a></p><p>18:47 - A major AI training data set contains millions of examples of personal data</p><p><a href="https://www.technologyreview.com/2025/07/18/1120466/a-major-ai-training-data-set-contains-millions-of-examples-of-personal-data/">https://www.technologyreview.com/2025/07/18/1120466/a-major-ai-training-data-set-contains-millions-of-examples-of-personal-data/</a></p><p>23:34 - Introducing Lumo, the AI where every conversation is confidential</p><p><a href="https://proton.me/blog/lumo-ai">https://proton.me/blog/lumo-ai</a></p><p>28:56 - AI companies have stopped warning you that their chatbots aren’t doctors</p><p><a href="https://www.technologyreview.com/2025/07/21/1120522/ai-companies-have-stopped-warning-you-that-their-chatbots-arent-doctors/">https://www.technologyreview.com/2025/07/21/1120522/ai-companies-have-stopped-warning-you-that-their-chatbots-arent-doctors/</a></p><p>36:53 - Hacker Plants Computer 'Wiping' Commands in Amazon's AI Coding Agent</p><p><a href="https://www.404media.co/hacker-plants-computer-wiping-commands-in-amazons-ai-coding-agent/">https://www.404media.co/hacker-plants-computer-wiping-commands-in-amazons-ai-coding-agent/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security,Artificial Intelligence,A.I.,AI,infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/bronwen-aker/" img="https://img.transistorcdn.com/0luJs5_XfVVduE7Lwtr94vu1KDtOHd1UfPgq3R8MwgU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzdh/YmM0ZjU2NGExNGYy/NzM0M2UzODIzMmZh/ZGZlMi5qcGc.jpg">Bronwen Aker</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
    </item>
    <item>
      <title>Questions From The Community podcast – Episode 14</title>
      <itunes:episode>14</itunes:episode>
      <podcast:episode>14</podcast:episode>
      <itunes:title>Questions From The Community podcast – Episode 14</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ce6fda54-d828-4555-a0f3-aade7c47e37b</guid>
      <link>https://share.transistor.fm/s/16d4d1b8</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p>In Episode 14 of the AI Security Ops Podcast, hosts Joff Thyer, Derek Banks, and Brian Fehrman answer questions submitted by viewers. </p><p>The team covers how effective prompt engineering can transform LLMs into workflow accelerators and debates the strengths of different AI tools: when to use Claude, ChatGPT, or NotebookLM.</p><p>They discuss the importance of human oversight when integrating AI into operations, highlighting the "human-in-the-loop" concept, and share ways to explain AI to non-technical audiences.</p><p><br></p><p>#AI #promptengineering #CyberSecurity #Automation #SecurityOps #claudeai #chatgpt </p><p><br></p><p>00:00 - Welcome, Intro</p><p>02:00 - Q - How do you use AI?</p><p>02:55 - The importance of effective prompt engineering</p><p>10:24 - Upcoming workshop - AI Workflow Optimization for Red Teaming</p><p>12:10 - Q - Which AI for which task? Where should I invest my time?</p><p>14:12 - Claude for coding in Python &amp; Golang, but not great at Java</p><p>16:35 - Derek - Initial prompt improvement in ChatGPT, then go to Claude</p><p>17:37 - NotebookLM for students (https://notebooklm.google/)</p><p>20:01 - Invest your time in prompt engineering - applicable to any model</p><p>22:38 - Double check code, understand what it means, do not blindly trust AI output</p><p>25:17 - Q - How to discuss AI with a non-technical audience</p><p>28:08 - Talk to LLMs like a child</p><p>28:54 - AI is not sentient, it's just drawing relevant correlations</p><p>31:48 - Ask them clarifying questions - what are they trying to ask? What's the context?</p><p>33:37 - Q - How can you do "Human in the Loop?"</p><p>35:24 - Don't give your agentic AI too much power - treat it like a junior assistant</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p>In Episode 14 of the AI Security Ops Podcast, hosts Joff Thyer, Derek Banks, and Brian Fehrman answer questions submitted by viewers. </p><p>The team covers how effective prompt engineering can transform LLMs into workflow accelerators and debates the strengths of different AI tools: when to use Claude, ChatGPT, or NotebookLM.</p><p>They discuss the importance of human oversight when integrating AI into operations, highlighting the "human-in-the-loop" concept, and share ways to explain AI to non-technical audiences.</p><p><br></p><p>#AI #promptengineering #CyberSecurity #Automation #SecurityOps #claudeai #chatgpt </p><p><br></p><p>00:00 - Welcome, Intro</p><p>02:00 - Q - How do you use AI?</p><p>02:55 - The importance of effective prompt engineering</p><p>10:24 - Upcoming workshop - AI Workflow Optimization for Red Teaming</p><p>12:10 - Q - Which AI for which task? Where should I invest my time?</p><p>14:12 - Claude for coding in Python &amp; Golang, but not great at Java</p><p>16:35 - Derek - Initial prompt improvement in ChatGPT, then go to Claude</p><p>17:37 - NotebookLM for students (https://notebooklm.google/)</p><p>20:01 - Invest your time in prompt engineering - applicable to any model</p><p>22:38 - Double check code, understand what it means, do not blindly trust AI output</p><p>25:17 - Q - How to discuss AI with a non-technical audience</p><p>28:08 - Talk to LLMs like a child</p><p>28:54 - AI is not sentient, it's just drawing relevant correlations</p><p>31:48 - Ask them clarifying questions - what are they trying to ask? What's the context?</p><p>33:37 - Q - How can you do "Human in the Loop?"</p><p>35:24 - Don't give your agentic AI too much power - treat it like a junior assistant</p>]]>
      </content:encoded>
      <pubDate>Thu, 31 Jul 2025 12:00:00 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/16d4d1b8/c047ca2c.mp3" length="37148462" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/UEQoE8J1loYpDZGUAFa_lKux5VLp6xwil3vZKWOU2Y4/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNGVk/MTE3YmM2NzhjY2Vj/NTFlYmMyNmY1NzEx/ZDdiOC5qcGc.jpg"/>
      <itunes:duration>2313</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p>In Episode 14 of the AI Security Ops Podcast, hosts Joff Thyer, Derek Banks, and Brian Fehrman answer questions submitted by viewers. </p><p>The team covers how effective prompt engineering can transform LLMs into workflow accelerators and debates the strengths of different AI tools: when to use Claude, ChatGPT, or NotebookLM.</p><p>They discuss the importance of human oversight when integrating AI into operations, highlighting the "human-in-the-loop" concept, and share ways to explain AI to non-technical audiences.</p><p><br></p><p>#AI #promptengineering #CyberSecurity #Automation #SecurityOps #claudeai #chatgpt </p><p><br></p><p>00:00 - Welcome, Intro</p><p>02:00 - Q - How do you use AI?</p><p>02:55 - The importance of effective prompt engineering</p><p>10:24 - Upcoming workshop - AI Workflow Optimization for Red Teaming</p><p>12:10 - Q - Which AI for which task? Where should I invest my time?</p><p>14:12 - Claude for coding in Python &amp; Golang, but not great at Java</p><p>16:35 - Derek - Initial prompt improvement in ChatGPT, then go to Claude</p><p>17:37 - NotebookLM for students (https://notebooklm.google/)</p><p>20:01 - Invest your time in prompt engineering - applicable to any model</p><p>22:38 - Double check code, understand what it means, do not blindly trust AI output</p><p>25:17 - Q - How to discuss AI with a non-technical audience</p><p>28:08 - Talk to LLMs like a child</p><p>28:54 - AI is not sentient, it's just drawing relevant correlations</p><p>31:48 - Ask them clarifying questions - what are they trying to ask? What's the context?</p><p>33:37 - Q - How can you do "Human in the Loop?"</p><p>35:24 - Don't give your agentic AI too much power - treat it like a junior assistant</p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security,Artificial Intelligence,A.I.,AI,infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
    </item>
    <item>
      <title>Augmenting Red Teaming with AI – Episode 13</title>
      <itunes:episode>13</itunes:episode>
      <podcast:episode>13</podcast:episode>
      <itunes:title>Augmenting Red Teaming with AI – Episode 13</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b655b2b4-3064-4b96-a5ed-77547b26214d</guid>
      <link>https://share.transistor.fm/s/6e0236da</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>Augmenting Red Teaming with AI | Episode 13</p><p><br></p><p>In Episode 13 of the AI Security Ops Podcast, hosts Joff Thyer, Derek Banks, and Brian Fehrman dive into the exciting world of <strong>Agentic AI in Red Teaming</strong>. </p><p><br></p><p>Discover how augmenting red teams with AI-driven tools helps automate penetration testing, tackle low-hanging fruit vulnerabilities, and provide comprehensive security coverage. The team discusses the importance of prompt engineering, maintaining human oversight, and navigating potential risks, including unintended actions by autonomous AI agents. </p><p><br></p><p>Tune in to explore how AI is reshaping cybersecurity and learn practical strategies to effectively integrate Agentic AI into your security assessments.</p><p><br></p><p>#AI #CyberSecurity #RedTeaming #AgenticAI #Automation #SecurityOps</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>Augmenting Red Teaming with AI | Episode 13</p><p><br></p><p>In Episode 13 of the AI Security Ops Podcast, hosts Joff Thyer, Derek Banks, and Brian Fehrman dive into the exciting world of <strong>Agentic AI in Red Teaming</strong>. </p><p><br></p><p>Discover how augmenting red teams with AI-driven tools helps automate penetration testing, tackle low-hanging fruit vulnerabilities, and provide comprehensive security coverage. The team discusses the importance of prompt engineering, maintaining human oversight, and navigating potential risks, including unintended actions by autonomous AI agents. </p><p><br></p><p>Tune in to explore how AI is reshaping cybersecurity and learn practical strategies to effectively integrate Agentic AI into your security assessments.</p><p><br></p><p>#AI #CyberSecurity #RedTeaming #AgenticAI #Automation #SecurityOps</p>]]>
      </content:encoded>
      <pubDate>Thu, 24 Jul 2025 12:00:00 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/6e0236da/29f7d09c.mp3" length="29304677" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/6Ras_FZuiw5H73jbR0YhscMIw69SoipFPcc8vF4bddY/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lYjk3/NjU3NGRjNGU2MTU3/NTM5OGEzZWJhODNh/YTdlMi5qcGc.jpg"/>
      <itunes:duration>1823</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>Augmenting Red Teaming with AI | Episode 13</p><p><br></p><p>In Episode 13 of the AI Security Ops Podcast, hosts Joff Thyer, Derek Banks, and Brian Fehrman dive into the exciting world of <strong>Agentic AI in Red Teaming</strong>. </p><p><br></p><p>Discover how augmenting red teams with AI-driven tools helps automate penetration testing, tackle low-hanging fruit vulnerabilities, and provide comprehensive security coverage. The team discusses the importance of prompt engineering, maintaining human oversight, and navigating potential risks, including unintended actions by autonomous AI agents. </p><p><br></p><p>Tune in to explore how AI is reshaping cybersecurity and learn practical strategies to effectively integrate Agentic AI into your security assessments.</p><p><br></p><p>#AI #CyberSecurity #RedTeaming #AgenticAI #Automation #SecurityOps</p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security,Artificial Intelligence,A.I.,AI,infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
    </item>
    <item>
      <title>Global AI Laws and the Impact of GDPR – Episode 12</title>
      <itunes:episode>12</itunes:episode>
      <podcast:episode>12</podcast:episode>
      <itunes:title>Global AI Laws and the Impact of GDPR – Episode 12</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">07a2917f-ec7f-4f75-b504-c1e70d737562</guid>
      <link>https://share.transistor.fm/s/0fbd0ade</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>Regulating the Machine: Global AI Laws and the Impact of GDPR | Episode 12</p><p><br></p><p>In Episode 12 the hosts discuss the complexities of regulating artificial intelligence (AI) technology across the globe. </p><p><br></p><p>Highlighting the rapid advancement of AI and its challenges for lawmakers, the episode explores how the GDPR framework in the European Union provides clear guidelines addressing AI-related issues like data privacy, consent, and accountability. The discussion also contrasts the European regulatory-first approach with the U.S.'s innovation-driven stance, considering implications for privacy, intellectual property, and technology advancement. Additionally, the podcast addresses the fragmented nature of AI regulations within U.S. states, emphasizing the need for effective information security practices, audit mechanisms, and risk management frameworks.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>Regulating the Machine: Global AI Laws and the Impact of GDPR | Episode 12</p><p><br></p><p>In Episode 12 the hosts discuss the complexities of regulating artificial intelligence (AI) technology across the globe. </p><p><br></p><p>Highlighting the rapid advancement of AI and its challenges for lawmakers, the episode explores how the GDPR framework in the European Union provides clear guidelines addressing AI-related issues like data privacy, consent, and accountability. The discussion also contrasts the European regulatory-first approach with the U.S.'s innovation-driven stance, considering implications for privacy, intellectual property, and technology advancement. Additionally, the podcast addresses the fragmented nature of AI regulations within U.S. states, emphasizing the need for effective information security practices, audit mechanisms, and risk management frameworks.</p>]]>
      </content:encoded>
      <pubDate>Thu, 17 Jul 2025 12:00:00 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/0fbd0ade/66f77447.mp3" length="25593316" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/GjOim9GcjzukIyMZrpnX6jS-tJvcUzMsyeaS4hmnpfQ/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zN2U5/ZTQ3Y2I0NjNkMjEz/YTFlYjliMmUwNzZm/YThkNi5qcGc.jpg"/>
      <itunes:duration>1591</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>Regulating the Machine: Global AI Laws and the Impact of GDPR | Episode 12</p><p><br></p><p>In Episode 12 the hosts discuss the complexities of regulating artificial intelligence (AI) technology across the globe. </p><p><br></p><p>Highlighting the rapid advancement of AI and its challenges for lawmakers, the episode explores how the GDPR framework in the European Union provides clear guidelines addressing AI-related issues like data privacy, consent, and accountability. The discussion also contrasts the European regulatory-first approach with the U.S.'s innovation-driven stance, considering implications for privacy, intellectual property, and technology advancement. Additionally, the podcast addresses the fragmented nature of AI regulations within U.S. states, emphasizing the need for effective information security practices, audit mechanisms, and risk management frameworks.</p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security,Artificial Intelligence,A.I.,AI,infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/bronwen-aker/" img="https://img.transistorcdn.com/0luJs5_XfVVduE7Lwtr94vu1KDtOHd1UfPgq3R8MwgU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzdh/YmM0ZjU2NGExNGYy/NzM0M2UzODIzMmZh/ZGZlMi5qcGc.jpg">Bronwen Aker</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
    </item>
    <item>
      <title>A.I. News of the Month – Episode 11</title>
      <itunes:episode>11</itunes:episode>
      <podcast:episode>11</podcast:episode>
      <itunes:title>A.I. News of the Month – Episode 11</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">0c2348ae-1b01-4616-b2f1-e8906ba240c8</guid>
      <link>https://share.transistor.fm/s/5fdf4130</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a> </p><p><br></p><p>In this episode of AI Security Ops, we explore major AI news, including the Scale AI data leak impacting giants like Google and Meta, a novel jailbreak attack technique dubbed the Echo Chamber, and Anthropic's Claude-Gov, tailored for U.S. national security. We discuss ethical AI management solutions, the innovative use of AI to detect shoplifting via behavioral gestures, IBM's WatsonX platform, and critical insights into AI red teaming and SQL injection vulnerabilities affecting AI applications. </p><p><br></p><p>Join us as we uncover how traditional security practices remain crucial in today's AI-driven landscape.</p><p><br></p><p>News Links Referenced:</p><p>Scale AI exposed sensitive data about clients like Meta and xAI in public Google Docs, BI finds</p><p><a href="https://www.businessinsider.com/scale-ai-public-google-docs-security-2025-6">https://www.businessinsider.com/scale-ai-public-google-docs-security-2025-6</a></p><p><br></p><p>AI Security Turning Point: Echo Chamber Jailbreak Exposes Dangerous Blind Spot</p><p><a href="https://www.techrepublic.com/article/news-echo-chamber-jailbreak-manipulates-llms/">https://www.techrepublic.com/article/news-echo-chamber-jailbreak-manipulates-llms/</a></p><p><br></p><p>Anthropic's "Claude Gov" for National Security</p><p><a href="https://techcrunch.com/2025/06/05/anthropic-unveils-custom-ai-models-for-u-s-national-security-customers/">https://techcrunch.com/2025/06/05/anthropic-unveils-custom-ai-models-for-u-s-national-security-customers/</a></p><p><br></p><p>Veesion - AI That Catches Shoplifters by Their Gestures</p><p><a 
href="https://www.businessinsider.com/veesion-ai-tech-startup-shoplifting-prevention-alerts-security-suspicious-gestures-2025-6">https://www.businessinsider.com/veesion-ai-tech-startup-shoplifting-prevention-alerts-security-suspicious-gestures-2025-6</a></p><p><br></p><p>IBM's New Platform for Managing "Agentic AI"</p><p><a href="https://thejournal.com/articles/2025/06/24/ibm-launches-agentic-ai-governance-and-security-platform.aspx">https://thejournal.com/articles/2025/06/24/ibm-launches-agentic-ai-governance-and-security-platform.aspx</a></p><p><br></p><p>How a Classic Bug Can Poison Modern AI Agents</p><p><a href="https://www.trendmicro.com/en_us/research/25/f/why-a-classic-mcp-server-vulnerability-can-undermine-your-entire-ai-agent.html">https://www.trendmicro.com/en_us/research/25/f/why-a-classic-mcp-server-vulnerability-can-undermine-your-entire-ai-agent.html</a></p><p><br></p><p>The "False Sense of Security" in AI Red Teaming</p><p><a href="https://www.forbes.com/councils/forbestechcouncil/2025/06/16/the-false-sense-of-security-in-ai-red-teaming/">https://www.forbes.com/councils/forbestechcouncil/2025/06/16/the-false-sense-of-security-in-ai-red-teaming/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a> </p><p><br></p><p>In this episode of AI Security Ops, we explore major AI news, including the Scale AI data leak impacting giants like Google and Meta, a novel jailbreak attack technique dubbed the Echo Chamber, and Anthropic's Claude-Gov, tailored for U.S. national security. We discuss ethical AI management solutions, the innovative use of AI to detect shoplifting via behavioral gestures, IBM's WatsonX platform, and critical insights into AI red teaming and SQL injection vulnerabilities affecting AI applications. </p><p><br></p><p>Join us as we uncover how traditional security practices remain crucial in today's AI-driven landscape.</p><p><br></p><p>News Links Referenced:</p><p>Scale AI exposed sensitive data about clients like Meta and xAI in public Google Docs, BI finds</p><p><a href="https://www.businessinsider.com/scale-ai-public-google-docs-security-2025-6">https://www.businessinsider.com/scale-ai-public-google-docs-security-2025-6</a></p><p><br></p><p>AI Security Turning Point: Echo Chamber Jailbreak Exposes Dangerous Blind Spot</p><p><a href="https://www.techrepublic.com/article/news-echo-chamber-jailbreak-manipulates-llms/">https://www.techrepublic.com/article/news-echo-chamber-jailbreak-manipulates-llms/</a></p><p><br></p><p>Anthropic's "Claude Gov" for National Security</p><p><a href="https://techcrunch.com/2025/06/05/anthropic-unveils-custom-ai-models-for-u-s-national-security-customers/">https://techcrunch.com/2025/06/05/anthropic-unveils-custom-ai-models-for-u-s-national-security-customers/</a></p><p><br></p><p>Veesion - AI That Catches Shoplifters by Their Gestures</p><p><a 
href="https://www.businessinsider.com/veesion-ai-tech-startup-shoplifting-prevention-alerts-security-suspicious-gestures-2025-6">https://www.businessinsider.com/veesion-ai-tech-startup-shoplifting-prevention-alerts-security-suspicious-gestures-2025-6</a></p><p><br></p><p>IBM's New Platform for Managing "Agentic AI"</p><p><a href="https://thejournal.com/articles/2025/06/24/ibm-launches-agentic-ai-governance-and-security-platform.aspx">https://thejournal.com/articles/2025/06/24/ibm-launches-agentic-ai-governance-and-security-platform.aspx</a></p><p><br></p><p>How a Classic Bug Can Poison Modern AI Agents</p><p><a href="https://www.trendmicro.com/en_us/research/25/f/why-a-classic-mcp-server-vulnerability-can-undermine-your-entire-ai-agent.html">https://www.trendmicro.com/en_us/research/25/f/why-a-classic-mcp-server-vulnerability-can-undermine-your-entire-ai-agent.html</a></p><p><br></p><p>The "False Sense of Security" in AI Red Teaming</p><p><a href="https://www.forbes.com/councils/forbestechcouncil/2025/06/16/the-false-sense-of-security-in-ai-red-teaming/">https://www.forbes.com/councils/forbestechcouncil/2025/06/16/the-false-sense-of-security-in-ai-red-teaming/</a></p>]]>
      </content:encoded>
      <pubDate>Thu, 10 Jul 2025 05:00:00 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/5fdf4130/4248b508.mp3" length="34165328" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/wkhyqNB8kOK3TqzqmNyxXZcLL4tHBOlrF6hHDz8IaIo/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yMmJm/MThiOTkyYmM0MTcx/MjJkMTc0OTY3ODAy/NWRhMy5qcGc.jpg"/>
      <itunes:duration>2128</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a> </p><p><br></p><p>In this episode of AI Security Ops, we explore major AI news, including the Scale AI data leak impacting giants like Google and Meta, a novel jailbreak attack technique dubbed the Echo Chamber, and Anthropic's Claude-Gov, tailored for U.S. national security. We discuss ethical AI management solutions, the innovative use of AI to detect shoplifting via behavioral gestures, IBM's WatsonX platform, and critical insights into AI red teaming and SQL injection vulnerabilities affecting AI applications. </p><p><br></p><p>Join us as we uncover how traditional security practices remain crucial in today's AI-driven landscape.</p><p><br></p><p>News Links Referenced:</p><p>Scale AI exposed sensitive data about clients like Meta and xAI in public Google Docs, BI finds</p><p><a href="https://www.businessinsider.com/scale-ai-public-google-docs-security-2025-6">https://www.businessinsider.com/scale-ai-public-google-docs-security-2025-6</a></p><p><br></p><p>AI Security Turning Point: Echo Chamber Jailbreak Exposes Dangerous Blind Spot</p><p><a href="https://www.techrepublic.com/article/news-echo-chamber-jailbreak-manipulates-llms/">https://www.techrepublic.com/article/news-echo-chamber-jailbreak-manipulates-llms/</a></p><p><br></p><p>Anthropic's "Claude Gov" for National Security</p><p><a href="https://techcrunch.com/2025/06/05/anthropic-unveils-custom-ai-models-for-u-s-national-security-customers/">https://techcrunch.com/2025/06/05/anthropic-unveils-custom-ai-models-for-u-s-national-security-customers/</a></p><p><br></p><p>Veesion - AI That Catches Shoplifters by Their Gestures</p><p><a 
href="https://www.businessinsider.com/veesion-ai-tech-startup-shoplifting-prevention-alerts-security-suspicious-gestures-2025-6">https://www.businessinsider.com/veesion-ai-tech-startup-shoplifting-prevention-alerts-security-suspicious-gestures-2025-6</a></p><p><br></p><p>IBM's New Platform for Managing "Agentic AI"</p><p><a href="https://thejournal.com/articles/2025/06/24/ibm-launches-agentic-ai-governance-and-security-platform.aspx">https://thejournal.com/articles/2025/06/24/ibm-launches-agentic-ai-governance-and-security-platform.aspx</a></p><p><br></p><p>How a Classic Bug Can Poison Modern AI Agents</p><p><a href="https://www.trendmicro.com/en_us/research/25/f/why-a-classic-mcp-server-vulnerability-can-undermine-your-entire-ai-agent.html">https://www.trendmicro.com/en_us/research/25/f/why-a-classic-mcp-server-vulnerability-can-undermine-your-entire-ai-agent.html</a></p><p><br></p><p>The "False Sense of Security" in AI Red Teaming</p><p><a href="https://www.forbes.com/councils/forbestechcouncil/2025/06/16/the-false-sense-of-security-in-ai-red-teaming/">https://www.forbes.com/councils/forbestechcouncil/2025/06/16/the-false-sense-of-security-in-ai-red-teaming/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
    </item>
    <item>
      <title>Agentic AI Threats, Challenges, and Defenses | Episode 10</title>
      <itunes:episode>10</itunes:episode>
      <podcast:episode>10</podcast:episode>
      <itunes:title>Agentic AI Threats, Challenges, and Defenses | Episode 10</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">39f0b2b8-80d2-45ac-8d9a-3e26c8a8cb22</guid>
      <link>https://share.transistor.fm/s/7469e88d</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – <br><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p>Explore the rising security risks and challenges associated with agentic AI in Episode 10 of AI Security Ops. </p><p>Join cybersecurity experts Joff Thyer, Bronwen Aker, Derek Banks, and Brian Fehrman as they unpack the complexities of AI gaining autonomy and agency. This episode covers key topics such as defining agentic AI, real-world vulnerabilities like prompt injection, potential implications for cybersecurity, and effective mitigation strategies like implementing guardrails and maintaining granular logging. </p><p>Valuable information for cybersecurity professionals, AI developers, and anyone interested in the future of artificial intelligence security.</p><p>#AgenticAI #AISecurity #Cybersecurity #LLMs #PromptInjection #RedTeaming #AIrisks<br>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – <br><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p>Explore the rising security risks and challenges associated with agentic AI in Episode 10 of AI Security Ops. </p><p>Join cybersecurity experts Joff Thyer, Bronwen Aker, Derek Banks, and Brian Fehrman as they unpack the complexities of AI gaining autonomy and agency. This episode covers key topics such as defining agentic AI, real-world vulnerabilities like prompt injection, potential implications for cybersecurity, and effective mitigation strategies like implementing guardrails and maintaining granular logging. </p><p>Valuable information for cybersecurity professionals, AI developers, and anyone interested in the future of artificial intelligence security.</p><p>#AgenticAI #AISecurity #Cybersecurity #LLMs #PromptInjection #RedTeaming #AIrisks<br>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </content:encoded>
      <pubDate>Thu, 03 Jul 2025 12:00:00 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/7469e88d/e23199ac.mp3" length="35804448" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/rAlwBDZo9gF9I-0z0TyIxeB9NgMzKMFszTO-K4JztAk/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNTVm/OWIwNDcxMmEyOGQy/MmVkYzM2MGZiN2E1/ZGVmNi5wbmc.jpg"/>
      <itunes:duration>2230</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – <br><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p>Explore the rising security risks and challenges associated with agentic AI in Episode 10 of AI Security Ops. </p><p>Join cybersecurity experts Joff Thyer, Bronwen Aker, Derek Banks, and Brian Fehrman as they unpack the complexities of AI gaining autonomy and agency. This episode covers key topics such as defining agentic AI, real-world vulnerabilities like prompt injection, potential implications for cybersecurity, and effective mitigation strategies like implementing guardrails and maintaining granular logging. </p><p>Valuable information for cybersecurity professionals, AI developers, and anyone interested in the future of artificial intelligence security.</p><p>#AgenticAI #AISecurity #Cybersecurity #LLMs #PromptInjection #RedTeaming #AIrisks<br>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/bronwen-aker/" img="https://img.transistorcdn.com/0luJs5_XfVVduE7Lwtr94vu1KDtOHd1UfPgq3R8MwgU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzdh/YmM0ZjU2NGExNGYy/NzM0M2UzODIzMmZh/ZGZlMi5qcGc.jpg">Bronwen Aker</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
    </item>
    <item>
      <title>AI Model Usage and Comparisons – Episode 9</title>
      <itunes:episode>9</itunes:episode>
      <podcast:episode>9</podcast:episode>
      <itunes:title>AI Model Usage and Comparisons – Episode 9</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7e21fd06-61e7-46ff-88e0-523f61102aa7</guid>
      <link>https://share.transistor.fm/s/330f0127</link>
      <description>
        <![CDATA[<p>Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>Episode 9 of AI Security Ops! AI Model Usage and Comparisons</p><p><br></p><p>In this exciting episode, we explore practical uses and comparisons of popular AI models including OpenAI, Claude, Gemini, and Copilot. </p><p><br></p><p>Join our expert panelists as they discuss personal workflows, share experiences with AI-driven coding and text processing, and examine strengths and weaknesses of these powerful technologies. Discover insights into the exponential growth of AI capabilities, the emerging specialization of models, and practical advice for effectively integrating AI tools into your cybersecurity practices. </p><p><br></p><p>Tune in to stay ahead in the rapidly evolving landscape of AI and cybersecurity.</p><p><br></p><p>#AISecurityOps #AIModels #Cybersecurity #OpenAI #ClaudeAI #GeminiAI #Copilot #AITools #ArtificialIntelligence #TechTrends #AIInsights #CyberSec</p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>Episode 9 of AI Security Ops! AI Model Usage and Comparisons</p><p><br></p><p>In this exciting episode, we explore practical uses and comparisons of popular AI models including OpenAI, Claude, Gemini, and Copilot. </p><p><br></p><p>Join our expert panelists as they discuss personal workflows, share experiences with AI-driven coding and text processing, and examine strengths and weaknesses of these powerful technologies. Discover insights into the exponential growth of AI capabilities, the emerging specialization of models, and practical advice for effectively integrating AI tools into your cybersecurity practices. </p><p><br></p><p>Tune in to stay ahead in the rapidly evolving landscape of AI and cybersecurity.</p><p><br></p><p>#AISecurityOps #AIModels #Cybersecurity #OpenAI #ClaudeAI #GeminiAI #Copilot #AITools #ArtificialIntelligence #TechTrends #AIInsights #CyberSec</p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </content:encoded>
      <pubDate>Thu, 26 Jun 2025 05:00:00 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/330f0127/0c561dff.mp3" length="14034938" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/csRK0-oebbc4KFYjT_mOisSPI2AiJ_7W-YmmQwXAbhE/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYzRk/YzQ3MzViMzc2MjA5/ODZkMGU1M2VkMWQ4/MDVmZC5qcGc.jpg"/>
      <itunes:duration>852</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p><br></p><p>Episode 9 of AI Security Ops! AI Model Usage and Comparisons</p><p><br></p><p>In this exciting episode, we explore practical uses and comparisons of popular AI models including OpenAI, Claude, Gemini, and Copilot. </p><p><br></p><p>Join our expert panelists as they discuss personal workflows, share experiences with AI-driven coding and text processing, and examine strengths and weaknesses of these powerful technologies. Discover insights into the exponential growth of AI capabilities, the emerging specialization of models, and practical advice for effectively integrating AI tools into your cybersecurity practices. </p><p><br></p><p>Tune in to stay ahead in the rapidly evolving landscape of AI and cybersecurity.</p><p><br></p><p>#AISecurityOps #AIModels #Cybersecurity #OpenAI #ClaudeAI #GeminiAI #Copilot #AITools #ArtificialIntelligence #TechTrends #AIInsights #CyberSec</p><p>----------------------------------------------------------------------------------------------</p><p>Joff Thyer - <a href="https://blackhillsinfosec.com/team/joff-thyer/">https://blackhillsinfosec.com/team/joff-thyer/</a></p><p>Derek Banks - <a href="https://www.blackhillsinfosec.com/team/derek-banks/">https://www.blackhillsinfosec.com/team/derek-banks/</a></p><p>Brian Fehrman - <a href="https://www.blackhillsinfosec.com/team/brian-fehrman/">https://www.blackhillsinfosec.com/team/brian-fehrman/</a></p><p>Bronwen Aker - <a href="http://blackhillsinfosec.com/team/bronwen-aker/">http://blackhillsinfosec.com/team/bronwen-aker/</a></p><p>Ben Bowman - <a href="https://www.blackhillsinfosec.com/team/ben-bowman/">https://www.blackhillsinfosec.com/team/ben-bowman/</a></p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/bronwen-aker/" img="https://img.transistorcdn.com/0luJs5_XfVVduE7Lwtr94vu1KDtOHd1UfPgq3R8MwgU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzdh/YmM0ZjU2NGExNGYy/NzM0M2UzODIzMmZh/ZGZlMi5qcGc.jpg">Bronwen Aker</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
    </item>
    <item>
      <title>AEO vs SEO | Episode 8</title>
      <itunes:episode>8</itunes:episode>
      <podcast:episode>8</podcast:episode>
      <itunes:title>AEO vs SEO | Episode 8</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">665dc941-b298-4868-9023-0c1382e4178f</guid>
      <link>https://share.transistor.fm/s/9e55e4f2</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com </a></p><p><br></p><p>AEO vs SEO | Episode 8</p><p><br></p><p>Explore how Artificial Intelligence (AI) is revolutionizing online search in this insightful episode of the AI Security Ops Podcast. </p><p><br></p><p>Learn about Search Engine Optimization (SEO) versus Answer Engine Optimization (AEO), and understand the shift from link-based results to rich, AI-driven answers. Discover the security challenges and ethical implications surrounding the use of AI in search engines, including risks like misinformation, deepfakes, and data privacy concerns. Gain practical insights on how critical thinking and verification are becoming essential skills in navigating this new era of AI-enhanced search.</p><p><br></p><p>#SEO #AEO #ArtificialIntelligence #Cybersecurity #AI #InformationSecurity #SearchEngines #AIOptimization #OnlineSecurity #DigitalPrivacy</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com </a></p><p><br></p><p>AEO vs SEO | Episode 8</p><p><br></p><p>Explore how Artificial Intelligence (AI) is revolutionizing online search in this insightful episode of the AI Security Ops Podcast. </p><p><br></p><p>Learn about Search Engine Optimization (SEO) versus Answer Engine Optimization (AEO), and understand the shift from link-based results to rich, AI-driven answers. Discover the security challenges and ethical implications surrounding the use of AI in search engines, including risks like misinformation, deepfakes, and data privacy concerns. Gain practical insights on how critical thinking and verification are becoming essential skills in navigating this new era of AI-enhanced search.</p><p><br></p><p>#SEO #AEO #ArtificialIntelligence #Cybersecurity #AI #InformationSecurity #SearchEngines #AIOptimization #OnlineSecurity #DigitalPrivacy</p>]]>
      </content:encoded>
      <pubDate>Thu, 19 Jun 2025 12:42:10 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/9e55e4f2/b43c203f.mp3" length="29265513" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/rDBnlrBWNjfBue3n5aCs1Vk1K6qQudcA_ZMnJrg3VH8/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81OGZk/YmJkNmMzOGJjMTky/ZDQzMTgzNTU0ZGQw/NDRlMS5qcGc.jpg"/>
      <itunes:duration>1821</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com </a></p><p><br></p><p>AEO vs SEO | Episode 8</p><p><br></p><p>Explore how Artificial Intelligence (AI) is revolutionizing online search in this insightful episode of the AI Security Ops Podcast. </p><p><br></p><p>Learn about Search Engine Optimization (SEO) versus Answer Engine Optimization (AEO), and understand the shift from link-based results to rich, AI-driven answers. Discover the security challenges and ethical implications surrounding the use of AI in search engines, including risks like misinformation, deepfakes, and data privacy concerns. Gain practical insights on how critical thinking and verification are becoming essential skills in navigating this new era of AI-enhanced search.</p><p><br></p><p>#SEO #AEO #ArtificialIntelligence #Cybersecurity #AI #InformationSecurity #SearchEngines #AIOptimization #OnlineSecurity #DigitalPrivacy</p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/bronwen-aker/" img="https://img.transistorcdn.com/0luJs5_XfVVduE7Lwtr94vu1KDtOHd1UfPgq3R8MwgU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzdh/YmM0ZjU2NGExNGYy/NzM0M2UzODIzMmZh/ZGZlMi5qcGc.jpg">Bronwen Aker</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
    </item>
    <item>
      <title>R.A.G. [Retrieval Augmented Generation] – Episode 7</title>
      <itunes:episode>7</itunes:episode>
      <podcast:episode>7</podcast:episode>
      <itunes:title>R.A.G. [Retrieval Augmented Generation] – Episode 7</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">70c14271-b934-404e-86a0-ccdfc64d2340</guid>
      <link>https://share.transistor.fm/s/f3cfc36e</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – <br><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p>R.A.G. (Retrieval-Augmented Generation) is a powerful technique for enhancing Large Language Model (LLM) outputs with real-time, external data. RAG bridges the gap between static model knowledge and dynamic, context-aware responses.</p><p><br></p><p>Join hosts Brian Fehrman, Derek Banks, Bronwen Aker, and Ben Bowman as they break down how RAG improves the reliability and relevance of generative AI systems. You’ll learn why context retrieval matters, what problems RAG solves, and where it fits into modern AI security practices.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – <br><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p>R.A.G. (Retrieval-Augmented Generation) is a powerful technique for enhancing Large Language Model (LLM) outputs with real-time, external data. RAG bridges the gap between static model knowledge and dynamic, context-aware responses.</p><p><br></p><p>Join hosts Brian Fehrman, Derek Banks, Bronwen Aker, and Ben Bowman as they break down how RAG improves the reliability and relevance of generative AI systems. You’ll learn why context retrieval matters, what problems RAG solves, and where it fits into modern AI security practices.</p>]]>
      </content:encoded>
      <pubDate>Thu, 12 Jun 2025 05:00:00 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/f3cfc36e/b55b801e.mp3" length="25965938" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/5Ims0yjsVb3MGh-xMf0QI5JsOvfmjkIoQ68DnHQqr6w/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kMmU5/ZDIzZDBjMDg5MzIw/OWU2ZTJkYjk1ZmUz/Nzg0Yy5qcGc.jpg"/>
      <itunes:duration>1615</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – <br><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p>R.A.G. (Retrieval-Augmented Generation) is a powerful technique for enhancing Large Language Model (LLM) outputs with real-time, external data. RAG bridges the gap between static model knowledge and dynamic, context-aware responses.</p><p><br></p><p>Join hosts Brian Fehrman, Derek Banks, Bronwen Aker, and Ben Bowman as they break down how RAG improves the reliability and relevance of generative AI systems. You’ll learn why context retrieval matters, what problems RAG solves, and where it fits into modern AI security practices.</p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/ben-bowman/" img="https://img.transistorcdn.com/6LggyMnvVa4jykOYcpApxR0fy-Yq37ZgfswwJo1Jakc/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zYjk2/ZDg2NmMxN2I2YzU5/NmZlMGVkMzcwMDk0/NThkMy5qcGc.jpg">Ben Bowman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/bronwen-aker/" img="https://img.transistorcdn.com/0luJs5_XfVVduE7Lwtr94vu1KDtOHd1UfPgq3R8MwgU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzdh/YmM0ZjU2NGExNGYy/NzM0M2UzODIzMmZh/ZGZlMi5qcGc.jpg">Bronwen Aker</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
    </item>
    <item>
      <title>LLM Guardrails | Episode 6</title>
      <itunes:episode>6</itunes:episode>
      <podcast:episode>6</podcast:episode>
      <itunes:title>LLM Guardrails | Episode 6</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">667c77f8-d4fd-42e6-9c5b-27531b272044</guid>
      <link>https://share.transistor.fm/s/f3d698e7</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com </a></p><p><br></p><p>Episode 6: LLM Guardrails</p><p><br></p><p>We dive deep into the evolving world of LLM guardrails. </p><p><br></p><p>We explore why guardrails are essential for securing large language models, the challenges of implementing them effectively, and how current approaches often resemble the patchwork fixes of early InfoSec days. From input/output filtering and prompt injection defenses to the emerging trend of LLMs guarding other LLMs, we analyze real-world assessments, highlight security pitfalls, and discuss the need for layered, deterministic defenses.</p><p><br></p><p>Plus, Brian teases the next episode, utilizing Prompt Guard within open web pipelines.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com </a></p><p><br></p><p>Episode 6: LLM Guardrails</p><p><br></p><p>We dive deep into the evolving world of LLM guardrails. </p><p><br></p><p>We explore why guardrails are essential for securing large language models, the challenges of implementing them effectively, and how current approaches often resemble the patchwork fixes of early InfoSec days. From input/output filtering and prompt injection defenses to the emerging trend of LLMs guarding other LLMs, we analyze real-world assessments, highlight security pitfalls, and discuss the need for layered, deterministic defenses.</p><p><br></p><p>Plus, Brian teases the next episode, utilizing Prompt Guard within open web pipelines.</p>]]>
      </content:encoded>
      <pubDate>Thu, 05 Jun 2025 17:02:58 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/f3d698e7/ebba4fe1.mp3" length="21584589" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/KQEoRTZU3eq7Nm89WTw0oFx82KcwOthqS58vHu8FSYM/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kMDRh/NjY3NGY3ZjEzNTMy/OWJiMDlmZjBkNWUx/N2IxMC5qcGc.jpg"/>
      <itunes:duration>1341</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – </p><p><a href="https://poweredbybhis.com">https://poweredbybhis.com </a></p><p><br></p><p>Episode 6: LLM Guardrails</p><p><br></p><p>We dive deep into the evolving world of LLM guardrails. </p><p><br></p><p>We explore why guardrails are essential for securing large language models, the challenges of implementing them effectively, and how current approaches often resemble the patchwork fixes of early InfoSec days. From input/output filtering and prompt injection defenses to the emerging trend of LLMs guarding other LLMs, we analyze real-world assessments, highlight security pitfalls, and discuss the need for layered, deterministic defenses.</p><p><br></p><p>Plus, Brian teases the next episode, utilizing Prompt Guard within open web pipelines.</p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/ben-bowman/" img="https://img.transistorcdn.com/6LggyMnvVa4jykOYcpApxR0fy-Yq37ZgfswwJo1Jakc/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zYjk2/ZDg2NmMxN2I2YzU5/NmZlMGVkMzcwMDk0/NThkMy5qcGc.jpg">Ben Bowman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
    </item>
    <item>
      <title>Harmful Content | Episode 5</title>
      <itunes:episode>5</itunes:episode>
      <podcast:episode>5</podcast:episode>
      <itunes:title>Harmful Content | Episode 5</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7510fe6a-cda7-49a6-8a23-4a98048826e7</guid>
      <link>https://share.transistor.fm/s/94ac0c7a</link>
      <description>
        <![CDATA[<p>ChatGPT-created summary, because of course we're gonna use A.I. on our A.I. podcast:</p><p><br></p><p>In this episode of the AI Security Ops podcast, the panel discusses the challenges and risks of harmful content generated by AI, particularly focusing on generative models like GPT. They explore how powerful prompt engineering can lead to the creation of misleading or dangerous outputs, and highlight the importance of detection methods, ethical oversight, and regulatory standards. The conversation emphasizes the need for responsible use of AI, stressing that while these models are incredibly capable, safeguards and human accountability are essential to prevent misuse.</p><p><br></p><p>Is this summary misleading?</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>ChatGPT-created summary, because of course we're gonna use A.I. on our A.I. podcast:</p><p><br></p><p>In this episode of the AI Security Ops podcast, the panel discusses the challenges and risks of harmful content generated by AI, particularly focusing on generative models like GPT. They explore how powerful prompt engineering can lead to the creation of misleading or dangerous outputs, and highlight the importance of detection methods, ethical oversight, and regulatory standards. The conversation emphasizes the need for responsible use of AI, stressing that while these models are incredibly capable, safeguards and human accountability are essential to prevent misuse.</p><p><br></p><p>Is this summary misleading?</p>]]>
      </content:encoded>
      <pubDate>Thu, 22 May 2025 15:30:24 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/94ac0c7a/4cccb3a8.mp3" length="35737415" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/1uSdf4-czFJy4NO5Gsc8d2vlTm8eUmtGAVQesGy07aQ/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8wYTFj/ODgwZjY5ZjFhY2Fk/NDRkOTEzNjhkYjUy/NjUyZi5wbmc.jpg"/>
      <itunes:duration>2209</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>ChatGPT-created summary, because of course we're gonna use A.I. on our A.I. podcast:</p><p><br></p><p>In this episode of the AI Security Ops podcast, the panel discusses the challenges and risks of harmful content generated by AI, particularly focusing on generative models like GPT. They explore how powerful prompt engineering can lead to the creation of misleading or dangerous outputs, and highlight the importance of detection methods, ethical oversight, and regulatory standards. The conversation emphasizes the need for responsible use of AI, stressing that while these models are incredibly capable, safeguards and human accountability are essential to prevent misuse.</p><p><br></p><p>Is this summary misleading?</p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/ben-bowman/" img="https://img.transistorcdn.com/6LggyMnvVa4jykOYcpApxR0fy-Yq37ZgfswwJo1Jakc/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zYjk2/ZDg2NmMxN2I2YzU5/NmZlMGVkMzcwMDk0/NThkMy5qcGc.jpg">Ben Bowman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
    </item>
    <item>
      <title>A.I. News of the Month</title>
      <itunes:episode>4</itunes:episode>
      <podcast:episode>4</podcast:episode>
      <itunes:title>A.I. News of the Month</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c7c57784-c779-4ca2-9e30-03e5b8f57282</guid>
      <link>https://share.transistor.fm/s/e58337f8</link>
      <description>
        <![CDATA[<p>In this episode, we dive into how AI is revolutionizing cybersecurity—especially in spam detection using classic machine learning models like logistic regression and support vector machines. Join us as we explore real-world applications, teaching approaches in AI courses, and why your spam folder is smarter than ever.</p><p><br></p><p>Topics:</p><ul><li>AI in email spam detection</li><li>Teaching machine learning through real datasets</li><li>NLP's role in cybersecurity</li><li>Behind the scenes on building practical AI models</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, we dive into how AI is revolutionizing cybersecurity—especially in spam detection using classic machine learning models like logistic regression and support vector machines. Join us as we explore real-world applications, teaching approaches in AI courses, and why your spam folder is smarter than ever.</p><p><br></p><p>Topics:</p><ul><li>AI in email spam detection</li><li>Teaching machine learning through real datasets</li><li>NLP's role in cybersecurity</li><li>Behind the scenes on building practical AI models</li></ul>]]>
      </content:encoded>
      <pubDate>Thu, 15 May 2025 16:39:33 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/e58337f8/eec5f3b4.mp3" length="32349604" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/ixLJsVlaJIHRz-ZAqsn2Lo2q__uOervSuoOKmVY8h7Q/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iY2Q2/ODRmZmQyOWQ0ZGIy/MGU0ZmM4MTc5YmRk/NWViNy5wbmc.jpg"/>
      <itunes:duration>1990</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, we dive into how AI is revolutionizing cybersecurity—especially in spam detection using classic machine learning models like logistic regression and support vector machines. Join us as we explore real-world applications, teaching approaches in AI courses, and why your spam folder is smarter than ever.</p><p><br></p><p>Topics:</p><ul><li>AI in email spam detection</li><li>Teaching machine learning through real datasets</li><li>NLP's role in cybersecurity</li><li>Behind the scenes on building practical AI models</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/ben-bowman/" img="https://img.transistorcdn.com/6LggyMnvVa4jykOYcpApxR0fy-Yq37ZgfswwJo1Jakc/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zYjk2/ZDg2NmMxN2I2YzU5/NmZlMGVkMzcwMDk0/NThkMy5qcGc.jpg">Ben Bowman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/bronwen-aker/" img="https://img.transistorcdn.com/0luJs5_XfVVduE7Lwtr94vu1KDtOHd1UfPgq3R8MwgU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzdh/YmM0ZjU2NGExNGYy/NzM0M2UzODIzMmZh/ZGZlMi5qcGc.jpg">Bronwen Aker</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
    </item>
    <item>
      <title>AI Deepfakes</title>
      <itunes:episode>3</itunes:episode>
      <podcast:episode>3</podcast:episode>
      <itunes:title>AI Deepfakes</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">fde6de09-c3f1-448c-b6d0-977999c39b4e</guid>
      <link>https://share.transistor.fm/s/3e8c9704</link>
      <description>
        <![CDATA[<p>Welcome to another thought-provoking episode of <em>AI Security Ops</em>, hosted by Joff Thyer alongside Brian Fehrman and Derek Banks. In this episode, we dive deep into one of the most alarming developments in artificial intelligence—<strong>AI-generated deepfakes</strong>.</p><p>🔍 <strong>What We Cover:</strong></p><ul><li>What deepfakes are and how they’re created using generative adversarial networks (GANs) and diffusion models</li><li>Real-world deepfake incidents, including multimillion-dollar fraud</li><li>The growing accessibility of deepfake tools and the implications for social engineering</li><li>Detection and mitigation strategies: How to spot a deepfake and protect yourself or your organization</li><li>Ethical and legal challenges in legislating deepfake technology</li><li>Best practices for experimenting responsibly with deepfake tools</li></ul><p>⚠️ With AI making deepfakes more realistic and accessible than ever, this isn’t just a tech curiosity—it’s a major infosec concern. Whether you're a cybersecurity pro, a tech enthusiast, or just curious about AI's darker side, this episode is a must-watch.</p><p>💬 Don’t forget to LIKE, COMMENT, and SUBSCRIBE for more insights on AI and cybersecurity!</p><p>#AI #Deepfakes #CyberSecurity #InfoSec #SocialEngineering #GenerativeAI #EthicalAI #AITrends #Podcast #AIForGood #BlackHillsInfoSec</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Welcome to another thought-provoking episode of <em>AI Security Ops</em>, hosted by Joff Thyer alongside Brian Fehrman and Derek Banks. In this episode, we dive deep into one of the most alarming developments in artificial intelligence—<strong>AI-generated deepfakes</strong>.</p><p>🔍 <strong>What We Cover:</strong></p><ul><li>What deepfakes are and how they’re created using generative adversarial networks (GANs) and diffusion models</li><li>Real-world deepfake incidents, including multimillion-dollar fraud</li><li>The growing accessibility of deepfake tools and the implications for social engineering</li><li>Detection and mitigation strategies: How to spot a deepfake and protect yourself or your organization</li><li>Ethical and legal challenges in legislating deepfake technology</li><li>Best practices for experimenting responsibly with deepfake tools</li></ul><p>⚠️ With AI making deepfakes more realistic and accessible than ever, this isn’t just a tech curiosity—it’s a major infosec concern. Whether you're a cybersecurity pro, a tech enthusiast, or just curious about AI's darker side, this episode is a must-watch.</p><p>💬 Don’t forget to LIKE, COMMENT, and SUBSCRIBE for more insights on AI and cybersecurity!</p><p>#AI #Deepfakes #CyberSecurity #InfoSec #SocialEngineering #GenerativeAI #EthicalAI #AITrends #Podcast #AIForGood #BlackHillsInfoSec</p>]]>
      </content:encoded>
      <pubDate>Mon, 28 Apr 2025 10:00:00 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/3e8c9704/24f1dff0.mp3" length="27971402" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/P5zZgsLVEKgdXYjcAHmYOXjwvm01mIJN43tusuBtCQg/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS85ZTll/NzcwZDJmZjllMGFm/MDA1ZjBjNjZlOGE0/NDRhYS5wbmc.jpg"/>
      <itunes:duration>1749</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Welcome to another thought-provoking episode of <em>AI Security Ops</em>, hosted by Joff Thyer alongside Brian Fehrman and Derek Banks. In this episode, we dive deep into one of the most alarming developments in artificial intelligence—<strong>AI-generated deepfakes</strong>.</p><p>🔍 <strong>What We Cover:</strong></p><ul><li>What deepfakes are and how they’re created using generative adversarial networks (GANs) and diffusion models</li><li>Real-world deepfake incidents, including multimillion-dollar fraud</li><li>The growing accessibility of deepfake tools and the implications for social engineering</li><li>Detection and mitigation strategies: How to spot a deepfake and protect yourself or your organization</li><li>Ethical and legal challenges in legislating deepfake technology</li><li>Best practices for experimenting responsibly with deepfake tools</li></ul><p>⚠️ With AI making deepfakes more realistic and accessible than ever, this isn’t just a tech curiosity—it’s a major infosec concern. Whether you're a cybersecurity pro, a tech enthusiast, or just curious about AI's darker side, this episode is a must-watch.</p><p>💬 Don’t forget to LIKE, COMMENT, and SUBSCRIBE for more insights on AI and cybersecurity!</p><p>#AI #Deepfakes #CyberSecurity #InfoSec #SocialEngineering #GenerativeAI #EthicalAI #AITrends #Podcast #AIForGood #BlackHillsInfoSec</p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
    </item>
    <item>
      <title>Introduction to Prompt Injection</title>
      <itunes:episode>2</itunes:episode>
      <podcast:episode>2</podcast:episode>
      <itunes:title>Introduction to Prompt Injection</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ba893b49-41ec-4e29-b9ee-ad4cf6a5e677</guid>
      <link>https://share.transistor.fm/s/34f56f71</link>
      <description>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – <br><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p>Welcome to Episode 2 of AI Security Ops! </p><p>In this episode, Joff Thyer, Derek Banks, Brian Fehrman, and Ben "The Heretic" Bowman take a deep dive into Prompt Injection — one of the most fascinating and misunderstood attack techniques in the AI space.</p><p>We break down: <br>🛠️ What large language models (LLMs) are and how they work <br>💣 What prompt injection is, and why it matters for AI security <br>🎭 How attackers manipulate system prompts and personas <br>🔐 The difference between prompt injection and jailbreaking <br>👩‍💻 Practical examples, stories, and hands-on resources you can explore <br>🎯 How to start your journey as an AI hacker and why web app pen testing skills are more relevant than ever</p><p>Plus: <br>👉 Real-world cases of prompt attacks on Bing, Amazon, and more<br>👉 Tools and labs you can play with right now to test your skills<br>👉 Be sure to check out this week's Tech Demo on YouTube!</p><p>Brought to you by the cybersecurity experts at Black Hills Information Security<br><a href="https://blackhillsinfosec.com/">https://blackhillsinfosec.com</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – <br><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p>Welcome to Episode 2 of AI Security Ops! </p><p>In this episode, Joff Thyer, Derek Banks, Brian Fehrman, and Ben "The Heretic" Bowman take a deep dive into Prompt Injection — one of the most fascinating and misunderstood attack techniques in the AI space.</p><p>We break down: <br>🛠️ What large language models (LLMs) are and how they work <br>💣 What prompt injection is, and why it matters for AI security <br>🎭 How attackers manipulate system prompts and personas <br>🔐 The difference between prompt injection and jailbreaking <br>👩‍💻 Practical examples, stories, and hands-on resources you can explore <br>🎯 How to start your journey as an AI hacker and why web app pen testing skills are more relevant than ever</p><p>Plus: <br>👉 Real-world cases of prompt attacks on Bing, Amazon, and more<br>👉 Tools and labs you can play with right now to test your skills<br>👉 Be sure to check out this week's Tech Demo on YouTube!</p><p>Brought to you by the cybersecurity experts at Black Hills Information Security<br><a href="https://blackhillsinfosec.com/">https://blackhillsinfosec.com</a></p>]]>
      </content:encoded>
      <pubDate>Wed, 23 Apr 2025 00:33:45 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/34f56f71/bfcb799f.mp3" length="22121247" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:duration>1383</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>🔗 Register for FREE Infosec Webcasts, Anti-casts &amp; Summits – <br><a href="https://poweredbybhis.com">https://poweredbybhis.com</a></p><p>Welcome to Episode 2 of AI Security Ops! </p><p>In this episode, Joff Thyer, Derek Banks, Brian Fehrman, and Ben "The Heretic" Bowman take a deep dive into Prompt Injection — one of the most fascinating and misunderstood attack techniques in the AI space.</p><p>We break down: <br>🛠️ What large language models (LLMs) are and how they work <br>💣 What prompt injection is, and why it matters for AI security <br>🎭 How attackers manipulate system prompts and personas <br>🔐 The difference between prompt injection and jailbreaking <br>👩‍💻 Practical examples, stories, and hands-on resources you can explore <br>🎯 How to start your journey as an AI hacker and why web app pen testing skills are more relevant than ever</p><p>Plus: <br>👉 Real-world cases of prompt attacks on Bing, Amazon, and more<br>👉 Tools and labs you can play with right now to test your skills<br>👉 Be sure to check out this week's Tech Demo on YouTube!</p><p>Brought to you by the cybersecurity experts at Black Hills Information Security<br><a href="https://blackhillsinfosec.com/">https://blackhillsinfosec.com</a></p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/ben-bowman/" img="https://img.transistorcdn.com/6LggyMnvVa4jykOYcpApxR0fy-Yq37ZgfswwJo1Jakc/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zYjk2/ZDg2NmMxN2I2YzU5/NmZlMGVkMzcwMDk0/NThkMy5qcGc.jpg">Ben Bowman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
    </item>
    <item>
      <title>Why is AI Security Important?</title>
      <itunes:episode>1</itunes:episode>
      <podcast:episode>1</podcast:episode>
      <itunes:title>Why is AI Security Important?</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c06c4735-8f31-46ae-a4bb-0b6550c93725</guid>
      <link>https://share.transistor.fm/s/6eb12f44</link>
      <description>
        <![CDATA[<p>Welcome to the first episode of AI Security Ops! This week, join Brian Fehrman, Derek Banks, and Joff Thyer as they dive into why AI security matters more than ever. From how large language models work to the risks of prompt injection, jailbreaking, and AI-powered social engineering, this episode unpacks the challenges and opportunities at the intersection of AI and cybersecurity.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Welcome to the first episode of AI Security Ops! This week, join Brian Fehrman, Derek Banks, and Joff Thyer as they dive into why AI security matters more than ever. From how large language models work to the risks of prompt injection, jailbreaking, and AI-powered social engineering, this episode unpacks the challenges and opportunities at the intersection of AI and cybersecurity.</p>]]>
      </content:encoded>
      <pubDate>Thu, 17 Apr 2025 16:46:57 -0400</pubDate>
      <author>Black Hills Information Security</author>
      <enclosure url="https://media.transistor.fm/6eb12f44/ec372d94.mp3" length="45549946" type="audio/mpeg"/>
      <itunes:author>Black Hills Information Security</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/u390QZbdc50wEEFw8Dr_Cf-EMSUC5AZ7HIWmsuoQUHE/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9mY2Ew/Yjk1ODM1ZTU1ZGNi/OWJkNDJiYTIxNjhm/NDM4NS5wbmc.jpg"/>
      <itunes:duration>2847</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Welcome to the first episode of AI Security Ops! This week, join Brian Fehrman, Derek Banks, and Joff Thyer as they dive into why AI security matters more than ever. From how large language models work to the risks of prompt injection, jailbreaking, and AI-powered social engineering, this episode unpacks the challenges and opportunities at the intersection of AI and cybersecurity.</p>]]>
      </itunes:summary>
      <itunes:keywords>Cyber Security, Artificial Intelligence, A.I., AI, infosec</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/brian-fehrman/" img="https://img.transistorcdn.com/oikpAZ4UU5LopPNkV4npySDzFUn4Xvw096GL-X9GBrY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGFh/Yzc4MjA5YTNhZTU5/Y2EyNGM3ZjBjMTZh/N2RiYy5qcGc.jpg">Brian Fehrman</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/derek-banks/" img="https://img.transistorcdn.com/thNu_Y9x4QwJPuV2iW-WAshGqkHeAMH_NISYP_g3Azg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjJi/OTA0NjBkMmE3MWFk/M2RlYTNlZjM5ZmYz/NjViOS5qcGc.jpg">Derek Banks</podcast:person>
      <podcast:person role="Host" href="https://www.blackhillsinfosec.com/team/joff-thyer/" img="https://img.transistorcdn.com/cY8DBReBpi0fj0VVB4FbDxZBqTi26Gwb0PtC9qTmgQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTY3/YmQzNDBiN2JhOTk0/YjZjMDRlOWExZTg3/OWE5Mi5qcGc.jpg">Joff Thyer</podcast:person>
    </item>
  </channel>
</rss>
