<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="/stylesheet.xsl" type="text/xsl"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <atom:link rel="self" type="application/rss+xml" href="https://feeds.transistor.fm/certified-the-isaca-aaia-audio-course" title="MP3 Audio"/>
    <atom:link rel="hub" href="https://pubsubhubbub.appspot.com/"/>
    <podcast:podping usesPodping="true"/>
    <title>Certified: The ISACA AAIA Audio Course</title>
    <generator>Transistor (https://transistor.fm)</generator>
    <itunes:new-feed-url>https://feeds.transistor.fm/certified-the-isaca-aaia-audio-course</itunes:new-feed-url>
    <description>Welcome to Certified: The ISACA AAIA Audio Course. I’m your guide for this series, and my job is to make AI auditing feel clear, structured, and doable for people who already have a full plate. Across these episodes, you’ll build a practical mental model for how AI systems work in an organization and how an auditor or assurance professional should evaluate them. Expect plain language, a steady pace, and a focus on what you can actually test, document, and defend. We’ll spend time on governance, data, models, controls, and monitoring, but we’ll always bring it back to audit outcomes: scope, criteria, evidence, findings, and reporting that leaders can act on.

Here’s how to use Certified: The ISACA AAIA Audio Course. Start at the beginning, even if you’re experienced, because the early episodes set shared definitions and a consistent way to think about evidence. Listen once for understanding, then listen again when you’re ready to turn concepts into checklists you can use in the real world. If a term is new, don’t pause to research it mid-episode—keep going and let repetition do its job, because we’ll reinforce the same ideas from multiple angles. If this course is helping you, follow the show so new episodes land automatically. Subscribe wherever you get podcasts.</description>
    <copyright>© 2026 Jason Edwards</copyright>
    <podcast:guid>202ca6a1-6ecd-53ac-8a12-21741b75deec</podcast:guid>
    <podcast:podroll>
      <podcast:remoteItem feedGuid="9af25f2f-f465-5c56-8635-fc5e831ff06a" feedUrl="https://feeds.transistor.fm/bare-metal-cyber-a725a484-8216-4f80-9a32-2bfd5efcc240"/>
      <podcast:remoteItem feedGuid="1e81ed4d-b3a7-5035-b12a-5171bdd497b8" feedUrl="https://feeds.transistor.fm/certified-the-crisc-prepcast"/>
      <podcast:remoteItem feedGuid="0e52dc8b-9c94-58c7-b2fc-3041b8d8ca89" feedUrl="https://feeds.transistor.fm/certified-the-isaca-cdpse-audio-course"/>
      <podcast:remoteItem feedGuid="ac645ca7-7469-50bf-9010-f13c165e3e14" feedUrl="https://feeds.transistor.fm/baremetalcyber-dot-one"/>
      <podcast:remoteItem feedGuid="a4bd6f73-58ad-5c6b-8f9f-d58c53205adb" feedUrl="https://feeds.transistor.fm/certified-the-isaca-aaism-audio-course"/>
      <podcast:remoteItem feedGuid="91e17d1e-346e-5831-a7ea-e8f0f42e3d60" feedUrl="https://feeds.transistor.fm/certified-responsible-ai-audio-course"/>
      <podcast:remoteItem feedGuid="12ba6b47-50a9-5caa-aebe-16bae40dbbc5" feedUrl="https://feeds.transistor.fm/cism"/>
      <podcast:remoteItem feedGuid="b0bba863-f5ac-53e3-ad5d-30089ff50edc" feedUrl="https://feeds.transistor.fm/certified-the-isaca-aair-audio-course"/>
      <podcast:remoteItem feedGuid="c7e56267-6dbf-5333-928b-b43d99cf0aa8" feedUrl="https://feeds.transistor.fm/certified-ai-security"/>
      <podcast:remoteItem feedGuid="c424cfac-04e8-5c02-8ac7-4df13280735d" feedUrl="https://feeds.transistor.fm/certified-the-isaca-cisa-prepcast"/>
    </podcast:podroll>
    <podcast:locked>yes</podcast:locked>
    <itunes:applepodcastsverify>a8614fa0-0d15-11f1-a368-e94cb43fd9b7</itunes:applepodcastsverify>
    <language>en</language>
    <pubDate>Tue, 17 Mar 2026 15:32:41 -0500</pubDate>
    <lastBuildDate>Sat, 04 Apr 2026 00:07:16 -0500</lastBuildDate>
    <image>
      <url>https://img.transistorcdn.com/Lgj-C4Mxiz0Mu2gdmoHfQ3mALFTBLrpAliSMzHkK0-A/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yNWJl/Y2NiNmM1ZDgwZjlj/NDAwNDc3YTQzODNk/YTBmMC5wbmc.jpg</url>
      <title>Certified: The ISACA AAIA Audio Course</title>
    </image>
    <itunes:category text="Technology"/>
    <itunes:category text="Education">
      <itunes:category text="Courses"/>
    </itunes:category>
    <itunes:type>serial</itunes:type>
    <itunes:author>Jason Edwards</itunes:author>
    <itunes:image href="https://img.transistorcdn.com/Lgj-C4Mxiz0Mu2gdmoHfQ3mALFTBLrpAliSMzHkK0-A/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yNWJl/Y2NiNmM1ZDgwZjlj/NDAwNDc3YTQzODNk/YTBmMC5wbmc.jpg"/>
    <itunes:summary>Welcome to Certified: The ISACA AAIA Audio Course. I’m your guide for this series, and my job is to make AI auditing feel clear, structured, and doable for people who already have a full plate. Across these episodes, you’ll build a practical mental model for how AI systems work in an organization and how an auditor or assurance professional should evaluate them. Expect plain language, a steady pace, and a focus on what you can actually test, document, and defend. We’ll spend time on governance, data, models, controls, and monitoring, but we’ll always bring it back to audit outcomes: scope, criteria, evidence, findings, and reporting that leaders can act on.

Here’s how to use Certified: The ISACA AAIA Audio Course. Start at the beginning, even if you’re experienced, because the early episodes set shared definitions and a consistent way to think about evidence. Listen once for understanding, then listen again when you’re ready to turn concepts into checklists you can use in the real world. If a term is new, don’t pause to research it mid-episode—keep going and let repetition do its job, because we’ll reinforce the same ideas from multiple angles. If this course is helping you, follow the show so new episodes land automatically. Subscribe wherever you get podcasts.</itunes:summary>
    <itunes:subtitle>Plain-language audio lessons that make AI auditing clear, structured, and doable.</itunes:subtitle>
    <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
    <itunes:owner>
      <itunes:name>Jason Edwards</itunes:name>
      <itunes:email>baremetalcyber@outlook.com</itunes:email>
    </itunes:owner>
    <itunes:complete>No</itunes:complete>
    <itunes:explicit>No</itunes:explicit>
    <item>
      <title>Episode 1 — Exam orientation and a spoken 30-day plan to pass AAISM (Tasks 1–22)</title>
      <itunes:episode>1</itunes:episode>
      <podcast:episode>1</podcast:episode>
      <itunes:title>Episode 1 — Exam orientation and a spoken 30-day plan to pass AAISM (Tasks 1–22)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">e3b5582a-88fa-41a5-977c-fc841ccda8a4</guid>
      <link>https://share.transistor.fm/s/3af295ce</link>
      <description>
        <![CDATA[<p>This episode sets your baseline for what AAISM tests and how to use a simple spoken routine to cover Tasks 1–22 in 30 days without guessing what matters, focusing on how ISACA-style items reward clear governance, risk thinking, and control ownership over tool trivia. You’ll translate the task list into a weekly cadence that rotates governance foundations, risk workflow decisions, and technical control expectations, while using short daily recall drills to build speed on definitions, “best answer” logic, and common distractors. We’ll walk through how to turn each task into a repeatable audio checklist—what you must be able to define, what you must be able to choose, and what you must be able to justify—then stress-test the plan with realistic constraints like limited study time, mixed AI terminology, and policy-versus-implementation confusion that appears on exam day. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode sets your baseline for what AAISM tests and how to use a simple spoken routine to cover Tasks 1–22 in 30 days without guessing what matters, focusing on how ISACA-style items reward clear governance, risk thinking, and control ownership over tool trivia. You’ll translate the task list into a weekly cadence that rotates governance foundations, risk workflow decisions, and technical control expectations, while using short daily recall drills to build speed on definitions, “best answer” logic, and common distractors. We’ll walk through how to turn each task into a repeatable audio checklist—what you must be able to define, what you must be able to choose, and what you must be able to justify—then stress-test the plan with realistic constraints like limited study time, mixed AI terminology, and policy-versus-implementation confusion that appears on exam day. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:31:26 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/3af295ce/a0f43f40.mp3" length="37173078" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>928</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode sets your baseline for what AAISM tests and how to use a simple spoken routine to cover Tasks 1–22 in 30 days without guessing what matters, focusing on how ISACA-style items reward clear governance, risk thinking, and control ownership over tool trivia. You’ll translate the task list into a weekly cadence that rotates governance foundations, risk workflow decisions, and technical control expectations, while using short daily recall drills to build speed on definitions, “best answer” logic, and common distractors. We’ll walk through how to turn each task into a repeatable audio checklist—what you must be able to define, what you must be able to choose, and what you must be able to justify—then stress-test the plan with realistic constraints like limited study time, mixed AI terminology, and policy-versus-implementation confusion that appears on exam day. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/3af295ce/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 2 — Understand how AAISM questions map to real AI security work (Tasks 1–22)</title>
      <itunes:episode>2</itunes:episode>
      <podcast:episode>2</podcast:episode>
      <itunes:title>Episode 2 — Understand how AAISM questions map to real AI security work (Tasks 1–22)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">87a81468-c40b-4865-b1c6-54c9a32e16e8</guid>
      <link>https://share.transistor.fm/s/4936df62</link>
      <description>
        <![CDATA[<p>This episode explains how AAISM questions mirror real AI security work by testing whether you can connect governance decisions, risk assessments, and control evidence to a specific AI use case, rather than treating AI as a separate “special” security universe. You’ll learn to spot the exam’s recurring pattern: identify the AI asset and lifecycle phase, determine the accountable role, select the right task-driven action, and choose the option that produces defensible evidence for audit, contracts, or regulators. We’ll use scenarios like a new vendor LLM feature, a model update that changes outputs, and an incident involving prompt leakage to practice mapping each situation to the most relevant tasks, including impact assessment, inventory discipline, monitoring, and response. The goal is to make every question feel like a familiar operational decision you’ve already rehearsed in plain language. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode explains how AAISM questions mirror real AI security work by testing whether you can connect governance decisions, risk assessments, and control evidence to a specific AI use case, rather than treating AI as a separate “special” security universe. You’ll learn to spot the exam’s recurring pattern: identify the AI asset and lifecycle phase, determine the accountable role, select the right task-driven action, and choose the option that produces defensible evidence for audit, contracts, or regulators. We’ll use scenarios like a new vendor LLM feature, a model update that changes outputs, and an incident involving prompt leakage to practice mapping each situation to the most relevant tasks, including impact assessment, inventory discipline, monitoring, and response. The goal is to make every question feel like a familiar operational decision you’ve already rehearsed in plain language. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:31:46 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/4936df62/849ede1d.mp3" length="33231731" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>830</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode explains how AAISM questions mirror real AI security work by testing whether you can connect governance decisions, risk assessments, and control evidence to a specific AI use case, rather than treating AI as a separate “special” security universe. You’ll learn to spot the exam’s recurring pattern: identify the AI asset and lifecycle phase, determine the accountable role, select the right task-driven action, and choose the option that produces defensible evidence for audit, contracts, or regulators. We’ll use scenarios like a new vendor LLM feature, a model update that changes outputs, and an incident involving prompt leakage to practice mapping each situation to the most relevant tasks, including impact assessment, inventory discipline, monitoring, and response. The goal is to make every question feel like a familiar operational decision you’ve already rehearsed in plain language. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/4936df62/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 3 — Walk through an AI system life cycle in clear, simple language (Task 22)</title>
      <itunes:episode>3</itunes:episode>
      <podcast:episode>3</podcast:episode>
      <itunes:title>Episode 3 — Walk through an AI system life cycle in clear, simple language (Task 22)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7015080d-17b8-497f-9eb6-a66ed25f6141</guid>
      <link>https://share.transistor.fm/s/951a178e</link>
      <description>
        <![CDATA[<p>This episode builds a clean, exam-ready mental model of the AI system life cycle so you can consistently place risks, controls, and evidence in the right phase, which is central to Task 22 and frequently implied across other tasks. You’ll define each phase—from idea and intake through data collection, model development, training, evaluation, deployment, operations, change management, and retirement—and you’ll learn what “secure” means at each step in terms of access control, data integrity, safety validation, monitoring, and documented approvals. We’ll connect common failures to life-cycle blind spots, such as using production prompts in testing, retraining on untrusted data, or shipping a model change without updating impact assessments and runbooks. By the end, you’ll be able to hear a scenario and immediately say, “This is a life-cycle governance problem,” or “This is a pipeline control problem,” and pick the best next action. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode builds a clean, exam-ready mental model of the AI system life cycle so you can consistently place risks, controls, and evidence in the right phase, which is central to Task 22 and frequently implied across other tasks. You’ll define each phase—from idea and intake through data collection, model development, training, evaluation, deployment, operations, change management, and retirement—and you’ll learn what “secure” means at each step in terms of access control, data integrity, safety validation, monitoring, and documented approvals. We’ll connect common failures to life-cycle blind spots, such as using production prompts in testing, retraining on untrusted data, or shipping a model change without updating impact assessments and runbooks. By the end, you’ll be able to hear a scenario and immediately say, “This is a life-cycle governance problem,” or “This is a pipeline control problem,” and pick the best next action. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:31:59 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/951a178e/623899ea.mp3" length="33000808" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>824</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode builds a clean, exam-ready mental model of the AI system life cycle so you can consistently place risks, controls, and evidence in the right phase, which is central to Task 22 and frequently implied across other tasks. You’ll define each phase—from idea and intake through data collection, model development, training, evaluation, deployment, operations, change management, and retirement—and you’ll learn what “secure” means at each step in terms of access control, data integrity, safety validation, monitoring, and documented approvals. We’ll connect common failures to life-cycle blind spots, such as using production prompts in testing, retraining on untrusted data, or shipping a model change without updating impact assessments and runbooks. By the end, you’ll be able to hear a scenario and immediately say, “This is a life-cycle governance problem,” or “This is a pipeline control problem,” and pick the best next action. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/951a178e/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 4 — Exam acronyms: a high-yield audio reference for AAISM daily practice (Tasks 1–22)</title>
      <itunes:episode>4</itunes:episode>
      <podcast:episode>4</podcast:episode>
      <itunes:title>Episode 4 — Exam acronyms: a high-yield audio reference for AAISM daily practice (Tasks 1–22)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b97ed8a3-7075-4771-a113-9823395ff5c6</guid>
      <link>https://share.transistor.fm/s/23c757a6</link>
      <description>
        <![CDATA[<p>This episode delivers a practical acronym and terminology alignment session designed for daily recall, because AAISM questions often hinge on whether you interpret governance, risk, and assurance language the way ISACA intends. You’ll clarify how common security acronyms and AI terms behave in exam contexts, including where they signal ownership, evidence, monitoring, or lifecycle controls, and where they distract you into choosing tool-centric answers that lack accountability. We’ll reinforce fast recognition of governance artifacts like charters, policies, standards, and procedures, and risk artifacts like assessments, treatment decisions, KRIs, and exception handling, then connect them to AI-specific assets such as models, prompts, embeddings, and inference logs. You’ll also practice translating acronym-heavy options into plain language so you can reliably select the most complete and defensible response under time pressure. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode delivers a practical acronym and terminology alignment session designed for daily recall, because AAISM questions often hinge on whether you interpret governance, risk, and assurance language the way ISACA intends. You’ll clarify how common security acronyms and AI terms behave in exam contexts, including where they signal ownership, evidence, monitoring, or lifecycle controls, and where they distract you into choosing tool-centric answers that lack accountability. We’ll reinforce fast recognition of governance artifacts like charters, policies, standards, and procedures, and risk artifacts like assessments, treatment decisions, KRIs, and exception handling, then connect them to AI-specific assets such as models, prompts, embeddings, and inference logs. You’ll also practice translating acronym-heavy options into plain language so you can reliably select the most complete and defensible response under time pressure. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:32:10 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/23c757a6/643823f2.mp3" length="38255614" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>956</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode delivers a practical acronym and terminology alignment session designed for daily recall, because AAISM questions often hinge on whether you interpret governance, risk, and assurance language the way ISACA intends. You’ll clarify how common security acronyms and AI terms behave in exam contexts, including where they signal ownership, evidence, monitoring, or lifecycle controls, and where they distract you into choosing tool-centric answers that lack accountability. We’ll reinforce fast recognition of governance artifacts like charters, policies, standards, and procedures, and risk artifacts like assessments, treatment decisions, KRIs, and exception handling, then connect them to AI-specific assets such as models, prompts, embeddings, and inference logs. You’ll also practice translating acronym-heavy options into plain language so you can reliably select the most complete and defensible response under time pressure. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/23c757a6/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 5 — Domain 1 overview: lead AI governance and program management confidently (Task 1)</title>
      <itunes:episode>5</itunes:episode>
      <podcast:episode>5</podcast:episode>
      <itunes:title>Episode 5 — Domain 1 overview: lead AI governance and program management confidently (Task 1)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">8f7c06da-feba-460c-8f85-50423850fea9</guid>
      <link>https://share.transistor.fm/s/b8e337c5</link>
      <description>
        <![CDATA[<p>This episode frames Domain 1 through the lens of Task 1, focusing on how to lead AI governance and program management so decisions are consistent, auditable, and aligned to business objectives instead of being improvised project by project. You’ll define what AI governance means in AAISM terms: clear authority, documented decision rights, repeatable oversight routines, and measurable outcomes tied to risk and compliance obligations. We’ll explore how a governance program differs from a single policy document by covering sponsorship, scope boundaries, operating cadence, escalation triggers, and how governance interfaces with enterprise security and risk management. Practical examples include standing review boards for model changes, approval gates for new AI use cases, and evidence capture that supports regulators, contracts, and internal audits. The key exam skill here is choosing actions that create durable control and accountability, not just quick fixes. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode frames Domain 1 through the lens of Task 1, focusing on how to lead AI governance and program management so decisions are consistent, auditable, and aligned to business objectives instead of being improvised project by project. You’ll define what AI governance means in AAISM terms: clear authority, documented decision rights, repeatable oversight routines, and measurable outcomes tied to risk and compliance obligations. We’ll explore how a governance program differs from a single policy document by covering sponsorship, scope boundaries, operating cadence, escalation triggers, and how governance interfaces with enterprise security and risk management. Practical examples include standing review boards for model changes, approval gates for new AI use cases, and evidence capture that supports regulators, contracts, and internal audits. The key exam skill here is choosing actions that create durable control and accountability, not just quick fixes. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:32:28 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/b8e337c5/f6e39bc1.mp3" length="35216010" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>880</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode frames Domain 1 through the lens of Task 1, focusing on how to lead AI governance and program management so decisions are consistent, auditable, and aligned to business objectives instead of being improvised project by project. You’ll define what AI governance means in AAISM terms: clear authority, documented decision rights, repeatable oversight routines, and measurable outcomes tied to risk and compliance obligations. We’ll explore how a governance program differs from a single policy document by covering sponsorship, scope boundaries, operating cadence, escalation triggers, and how governance interfaces with enterprise security and risk management. Practical examples include standing review boards for model changes, approval gates for new AI use cases, and evidence capture that supports regulators, contracts, and internal audits. The key exam skill here is choosing actions that create durable control and accountability, not just quick fixes. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/b8e337c5/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 6 — Build an AI governance charter that aligns to business objectives (Task 1)</title>
      <itunes:episode>6</itunes:episode>
      <podcast:episode>6</podcast:episode>
      <itunes:title>Episode 6 — Build an AI governance charter that aligns to business objectives (Task 1)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">14c7b19d-7687-4dbc-bc08-dde541a201f8</guid>
      <link>https://share.transistor.fm/s/86775379</link>
      <description>
        <![CDATA[<p>This episode teaches how to build an AI governance charter that exam questions treat as the “source of truth” for scope, authority, and priorities, which is why it anchors Task 1 and influences many best-answer choices. You’ll break down charter essentials: purpose, scope of AI systems covered, decision rights, stakeholder roles, risk tolerance alignment, oversight cadence, and required outputs such as policies, inventories, and reporting. We’ll discuss how to translate business objectives—like revenue growth, customer support efficiency, or fraud reduction—into governance requirements that constrain risk, define acceptable use, and set expectations for evidence. You’ll also practice troubleshooting weak charters that are vague, overbroad, or disconnected from enterprise governance, and you’ll learn how to defend the charter by tying it to measurable outcomes and compliance commitments. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches how to build an AI governance charter that exam questions treat as the “source of truth” for scope, authority, and priorities, which is why it anchors Task 1 and influences many best-answer choices. You’ll break down charter essentials: purpose, scope of AI systems covered, decision rights, stakeholder roles, risk tolerance alignment, oversight cadence, and required outputs such as policies, inventories, and reporting. We’ll discuss how to translate business objectives—like revenue growth, customer support efficiency, or fraud reduction—into governance requirements that constrain risk, define acceptable use, and set expectations for evidence. You’ll also practice troubleshooting weak charters that are vague, overbroad, or disconnected from enterprise governance, and you’ll learn how to defend the charter by tying it to measurable outcomes and compliance commitments. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:32:45 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/86775379/8cdb75c1.mp3" length="34104224" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>852</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches how to build an AI governance charter that exam questions treat as the “source of truth” for scope, authority, and priorities, which is why it anchors Task 1 and influences many best-answer choices. You’ll break down charter essentials: purpose, scope of AI systems covered, decision rights, stakeholder roles, risk tolerance alignment, oversight cadence, and required outputs such as policies, inventories, and reporting. We’ll discuss how to translate business objectives—like revenue growth, customer support efficiency, or fraud reduction—into governance requirements that constrain risk, define acceptable use, and set expectations for evidence. You’ll also practice troubleshooting weak charters that are vague, overbroad, or disconnected from enterprise governance, and you’ll learn how to defend the charter by tying it to measurable outcomes and compliance commitments. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/86775379/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 7 — Define AI roles and responsibilities so decisions are owned and clear (Task 1)</title>
      <itunes:episode>7</itunes:episode>
      <podcast:episode>7</podcast:episode>
      <itunes:title>Episode 7 — Define AI roles and responsibilities so decisions are owned and clear (Task 1)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">245f58c0-b561-4eb0-8d6b-e19820a8ef16</guid>
      <link>https://share.transistor.fm/s/fa227ed3</link>
      <description>
        <![CDATA[<p>This episode focuses on clarifying AI roles and responsibilities so accountability is explicit, which is a frequent AAISM decision point because strong governance requires named owners for risk acceptance, control operation, and evidence production. You’ll define typical role categories—business owner, model owner, data owner, security, privacy, risk, legal, and operations—and learn how to express decision rights so “who approves what” is not implied or assumed. We’ll work through scenarios like deploying a new model feature, responding to a vendor security update, and handling user access to sensitive prompts, highlighting how unclear ownership leads to gaps in monitoring, delayed incident response, and weak audit posture. On the exam, the best answer often establishes or uses the correct ownership pathway before implementing technical changes, and this episode trains you to recognize that pattern quickly. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on clarifying AI roles and responsibilities so accountability is explicit, which is a frequent AAISM decision point because strong governance requires named owners for risk acceptance, control operation, and evidence production. You’ll define typical role categories—business owner, model owner, data owner, security, privacy, risk, legal, and operations—and learn how to express decision rights so “who approves what” is not implied or assumed. We’ll work through scenarios like deploying a new model feature, responding to a vendor security update, and handling user access to sensitive prompts, highlighting how unclear ownership leads to gaps in monitoring, delayed incident response, and weak audit posture. On the exam, the best answer often establishes or uses the correct ownership pathway before implementing technical changes, and this episode trains you to recognize that pattern quickly. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:33:00 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/fa227ed3/936c2e75.mp3" length="34346649" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>858</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on clarifying AI roles and responsibilities so accountability is explicit, which is a frequent AAISM decision point because strong governance requires named owners for risk acceptance, control operation, and evidence production. You’ll define typical role categories—business owner, model owner, data owner, security, privacy, risk, legal, and operations—and learn how to express decision rights so “who approves what” is not implied or assumed. We’ll work through scenarios like deploying a new model feature, responding to a vendor security update, and handling user access to sensitive prompts, highlighting how unclear ownership leads to gaps in monitoring, delayed incident response, and weak audit posture. On the exam, the best answer often establishes or uses the correct ownership pathway before implementing technical changes, and this episode trains you to recognize that pattern quickly. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/fa227ed3/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 8 — Set governance routines that keep AI security decisions consistent (Task 1)</title>
      <itunes:episode>8</itunes:episode>
      <podcast:episode>8</podcast:episode>
      <itunes:title>Episode 8 — Set governance routines that keep AI security decisions consistent (Task 1)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">6947d65d-420d-4608-b547-77b8e51167d0</guid>
      <link>https://share.transistor.fm/s/f8ddafe3</link>
      <description>
        <![CDATA[<p>This episode explains how governance routines turn intent into repeatable action, which matters for Task 1 because AAISM expects you to sustain AI security decisions over time, not just design them once. You’ll build a practical cadence of reviews and checkpoints for AI intake, impact assessment timing, inventory updates, control health, incident learnings, and vendor status, and you’ll learn how to structure meetings and artifacts so they produce evidence instead of opinions. We’ll cover best practices like defining triggers for out-of-cycle reviews, using standardized decision templates for approvals and exceptions, and ensuring outcomes feed metrics and risk reporting. Troubleshooting focuses on common failures: routines that are too frequent to maintain, too vague to be auditable, or disconnected from change management, causing drift in model behavior and untracked exposure. You’ll leave with a governance rhythm that maps cleanly to exam scenarios. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode explains how governance routines turn intent into repeatable action, which matters for Task 1 because AAISM expects you to sustain AI security decisions over time, not just design them once. You’ll build a practical cadence of reviews and checkpoints for AI intake, impact assessment timing, inventory updates, control health, incident learnings, and vendor status, and you’ll learn how to structure meetings and artifacts so they produce evidence instead of opinions. We’ll cover best practices like defining triggers for out-of-cycle reviews, using standardized decision templates for approvals and exceptions, and ensuring outcomes feed metrics and risk reporting. Troubleshooting focuses on common failures: routines that are too frequent to maintain, too vague to be auditable, or disconnected from change management, causing drift in model behavior and untracked exposure. You’ll leave with a governance rhythm that maps cleanly to exam scenarios. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:33:12 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/f8ddafe3/8de0dc72.mp3" length="35564994" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>888</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode explains how governance routines turn intent into repeatable action, which matters for Task 1 because AAISM expects you to sustain AI security decisions over time, not just design them once. You’ll build a practical cadence of reviews and checkpoints for AI intake, impact assessment timing, inventory updates, control health, incident learnings, and vendor status, and you’ll learn how to structure meetings and artifacts so they produce evidence instead of opinions. We’ll cover best practices like defining triggers for out-of-cycle reviews, using standardized decision templates for approvals and exceptions, and ensuring outcomes feed metrics and risk reporting. Troubleshooting focuses on common failures: routines that are too frequent to maintain, too vague to be auditable, or disconnected from change management, causing drift in model behavior and untracked exposure. You’ll leave with a governance rhythm that maps cleanly to exam scenarios. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/f8ddafe3/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 9 — Use industry frameworks to organize AI governance and security work (Task 3)</title>
      <itunes:episode>9</itunes:episode>
      <podcast:episode>9</podcast:episode>
      <itunes:title>Episode 9 — Use industry frameworks to organize AI governance and security work (Task 3)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ac3b070d-e32a-4264-9608-fb0ff26826e3</guid>
      <link>https://share.transistor.fm/s/eb6f82d9</link>
      <description>
        <![CDATA[<p>This episode covers how to use industry frameworks to organize AI governance and security work, emphasizing Task 3’s focus on translating external expectations—ethics, privacy, and regulatory pressures—into structured, testable requirements. You’ll learn how frameworks function on the exam: they provide a shared vocabulary, coverage map, and evidence checklist, helping you avoid ad hoc control selection and making your program defensible during audits or contracts. We’ll discuss how to choose an appropriate framework lens based on your AI use case and risk profile, and how to reconcile framework guidance with enterprise security standards so AI does not become a parallel governance track. Examples include using framework categories to drive impact assessment questions, control selection for data handling and monitoring, and documentation practices that prove conformity. The exam-relevant skill is demonstrating structured alignment, not memorizing framework names. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode covers how to use industry frameworks to organize AI governance and security work, emphasizing Task 3’s focus on translating external expectations—ethics, privacy, and regulatory pressures—into structured, testable requirements. You’ll learn how frameworks function on the exam: they provide a shared vocabulary, coverage map, and evidence checklist, helping you avoid ad hoc control selection and making your program defensible during audits or contracts. We’ll discuss how to choose an appropriate framework lens based on your AI use case and risk profile, and how to reconcile framework guidance with enterprise security standards so AI does not become a parallel governance track. Examples include using framework categories to drive impact assessment questions, control selection for data handling and monitoring, and documentation practices that prove conformity. The exam-relevant skill is demonstrating structured alignment, not memorizing framework names. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:33:36 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/eb6f82d9/fdeb2217.mp3" length="35576490" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>889</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode covers how to use industry frameworks to organize AI governance and security work, emphasizing Task 3’s focus on translating external expectations—ethics, privacy, and regulatory pressures—into structured, testable requirements. You’ll learn how frameworks function on the exam: they provide a shared vocabulary, coverage map, and evidence checklist, helping you avoid ad hoc control selection and making your program defensible during audits or contracts. We’ll discuss how to choose an appropriate framework lens based on your AI use case and risk profile, and how to reconcile framework guidance with enterprise security standards so AI does not become a parallel governance track. Examples include using framework categories to drive impact assessment questions, control selection for data handling and monitoring, and documentation practices that prove conformity. The exam-relevant skill is demonstrating structured alignment, not memorizing framework names. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/eb6f82d9/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 10 — Apply ethical principles when AI outcomes create real business risk (Task 3)</title>
      <itunes:episode>10</itunes:episode>
      <podcast:episode>10</podcast:episode>
      <itunes:title>Episode 10 — Apply ethical principles when AI outcomes create real business risk (Task 3)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">009f2915-ea01-445a-909d-b5304649ff0e</guid>
      <link>https://share.transistor.fm/s/48fb9dda</link>
      <description>
        <![CDATA[<p>This episode teaches how to apply ethical principles in a way that reduces real business risk, aligning with Task 3 and showing up in questions where the correct answer prioritizes harm reduction, transparency, accountability, and privacy alongside security controls. You’ll define core ethical concerns—bias, unfair outcomes, unsafe automation, misuse, and overcollection—and connect them to governance actions like setting acceptable use boundaries, requiring human oversight for high-impact decisions, and validating models against safety and failure modes. We’ll work through scenarios such as an AI screening tool that produces inconsistent outcomes, a customer chatbot that leaks sensitive data, and an internal assistant that confidently generates incorrect instructions, explaining what “ethical” means operationally: measurable requirements, monitoring signals, escalation triggers, and documentation that stands up to scrutiny. You’ll learn to choose exam answers that treat ethics as controllable risk, not as abstract values. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches how to apply ethical principles in a way that reduces real business risk, aligning with Task 3 and showing up in questions where the correct answer prioritizes harm reduction, transparency, accountability, and privacy alongside security controls. You’ll define core ethical concerns—bias, unfair outcomes, unsafe automation, misuse, and overcollection—and connect them to governance actions like setting acceptable use boundaries, requiring human oversight for high-impact decisions, and validating models against safety requirements and failure modes. We’ll work through scenarios such as an AI screening tool that produces inconsistent outcomes, a customer chatbot that leaks sensitive data, and an internal assistant that confidently generates incorrect instructions, explaining what “ethical” means operationally: measurable requirements, monitoring signals, escalation triggers, and documentation that stands up to scrutiny. You’ll learn to choose exam answers that treat ethics as controllable risk, not as abstract values. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:33:56 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/48fb9dda/a17c03e0.mp3" length="34312166" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>857</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches how to apply ethical principles in a way that reduces real business risk, aligning with Task 3 and showing up in questions where the correct answer prioritizes harm reduction, transparency, accountability, and privacy alongside security controls. You’ll define core ethical concerns—bias, unfair outcomes, unsafe automation, misuse, and overcollection—and connect them to governance actions like setting acceptable use boundaries, requiring human oversight for high-impact decisions, and validating models against safety requirements and failure modes. We’ll work through scenarios such as an AI screening tool that produces inconsistent outcomes, a customer chatbot that leaks sensitive data, and an internal assistant that confidently generates incorrect instructions, explaining what “ethical” means operationally: measurable requirements, monitoring signals, escalation triggers, and documentation that stands up to scrutiny. You’ll learn to choose exam answers that treat ethics as controllable risk, not as abstract values. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/48fb9dda/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 11 — Translate AI regulations into practical, testable security requirements (Task 3)</title>
      <itunes:episode>11</itunes:episode>
      <podcast:episode>11</podcast:episode>
      <itunes:title>Episode 11 — Translate AI regulations into practical, testable security requirements (Task 3)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">59b3314c-3a93-4729-8e1f-a64b6b787002</guid>
      <link>https://share.transistor.fm/s/f5016e81</link>
      <description>
        <![CDATA[<p>This episode trains you to convert AI regulations and external obligations into concrete, testable security requirements, which is the core of Task 3 and a common “best answer” driver on AAISM when the answer options pit vague principles against measurable controls. You’ll learn how to read regulatory language for intent—such as transparency, accountability, privacy, safety, and documentation—then translate it into requirements you can assign to owners, validate with evidence, and monitor over time. We’ll work through scenarios like deploying a customer-facing chatbot, using third-party model hosting, and introducing automated decision support, showing how to define requirements for data handling, access control, logging, human oversight, and change management so you can prove conformity without slowing delivery. Troubleshooting focuses on frequent failures: treating compliance as a one-time checklist, ignoring lifecycle changes that invalidate approvals, and writing requirements that can’t be tested or audited. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode trains you to convert AI regulations and external obligations into concrete, testable security requirements, which is the core of Task 3 and a common “best answer” driver on AAISM when the answer options pit vague principles against measurable controls. You’ll learn how to read regulatory language for intent—such as transparency, accountability, privacy, safety, and documentation—then translate it into requirements you can assign to owners, validate with evidence, and monitor over time. We’ll work through scenarios like deploying a customer-facing chatbot, using third-party model hosting, and introducing automated decision support, showing how to define requirements for data handling, access control, logging, human oversight, and change management so you can prove conformity without slowing delivery. Troubleshooting focuses on frequent failures: treating compliance as a one-time checklist, ignoring lifecycle changes that invalidate approvals, and writing requirements that can’t be tested or audited. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:34:09 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/f5016e81/0844d4bb.mp3" length="37367456" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>933</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode trains you to convert AI regulations and external obligations into concrete, testable security requirements, which is the core of Task 3 and a common “best answer” driver on AAISM when the answer options pit vague principles against measurable controls. You’ll learn how to read regulatory language for intent—such as transparency, accountability, privacy, safety, and documentation—then translate it into requirements you can assign to owners, validate with evidence, and monitor over time. We’ll work through scenarios like deploying a customer-facing chatbot, using third-party model hosting, and introducing automated decision support, showing how to define requirements for data handling, access control, logging, human oversight, and change management so you can prove conformity without slowing delivery. Troubleshooting focuses on frequent failures: treating compliance as a one-time checklist, ignoring lifecycle changes that invalidate approvals, and writing requirements that can’t be tested or audited. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/f5016e81/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 12 — Plan AI impact assessments early so compliance is not an afterthought (Task 8)</title>
      <itunes:episode>12</itunes:episode>
      <podcast:episode>12</podcast:episode>
      <itunes:title>Episode 12 — Plan AI impact assessments early so compliance is not an afterthought (Task 8)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">79b9ecb1-e398-42ca-a0a3-d36df170e370</guid>
      <link>https://share.transistor.fm/s/21982fb8</link>
      <description>
        <![CDATA[<p>This episode explains why impact assessments must be planned early, not bolted on after deployment, and it targets Task 8 by showing how AAISM expects you to integrate assessment timing into intake, design, and governance checkpoints. You’ll define what an AI impact assessment is in practice: a structured evaluation of intended use, stakeholders affected, data sensitivity, legal and ethical concerns, and operational controls across the lifecycle, producing a record that can support leadership decisions and external scrutiny. We’ll use examples like adding a new data source to training, enabling retention of user prompts, and expanding an AI feature into a regulated market to illustrate when an early assessment prevents rework and reduces legal exposure. Exam-wise, you’ll practice recognizing “too late” signals—missing approvals, undocumented scope, or unclear owners—and choosing actions that re-establish governance before proceeding. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode explains why impact assessments must be planned early, not bolted on after deployment, and it targets Task 8 by showing how AAISM expects you to integrate assessment timing into intake, design, and governance checkpoints. You’ll define what an AI impact assessment is in practice: a structured evaluation of intended use, stakeholders affected, data sensitivity, legal and ethical concerns, and operational controls across the lifecycle, producing a record that can support leadership decisions and external scrutiny. We’ll use examples like adding a new data source to training, enabling retention of user prompts, and expanding an AI feature into a regulated market to illustrate when an early assessment prevents rework and reduces legal exposure. Exam-wise, you’ll practice recognizing “too late” signals—missing approvals, undocumented scope, or unclear owners—and choosing actions that re-establish governance before proceeding. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:34:24 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/21982fb8/5fde1167.mp3" length="40142701" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>1003</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode explains why impact assessments must be planned early, not bolted on after deployment, and it targets Task 8 by showing how AAISM expects you to integrate assessment timing into intake, design, and governance checkpoints. You’ll define what an AI impact assessment is in practice: a structured evaluation of intended use, stakeholders affected, data sensitivity, legal and ethical concerns, and operational controls across the lifecycle, producing a record that can support leadership decisions and external scrutiny. We’ll use examples like adding a new data source to training, enabling retention of user prompts, and expanding an AI feature into a regulated market to illustrate when an early assessment prevents rework and reduces legal exposure. Exam-wise, you’ll practice recognizing “too late” signals—missing approvals, undocumented scope, or unclear owners—and choosing actions that re-establish governance before proceeding. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/21982fb8/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 13 — Perform AI impact assessments with scope, evidence, and actionable results (Task 8)</title>
      <itunes:episode>13</itunes:episode>
      <podcast:episode>13</podcast:episode>
      <itunes:title>Episode 13 — Perform AI impact assessments with scope, evidence, and actionable results (Task 8)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4e18e84d-8c9d-4130-996c-87ef3b19d7a4</guid>
      <link>https://share.transistor.fm/s/f9aa39a9</link>
      <description>
        <![CDATA[<p>This episode teaches you how to perform an AI impact assessment that produces actionable results rather than a generic narrative, aligning directly to Task 8 and preparing you for questions that test whether you can define scope, gather evidence, and recommend controls with clear ownership. You’ll learn to set boundaries first—what system, what users, what decisions, what data flows, and what lifecycle stage—then collect evidence such as data classifications, model behavior testing, access patterns, vendor commitments, and monitoring plans. We’ll walk through a practical scenario where a business wants to launch an AI assistant with access to internal knowledge bases, and you’ll identify the impact areas that matter most: confidentiality of prompts and outputs, integrity of generated guidance, availability dependencies, privacy obligations, and safety failure modes like hallucinations or harmful responses. You’ll also learn how to document findings in a way that drives decisions, including accept, mitigate, transfer, or stop. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches you how to perform an AI impact assessment that produces actionable results rather than a generic narrative, aligning directly to Task 8 and preparing you for questions that test whether you can define scope, gather evidence, and recommend controls with clear ownership. You’ll learn to set boundaries first—what system, what users, what decisions, what data flows, and what lifecycle stage—then collect evidence such as data classifications, model behavior testing, access patterns, vendor commitments, and monitoring plans. We’ll walk through a practical scenario where a business wants to launch an AI assistant with access to internal knowledge bases, and you’ll identify the impact areas that matter most: confidentiality of prompts and outputs, integrity of generated guidance, availability dependencies, privacy obligations, and safety failure modes like hallucinations or harmful responses. You’ll also learn how to document findings in a way that drives decisions, including accept, mitigate, transfer, or stop. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:34:37 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/f9aa39a9/b1f05978.mp3" length="38768670" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>968</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches you how to perform an AI impact assessment that produces actionable results rather than a generic narrative, aligning directly to Task 8 and preparing you for questions that test whether you can define scope, gather evidence, and recommend controls with clear ownership. You’ll learn to set boundaries first—what system, what users, what decisions, what data flows, and what lifecycle stage—then collect evidence such as data classifications, model behavior testing, access patterns, vendor commitments, and monitoring plans. We’ll walk through a practical scenario where a business wants to launch an AI assistant with access to internal knowledge bases, and you’ll identify the impact areas that matter most: confidentiality of prompts and outputs, integrity of generated guidance, availability dependencies, privacy obligations, and safety failure modes like hallucinations or harmful responses. You’ll also learn how to document findings in a way that drives decisions, including accept, mitigate, transfer, or stop. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/f9aa39a9/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 14 — Prove conformity by building defensible evidence for regulators and contracts (Task 8)</title>
      <itunes:episode>14</itunes:episode>
      <podcast:episode>14</podcast:episode>
      <itunes:title>Episode 14 — Prove conformity by building defensible evidence for regulators and contracts (Task 8)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b0a4eed9-e03d-4eab-8877-9c19a7f36904</guid>
      <link>https://share.transistor.fm/s/7560d550</link>
      <description>
        <![CDATA[<p>This episode focuses on how to prove conformity by building defensible evidence, which is central to Task 8 and shows up across the exam whenever the correct choice emphasizes documentation, traceability, and repeatability over informal assurances. You’ll define what “defensible evidence” looks like for AI security: records of approvals, scoped assessments, control ownership, monitoring outputs, incident handling artifacts, and change history that links model updates to re-evaluated risk. We’ll use examples such as responding to a customer questionnaire, supporting a vendor audit, and meeting internal assurance expectations to show how evidence must be consistent, current, and tied to specific systems and requirements. You’ll also practice troubleshooting weak evidence patterns—screenshots without context, policies without procedures, controls without owners, and assessments that don’t reflect current deployments—so you can choose exam answers that strengthen audit readiness while keeping delivery practical. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on how to prove conformity by building defensible evidence, which is central to Task 8 and shows up across the exam whenever the correct choice emphasizes documentation, traceability, and repeatability over informal assurances. You’ll define what “defensible evidence” looks like for AI security: records of approvals, scoped assessments, control ownership, monitoring outputs, incident handling artifacts, and change history that links model updates to re-evaluated risk. We’ll use examples such as responding to a customer questionnaire, supporting a vendor audit, and meeting internal assurance expectations to show how evidence must be consistent, current, and tied to specific systems and requirements. You’ll also practice troubleshooting weak evidence patterns—screenshots without context, policies without procedures, controls without owners, and assessments that don’t reflect current deployments—so you can choose exam answers that strengthen audit readiness while keeping delivery practical. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:34:52 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/7560d550/32c23492.mp3" length="32610047" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>814</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on how to prove conformity by building defensible evidence, which is central to Task 8 and shows up across the exam whenever the correct choice emphasizes documentation, traceability, and repeatability over informal assurances. You’ll define what “defensible evidence” looks like for AI security: records of approvals, scoped assessments, control ownership, monitoring outputs, incident handling artifacts, and change history that links model updates to re-evaluated risk. We’ll use examples such as responding to a customer questionnaire, supporting a vendor audit, and meeting internal assurance expectations to show how evidence must be consistent, current, and tied to specific systems and requirements. You’ll also practice troubleshooting weak evidence patterns—screenshots without context, policies without procedures, controls without owners, and assessments that don’t reflect current deployments—so you can choose exam answers that strengthen audit readiness while keeping delivery practical. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/7560d550/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 15 — Write AI security policies people can follow without guessing (Task 2)</title>
      <itunes:episode>15</itunes:episode>
      <podcast:episode>15</podcast:episode>
      <itunes:title>Episode 15 — Write AI security policies people can follow without guessing (Task 2)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">13b0d299-d4d5-48b0-b152-05c8bc5dc89f</guid>
      <link>https://share.transistor.fm/s/8f8d7335</link>
      <description>
        <![CDATA[<p>This episode teaches how to write AI security policies that are usable in daily work, aligning to Task 2 and preparing you for AAISM questions where the “best” option is the one that reduces ambiguity, assigns responsibility, and can be enforced and audited. You’ll learn the difference between policy intent and operational direction, and how to write policy statements that clearly define scope, required behaviors, prohibited behaviors, and escalation paths for exceptions. We’ll use scenarios like allowing internal use of public generative AI tools, permitting model fine-tuning on company data, and integrating AI outputs into customer communications to show how policy language must address data handling, access control, logging, human oversight, and content safety. Troubleshooting focuses on policy failures that exams love to expose: vague “should” language, missing definitions, conflicts with enterprise standards, and policies that ignore the AI lifecycle and change control. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches how to write AI security policies that are usable in daily work, aligning to Task 2 and preparing you for AAISM questions where the “best” option is the one that reduces ambiguity, assigns responsibility, and can be enforced and audited. You’ll learn the difference between policy intent and operational direction, and how to write policy statements that clearly define scope, required behaviors, prohibited behaviors, and escalation paths for exceptions. We’ll use scenarios like allowing internal use of public generative AI tools, permitting model fine-tuning on company data, and integrating AI outputs into customer communications to show how policy language must address data handling, access control, logging, human oversight, and content safety. Troubleshooting focuses on policy failures that exams love to expose: vague “should” language, missing definitions, conflicts with enterprise standards, and policies that ignore the AI lifecycle and change control. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:35:14 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/8f8d7335/c191fdce.mp3" length="32168023" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>803</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches how to write AI security policies that are usable in daily work, aligning to Task 2 and preparing you for AAISM questions where the “best” option is the one that reduces ambiguity, assigns responsibility, and can be enforced and audited. You’ll learn the difference between policy intent and operational direction, and how to write policy statements that clearly define scope, required behaviors, prohibited behaviors, and escalation paths for exceptions. We’ll use scenarios like allowing internal use of public generative AI tools, permitting model fine-tuning on company data, and integrating AI outputs into customer communications to show how policy language must address data handling, access control, logging, human oversight, and content safety. Troubleshooting focuses on policy failures that exams love to expose: vague “should” language, missing definitions, conflicts with enterprise standards, and policies that ignore the AI lifecycle and change control. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/8f8d7335/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 16 — Turn policies into standards, guidelines, and step-by-step procedures (Task 2)</title>
      <itunes:episode>16</itunes:episode>
      <podcast:episode>16</podcast:episode>
      <itunes:title>Episode 16 — Turn policies into standards, guidelines, and step-by-step procedures (Task 2)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a9e5d664-a951-48c5-833f-05cf52d93180</guid>
      <link>https://share.transistor.fm/s/7950e220</link>
      <description>
        <![CDATA[<p>This episode explains how to translate policy into standards, guidelines, and procedures, which is a key Task 2 competency because AAISM expects you to operationalize governance into repeatable actions that produce consistent evidence. You’ll define how each artifact functions: policies set mandatory intent, standards specify measurable requirements, guidelines provide recommended options, and procedures describe the exact steps teams follow, including approvals and documentation. We’ll work through an example of an AI system that uses sensitive customer data, showing how a policy requirement becomes standards for encryption, access reviews, and logging, then becomes procedures for onboarding a dataset, provisioning model access, and validating a release. You’ll also learn how to troubleshoot organizations that stop at policy statements, creating gaps where teams interpret requirements differently and audits fail due to inconsistent execution. On the exam, this helps you select answers that mature intent into implementation. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode explains how to translate policy into standards, guidelines, and procedures, which is a key Task 2 competency because AAISM expects you to operationalize governance into repeatable actions that produce consistent evidence. You’ll define how each artifact functions: policies set mandatory intent, standards specify measurable requirements, guidelines provide recommended options, and procedures describe the exact steps teams follow, including approvals and documentation. We’ll work through an example of an AI system that uses sensitive customer data, showing how a policy requirement becomes standards for encryption, access reviews, and logging, then becomes procedures for onboarding a dataset, provisioning model access, and validating a release. You’ll also learn how to troubleshoot organizations that stop at policy statements, creating gaps where teams interpret requirements differently and audits fail due to inconsistent execution. On the exam, this helps you select answers that mature intent into implementation. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:35:27 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/7950e220/298c4283.mp3" length="37818848" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>945</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode explains how to translate policy into standards, guidelines, and procedures, which is a key Task 2 competency because AAISM expects you to operationalize governance into repeatable actions that produce consistent evidence. You’ll define how each artifact functions: policies set mandatory intent, standards specify measurable requirements, guidelines provide recommended options, and procedures describe the exact steps teams follow, including approvals and documentation. We’ll work through an example of an AI system that uses sensitive customer data, showing how a policy requirement becomes standards for encryption, access reviews, and logging, then becomes procedures for onboarding a dataset, provisioning model access, and validating a release. You’ll also learn how to troubleshoot organizations that stop at policy statements, creating gaps where teams interpret requirements differently and audits fail due to inconsistent execution. On the exam, this helps you select answers that mature intent into implementation. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/7950e220/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 17 — Keep AI security policies current using ownership and change control (Task 2)</title>
      <itunes:episode>17</itunes:episode>
      <podcast:episode>17</podcast:episode>
      <itunes:title>Episode 17 — Keep AI security policies current using ownership and change control (Task 2)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">17858b10-2e86-49d6-9d46-273416aa3fa1</guid>
      <link>https://share.transistor.fm/s/ee95fd70</link>
      <description>
        <![CDATA[<p>This episode targets the “policy drift” problem and shows how to keep AI security policies current through ownership and change control, which Task 2 treats as essential because AI systems evolve quickly and outdated guidance is functionally the same as no guidance. You’ll learn how to assign policy owners, define review triggers, and integrate updates into existing governance and enterprise change management so policy updates follow the same discipline as system changes. We’ll use practical examples like new AI capabilities introduced by a vendor, regulatory changes affecting your use case, and an incident that reveals a gap in acceptable use language, demonstrating how each event should trigger a review and a documented update process. Troubleshooting covers the common failure modes: policies reviewed on a calendar only, no mapping between policy and procedures, and “temporary” exceptions that become permanent without reassessment. The exam-relevant takeaway is choosing answers that establish durable control over time. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode targets the “policy drift” problem and shows how to keep AI security policies current through ownership and change control, which Task 2 treats as essential because AI systems evolve quickly and outdated guidance is functionally the same as no guidance. You’ll learn how to assign policy owners, define review triggers, and integrate updates into existing governance and enterprise change management so policy updates follow the same discipline as system changes. We’ll use practical examples like new AI capabilities introduced by a vendor, regulatory changes affecting your use case, and an incident that reveals a gap in acceptable use language, demonstrating how each event should trigger a review and a documented update process. Troubleshooting covers the common failure modes: policies reviewed on a calendar only, no mapping between policy and procedures, and “temporary” exceptions that become permanent without reassessment. The exam-relevant takeaway is choosing answers that establish durable control over time. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:35:43 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/ee95fd70/1347ad0a.mp3" length="39135417" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>978</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode targets the “policy drift” problem and shows how to keep AI security policies current through ownership and change control, which Task 2 treats as essential because AI systems evolve quickly and outdated guidance is functionally the same as no guidance. You’ll learn how to assign policy owners, define review triggers, and integrate updates into existing governance and enterprise change management so policy updates follow the same discipline as system changes. We’ll use practical examples like new AI capabilities introduced by a vendor, regulatory changes affecting your use case, and an incident that reveals a gap in acceptable use language, demonstrating how each event should trigger a review and a documented update process. Troubleshooting covers the common failure modes: policies reviewed on a calendar only, no mapping between policy and procedures, and “temporary” exceptions that become permanent without reassessment. The exam-relevant takeaway is choosing answers that establish durable control over time. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/ee95fd70/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 18 — Essential Terms: Plain-Language Glossary for fast, accurate recall (Tasks 1–22)</title>
      <itunes:episode>18</itunes:episode>
      <podcast:episode>18</podcast:episode>
      <itunes:title>Episode 18 — Essential Terms: Plain-Language Glossary for fast, accurate recall (Tasks 1–22)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">e974e59d-195d-4519-bbe5-439140318289</guid>
      <link>https://share.transistor.fm/s/a3921a90</link>
      <description>
        <![CDATA[<p>This episode strengthens your exam performance by tightening your definitions in plain language, because AAISM frequently tests whether you can distinguish similar governance, risk, and AI security terms under time pressure across Tasks 1–22. You’ll reinforce high-confusion term pairs such as policy versus standard, risk identification versus risk assessment, monitoring versus testing, incident containment versus eradication, and inventory versus classification, then connect them to AI-specific concepts like prompts, inference logs, embeddings, drift, and model updates. We’ll practice turning dense option text into simple meaning so you can identify what a question is actually asking: who owns the decision, what evidence is required, and what lifecycle phase is affected. Real-world examples include defining what counts as an AI asset in inventory, what constitutes a control objective for monitoring, and how “conformity” differs from “best practice” in a regulated context. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode strengthens your exam performance by tightening your definitions in plain language, because AAISM frequently tests whether you can distinguish similar governance, risk, and AI security terms under time pressure across Tasks 1–22. You’ll reinforce high-confusion term pairs such as policy versus standard, risk identification versus risk assessment, monitoring versus testing, incident containment versus eradication, and inventory versus classification, then connect them to AI-specific concepts like prompts, inference logs, embeddings, drift, and model updates. We’ll practice turning dense option text into simple meaning so you can identify what a question is actually asking: who owns the decision, what evidence is required, and what lifecycle phase is affected. Real-world examples include defining what counts as an AI asset in inventory, what constitutes a control objective for monitoring, and how “conformity” differs from “best practice” in a regulated context. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:35:57 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/a3921a90/4c627931.mp3" length="36536760" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>913</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode strengthens your exam performance by tightening your definitions in plain language, because AAISM frequently tests whether you can distinguish similar governance, risk, and AI security terms under time pressure across Tasks 1–22. You’ll reinforce high-confusion term pairs such as policy versus standard, risk identification versus risk assessment, monitoring versus testing, incident containment versus eradication, and inventory versus classification, then connect them to AI-specific concepts like prompts, inference logs, embeddings, drift, and model updates. We’ll practice turning dense option text into simple meaning so you can identify what a question is actually asking: who owns the decision, what evidence is required, and what lifecycle phase is affected. Real-world examples include defining what counts as an AI asset in inventory, what constitutes a control objective for monitoring, and how “conformity” differs from “best practice” in a regulated context. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/a3921a90/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 19 — Create acceptable use guidelines that reduce risky AI behavior (Task 21)</title>
      <itunes:episode>19</itunes:episode>
      <podcast:episode>19</podcast:episode>
      <itunes:title>Episode 19 — Create acceptable use guidelines that reduce risky AI behavior (Task 21)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">635642e9-94e3-4fd8-8978-669d2bd97f1e</guid>
      <link>https://share.transistor.fm/s/9595fccd</link>
      <description>
        <![CDATA[<p>This episode focuses on Task 21 by showing how acceptable use guidelines reduce risky AI behavior in a way that is enforceable and measurable, which is exactly how AAISM frames human-driven risk as part of AI security management. You’ll define what acceptable use must address: what tools and systems are approved, what data is prohibited from input, how outputs may be used in decisions, and what oversight is required for high-impact contexts such as customer communications, hiring, finance, or safety. We’ll explore scenarios like employees pasting sensitive incident details into a public assistant, teams relying on unverified AI output for technical changes, and users attempting to bypass guardrails through prompt manipulation, then translate each scenario into guideline language and escalation paths. Troubleshooting covers how to avoid vague “be careful” rules by tying guidance to data classification, logging, access control, and disciplinary processes that align with governance expectations. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on Task 21 by showing how acceptable use guidelines reduce risky AI behavior in a way that is enforceable and measurable, which is exactly how AAISM frames human-driven risk as part of AI security management. You’ll define what acceptable use must address: what tools and systems are approved, what data is prohibited from input, how outputs may be used in decisions, and what oversight is required for high-impact contexts such as customer communications, hiring, finance, or safety. We’ll explore scenarios like employees pasting sensitive incident details into a public assistant, teams relying on unverified AI output for technical changes, and users attempting to bypass guardrails through prompt manipulation, then translate each scenario into guideline language and escalation paths. Troubleshooting covers how to avoid vague “be careful” rules by tying guidance to data classification, logging, access control, and disciplinary processes that align with governance expectations. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:36:13 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/9595fccd/4ef8db83.mp3" length="38400844" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>959</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on Task 21 by showing how acceptable use guidelines reduce risky AI behavior in a way that is enforceable and measurable, which is exactly how AAISM frames human-driven risk as part of AI security management. You’ll define what acceptable use must address: what tools and systems are approved, what data is prohibited from input, how outputs may be used in decisions, and what oversight is required for high-impact contexts such as customer communications, hiring, finance, or safety. We’ll explore scenarios like employees pasting sensitive incident details into a public assistant, teams relying on unverified AI output for technical changes, and users attempting to bypass guardrails through prompt manipulation, then translate each scenario into guideline language and escalation paths. Troubleshooting covers how to avoid vague “be careful” rules by tying guidance to data classification, logging, access control, and disciplinary processes that align with governance expectations. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/9595fccd/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 20 — Build AI security awareness training that sticks in daily work (Task 21)</title>
      <itunes:episode>20</itunes:episode>
      <podcast:episode>20</podcast:episode>
      <itunes:title>Episode 20 — Build AI security awareness training that sticks in daily work (Task 21)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c4c1660f-236c-48f2-961c-3a728c4ef6d7</guid>
      <link>https://share.transistor.fm/s/e9266c98</link>
      <description>
        <![CDATA[<p>This episode builds on Task 21 by teaching how to create AI security awareness training that changes daily behavior, because AAISM expects you to reduce human-driven exposure through repeatable learning, not one-time policy acknowledgements. You’ll define the training outcomes that matter for the exam and for real operations: recognizing sensitive data, using approved tools, validating outputs before acting, escalating suspected misuse, and understanding how prompts and outputs can become records that create legal and privacy exposure. We’ll use realistic workplace moments—drafting emails, summarizing meetings, generating code, and handling customer chats—to show how training should target the exact decisions people make, including how to avoid prompt leakage, how to handle confidential context, and how to spot unsafe automation. Troubleshooting focuses on measuring effectiveness with evidence, reinforcing training through governance routines, and ensuring role-based training matches risk and authority levels. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode builds on Task 21 by teaching how to create AI security awareness training that changes daily behavior, because AAISM expects you to reduce human-driven exposure through repeatable learning, not one-time policy acknowledgements. You’ll define the training outcomes that matter for the exam and for real operations: recognizing sensitive data, using approved tools, validating outputs before acting, escalating suspected misuse, and understanding how prompts and outputs can become records that create legal and privacy exposure. We’ll use realistic workplace moments—drafting emails, summarizing meetings, generating code, and handling customer chats—to show how training should target the exact decisions people make, including how to avoid prompt leakage, how to handle confidential context, and how to spot unsafe automation. Troubleshooting focuses on measuring effectiveness with evidence, reinforcing training through governance routines, and ensuring role-based training matches risk and authority levels. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:36:24 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/e9266c98/c891673d.mp3" length="38932697" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>972</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode builds on Task 21 by teaching how to create AI security awareness training that changes daily behavior, because AAISM expects you to reduce human-driven exposure through repeatable learning, not one-time policy acknowledgements. You’ll define the training outcomes that matter for the exam and for real operations: recognizing sensitive data, using approved tools, validating outputs before acting, escalating suspected misuse, and understanding how prompts and outputs can become records that create legal and privacy exposure. We’ll use realistic workplace moments—drafting emails, summarizing meetings, generating code, and handling customer chats—to show how training should target the exact decisions people make, including how to avoid prompt leakage, how to handle confidential context, and how to spot unsafe automation. Troubleshooting focuses on measuring effectiveness with evidence, reinforcing training through governance routines, and ensuring role-based training matches risk and authority levels. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/e9266c98/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 21 — Refresh training when threats, tools, and regulations change (Task 21)</title>
      <itunes:episode>21</itunes:episode>
      <podcast:episode>21</podcast:episode>
      <itunes:title>Episode 21 — Refresh training when threats, tools, and regulations change (Task 21)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">6508ff2a-7f0c-49fe-8a1f-96fa3b69c4c7</guid>
      <link>https://share.transistor.fm/s/d0418cf3</link>
      <description>
        <![CDATA[<p>This episode focuses on Task 21 by showing how to refresh AI security training as threats, tools, and regulations evolve, because AAISM questions often reward the choice that sustains secure behavior over time rather than treating training as a one-and-done compliance step. You’ll learn how to define refresh triggers such as new AI features, vendor model updates, changes in data sources, emerging misuse patterns like prompt injection, and new regulatory expectations that expand documentation or transparency duties. We’ll walk through a scenario where a team shifts from internal-only AI use to customer-facing deployment, and you’ll practice deciding what must change in training content, who must be re-trained, and what evidence must be captured to prove completion and effectiveness. Best practices include role-based refresh cycles, short reinforcement messages tied to real workflows, and feedback loops that use incident learnings and audit findings to update content in measurable ways. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on Task 21 by showing how to refresh AI security training as threats, tools, and regulations evolve, because AAISM questions often reward the choice that sustains secure behavior over time rather than treating training as a one-and-done compliance step. You’ll learn how to define refresh triggers such as new AI features, vendor model updates, changes in data sources, emerging misuse patterns like prompt injection, and new regulatory expectations that expand documentation or transparency duties. We’ll walk through a scenario where a team shifts from internal-only AI use to customer-facing deployment, and you’ll practice deciding what must change in training content, who must be re-trained, and what evidence must be captured to prove completion and effectiveness. Best practices include role-based refresh cycles, short reinforcement messages tied to real workflows, and feedback loops that use incident learnings and audit findings to update content in measurable ways. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:36:42 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/d0418cf3/6941045f.mp3" length="36144905" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>903</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on Task 21 by showing how to refresh AI security training as threats, tools, and regulations evolve, because AAISM questions often reward the choice that sustains secure behavior over time rather than treating training as a one-and-done compliance step. You’ll learn how to define refresh triggers such as new AI features, vendor model updates, changes in data sources, emerging misuse patterns like prompt injection, and new regulatory expectations that expand documentation or transparency duties. We’ll walk through a scenario where a team shifts from internal-only AI use to customer-facing deployment, and you’ll practice deciding what must change in training content, who must be re-trained, and what evidence must be captured to prove completion and effectiveness. Best practices include role-based refresh cycles, short reinforcement messages tied to real workflows, and feedback loops that use incident learnings and audit findings to update content in measurable ways. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/d0418cf3/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 22 — Inventory AI assets: models, prompts, data, and key dependencies (Task 13)</title>
      <itunes:episode>22</itunes:episode>
      <podcast:episode>22</podcast:episode>
      <itunes:title>Episode 22 — Inventory AI assets: models, prompts, data, and key dependencies (Task 13)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">260a9229-188f-4e72-a06a-e8330a508abb</guid>
      <link>https://share.transistor.fm/s/0cb6b3fe</link>
      <description>
        <![CDATA[<p>This episode teaches Task 13 by explaining how to inventory AI assets in a way that supports governance, risk decisions, and exam-ready control evidence, because AAISM treats “you can’t secure what you don’t know you have” as a foundational truth. You’ll define what counts as an AI asset beyond the model itself, including prompts and prompt templates, embeddings and vector stores, training and evaluation datasets, inference endpoints, pipelines, access paths, and third-party dependencies like hosted APIs and SaaS connectors. We’ll use an enterprise assistant scenario to map data flows and identify hidden dependencies that become exam-relevant risk points, such as external logging, plugin permissions, and shadow usage by teams outside the original rollout. You’ll also learn how asset inventory ties directly to access control, monitoring scope, incident response readiness, and compliance reporting, so inventory is treated as an operational control, not a spreadsheet. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches Task 13 by explaining how to inventory AI assets in a way that supports governance, risk decisions, and exam-ready control evidence, because AAISM treats “you can’t secure what you don’t know you have” as a foundational truth. You’ll define what counts as an AI asset beyond the model itself, including prompts and prompt templates, embeddings and vector stores, training and evaluation datasets, inference endpoints, pipelines, access paths, and third-party dependencies like hosted APIs and SaaS connectors. We’ll use an enterprise assistant scenario to map data flows and identify hidden dependencies that become exam-relevant risk points, such as external logging, plugin permissions, and shadow usage by teams outside the original rollout. You’ll also learn how asset inventory ties directly to access control, monitoring scope, incident response readiness, and compliance reporting, so inventory is treated as an operational control, not a spreadsheet. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:36:57 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/0cb6b3fe/64710ebf.mp3" length="42755982" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>1068</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches Task 13 by explaining how to inventory AI assets in a way that supports governance, risk decisions, and exam-ready control evidence, because AAISM treats “you can’t secure what you don’t know you have” as a foundational truth. You’ll define what counts as an AI asset beyond the model itself, including prompts and prompt templates, embeddings and vector stores, training and evaluation datasets, inference endpoints, pipelines, access paths, and third-party dependencies like hosted APIs and SaaS connectors. We’ll use an enterprise assistant scenario to map data flows and identify hidden dependencies that become exam-relevant risk points, such as external logging, plugin permissions, and shadow usage by teams outside the original rollout. You’ll also learn how asset inventory ties directly to access control, monitoring scope, incident response readiness, and compliance reporting, so inventory is treated as an operational control, not a spreadsheet. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/0cb6b3fe/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 23 — Classify AI assets by sensitivity, criticality, and compliance scope (Task 13)</title>
      <itunes:episode>23</itunes:episode>
      <podcast:episode>23</podcast:episode>
      <itunes:title>Episode 23 — Classify AI assets by sensitivity, criticality, and compliance scope (Task 13)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">11a306c2-fbfa-48b2-9219-71796493d9cc</guid>
      <link>https://share.transistor.fm/s/a09395a6</link>
      <description>
        <![CDATA[<p>This episode expands Task 13 by showing how to classify AI assets using sensitivity, criticality, and compliance scope, because AAISM questions frequently ask you to choose controls and governance actions that match the asset’s impact if it fails, leaks, or behaves unexpectedly. You’ll define classification dimensions that matter for AI systems, including data confidentiality in prompts and outputs, integrity requirements for decision support, availability needs for business operations, and regulatory obligations based on jurisdictions, user populations, or data categories. We’ll work through a scenario where the same model supports both internal drafting and regulated customer interactions, and you’ll practice classifying the model, its datasets, and its inference logs differently based on how they are used and who can access them. Best practices include aligning AI classifications to existing enterprise data classification schemes, documenting rationale so it is auditable, and using classification to drive access reviews, monitoring depth, retention rules, and incident prioritization. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode expands Task 13 by showing how to classify AI assets using sensitivity, criticality, and compliance scope, because AAISM questions frequently ask you to choose controls and governance actions that match the asset’s impact if it fails, leaks, or behaves unexpectedly. You’ll define classification dimensions that matter for AI systems, including data confidentiality in prompts and outputs, integrity requirements for decision support, availability needs for business operations, and regulatory obligations based on jurisdictions, user populations, or data categories. We’ll work through a scenario where the same model supports both internal drafting and regulated customer interactions, and you’ll practice classifying the model, its datasets, and its inference logs differently based on how they are used and who can access them. Best practices include aligning AI classifications to existing enterprise data classification schemes, documenting rationale so it is auditable, and using classification to drive access reviews, monitoring depth, retention rules, and incident prioritization. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:37:14 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/a09395a6/6ed0e2c8.mp3" length="37013231" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>924</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode expands Task 13 by showing how to classify AI assets using sensitivity, criticality, and compliance scope, because AAISM questions frequently ask you to choose controls and governance actions that match the asset’s impact if it fails, leaks, or behaves unexpectedly. You’ll define classification dimensions that matter for AI systems, including data confidentiality in prompts and outputs, integrity requirements for decision support, availability needs for business operations, and regulatory obligations based on jurisdictions, user populations, or data categories. We’ll work through a scenario where the same model supports both internal drafting and regulated customer interactions, and you’ll practice classifying the model, its datasets, and its inference logs differently based on how they are used and who can access them. Best practices include aligning AI classifications to existing enterprise data classification schemes, documenting rationale so it is auditable, and using classification to drive access reviews, monitoring depth, retention rules, and incident prioritization. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/a09395a6/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 24 — Keep the AI inventory accurate with routine governance checks (Task 13)</title>
      <itunes:episode>24</itunes:episode>
      <podcast:episode>24</podcast:episode>
      <itunes:title>Episode 24 — Keep the AI inventory accurate with routine governance checks (Task 13)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c914a267-0d0f-496b-9446-76a33e384158</guid>
      <link>https://share.transistor.fm/s/b9d2bf4c</link>
      <description>
        <![CDATA[<p>This episode covers how to keep the AI inventory accurate through routine governance checks, reinforcing Task 13 with the exam-critical idea that inventories decay unless they are embedded into change management, vendor oversight, and operational review cycles. You’ll learn how governance routines detect drift such as new integrations, expanded data access, model swaps, feature flags that change behavior, and shadow deployments that bypass formal intake. We’ll use a practical scenario where a team adds a new plugin to improve productivity, but the change quietly expands the assistant’s access to sensitive repositories, and you’ll practice the correct governance response: update inventory records, re-run relevant assessments, adjust monitoring, and validate that approvals and policies still apply. Troubleshooting focuses on the most common failure patterns, including inventories owned by no one, updates that rely on someone remembering to make them, and inventories that omit prompts, datasets, or logging destinations that later become the root cause of an incident. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode covers how to keep the AI inventory accurate through routine governance checks, reinforcing Task 13 with the exam-critical idea that inventories decay unless they are embedded into change management, vendor oversight, and operational review cycles. You’ll learn how governance routines detect drift such as new integrations, expanded data access, model swaps, feature flags that change behavior, and shadow deployments that bypass formal intake. We’ll use a practical scenario where a team adds a new plugin to improve productivity, but the change quietly expands the assistant’s access to sensitive repositories, and you’ll practice the correct governance response: update inventory records, re-run relevant assessments, adjust monitoring, and validate that approvals and policies still apply. Troubleshooting focuses on the most common failure patterns, including inventories owned by no one, updates that rely on someone remembering to make them, and inventories that omit prompts, datasets, or logging destinations that later become the root cause of an incident. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:37:29 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/b9d2bf4c/34f09ea1.mp3" length="34717576" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>867</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode covers how to keep the AI inventory accurate through routine governance checks, reinforcing Task 13 with the exam-critical idea that inventories decay unless they are embedded into change management, vendor oversight, and operational review cycles. You’ll learn how governance routines detect drift such as new integrations, expanded data access, model swaps, feature flags that change behavior, and shadow deployments that bypass formal intake. We’ll use a practical scenario where a team adds a new plugin to improve productivity, but the change quietly expands the assistant’s access to sensitive repositories, and you’ll practice the correct governance response: update inventory records, re-run relevant assessments, adjust monitoring, and validate that approvals and policies still apply. Troubleshooting focuses on the most common failure patterns, including inventories owned by no one, updates that rely on someone remembering to make them, and inventories that omit prompts, datasets, or logging destinations that later become the root cause of an incident. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/b9d2bf4c/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 25 — Identify data risks across the AI life cycle: leaks and tampering (Task 14)</title>
      <itunes:episode>25</itunes:episode>
      <podcast:episode>25</podcast:episode>
      <itunes:title>Episode 25 — Identify data risks across the AI life cycle: leaks and tampering (Task 14)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">40a76f0e-348b-40b8-925c-921fa36f8c94</guid>
      <link>https://share.transistor.fm/s/712bb8d6</link>
      <description>
        <![CDATA[<p>This episode targets Task 14 by teaching you to identify data risks across the AI life cycle, with a focus on leaks and tampering, because AAISM expects you to reason about where data can be exposed or altered from intake through training, evaluation, deployment, and ongoing operations. You’ll define key risk types such as unauthorized disclosure through prompts and outputs, exposure through logs and telemetry, poisoning or manipulation of training data, and integrity failures that lead to unsafe or misleading outputs. We’ll walk through scenarios including a vendor-hosted model that stores conversation history, a dataset sourced from multiple business units with inconsistent controls, and a retraining event that introduces unvetted external content. Exam practice emphasizes selecting the best next action that establishes control and evidence, such as tightening data handling rules, adding validation steps, restricting access paths, and documenting decisions so risk acceptance is explicit rather than accidental. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode targets Task 14 by teaching you to identify data risks across the AI life cycle, with a focus on leaks and tampering, because AAISM expects you to reason about where data can be exposed or altered from intake through training, evaluation, deployment, and ongoing operations. You’ll define key risk types such as unauthorized disclosure through prompts and outputs, exposure through logs and telemetry, poisoning or manipulation of training data, and integrity failures that lead to unsafe or misleading outputs. We’ll walk through scenarios including a vendor-hosted model that stores conversation history, a dataset sourced from multiple business units with inconsistent controls, and a retraining event that introduces unvetted external content. Exam practice emphasizes selecting the best next action that establishes control and evidence, such as tightening data handling rules, adding validation steps, restricting access paths, and documenting decisions so risk acceptance is explicit rather than accidental. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:37:48 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/712bb8d6/3a55979b.mp3" length="38562809" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>963</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode targets Task 14 by teaching you to identify data risks across the AI life cycle, with a focus on leaks and tampering, because AAISM expects you to reason about where data can be exposed or altered from intake through training, evaluation, deployment, and ongoing operations. You’ll define key risk types such as unauthorized disclosure through prompts and outputs, exposure through logs and telemetry, poisoning or manipulation of training data, and integrity failures that lead to unsafe or misleading outputs. We’ll walk through scenarios including a vendor-hosted model that stores conversation history, a dataset sourced from multiple business units with inconsistent controls, and a retraining event that introduces unvetted external content. Exam practice emphasizes selecting the best next action that establishes control and evidence, such as tightening data handling rules, adding validation steps, restricting access paths, and documenting decisions so risk acceptance is explicit rather than accidental. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/712bb8d6/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 26 — Protect training and test data with access control and secure storage (Task 14)</title>
      <itunes:episode>26</itunes:episode>
      <podcast:episode>26</podcast:episode>
      <itunes:title>Episode 26 — Protect training and test data with access control and secure storage (Task 14)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">8a5b9c12-d925-4c6e-bdeb-217ec57635cb</guid>
      <link>https://share.transistor.fm/s/8e19de4b</link>
      <description>
        <![CDATA[<p>This episode explains how to protect training and test data using access control and secure storage, aligning to Task 14 and preparing you for AAISM questions where the strongest answer limits exposure, enforces least privilege, and produces audit-ready evidence. You’ll learn how to define who should access training datasets, evaluation sets, labels, and feature stores, and how to separate duties so developers, data engineers, and operators only see what they need. We’ll use an example of fine-tuning a model on customer support transcripts to explore secure storage expectations, encryption and key management considerations, and how to prevent accidental leakage through staging environments or shared workspaces. Troubleshooting focuses on common breakdowns like copying production datasets to personal drives, mixing test and production data, and granting broad access “temporarily” during deadlines, then forgetting to revoke it. The exam-relevant skill is choosing controls that are enforceable and verifiable, not merely recommended. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode explains how to protect training and test data using access control and secure storage, aligning to Task 14 and preparing you for AAISM questions where the strongest answer limits exposure, enforces least privilege, and produces audit-ready evidence. You’ll learn how to define who should access training datasets, evaluation sets, labels, and feature stores, and how to separate duties so developers, data engineers, and operators only see what they need. We’ll use an example of fine-tuning a model on customer support transcripts to explore secure storage expectations, encryption and key management considerations, and how to prevent accidental leakage through staging environments or shared workspaces. Troubleshooting focuses on common breakdowns like copying production datasets to personal drives, mixing test and production data, and granting broad access “temporarily” during deadlines, then forgetting to revoke it. The exam-relevant skill is choosing controls that are enforceable and verifiable, not merely recommended. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:38:00 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/8e19de4b/d6b44c2d.mp3" length="46721380" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>1167</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode explains how to protect training and test data using access control and secure storage, aligning to Task 14 and preparing you for AAISM questions where the strongest answer limits exposure, enforces least privilege, and produces audit-ready evidence. You’ll learn how to define who should access training datasets, evaluation sets, labels, and feature stores, and how to separate duties so developers, data engineers, and operators only see what they need. We’ll use an example of fine-tuning a model on customer support transcripts to explore secure storage expectations, encryption and key management considerations, and how to prevent accidental leakage through staging environments or shared workspaces. Troubleshooting focuses on common breakdowns like copying production datasets to personal drives, mixing test and production data, and granting broad access “temporarily” during deadlines, then forgetting to revoke it. The exam-relevant skill is choosing controls that are enforceable and verifiable, not merely recommended. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/8e19de4b/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 27 — Preserve data integrity so models stay reliable and trustworthy (Task 14)</title>
      <itunes:episode>27</itunes:episode>
      <podcast:episode>27</podcast:episode>
      <itunes:title>Episode 27 — Preserve data integrity so models stay reliable and trustworthy (Task 14)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">286c5bb9-1c2f-425e-a82f-7f139f178e84</guid>
      <link>https://share.transistor.fm/s/cfe281cd</link>
      <description>
        <![CDATA[<p>This episode focuses on preserving data integrity so models remain reliable, which is central to Task 14 because AAISM treats integrity failures as both a security problem and a governance problem when decisions depend on model outputs. You’ll define integrity controls such as dataset versioning, provenance tracking, validation checks, change approvals, and monitoring signals that detect unexpected shifts in data distributions or labeling quality. We’ll work through a scenario where a model’s output quality degrades after a pipeline change, and you’ll practice tracing the issue back to data integrity causes like corrupted records, unauthorized modifications, or subtle poisoning introduced through third-party feeds. Best practices include separating trusted from untrusted sources, using controlled promotion from development to production datasets, and documenting integrity checks so reviewers can verify that training and evaluation were performed on known-good data. On the exam, you’ll learn to favor answers that create repeatable integrity assurance over answers that only re-train and hope. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on preserving data integrity so models remain reliable, which is central to Task 14 because AAISM treats integrity failures as both a security problem and a governance problem when decisions depend on model outputs. You’ll define integrity controls such as dataset versioning, provenance tracking, validation checks, change approvals, and monitoring signals that detect unexpected shifts in data distributions or labeling quality. We’ll work through a scenario where a model’s output quality degrades after a pipeline change, and you’ll practice tracing the issue back to data integrity causes like corrupted records, unauthorized modifications, or subtle poisoning introduced through third-party feeds. Best practices include separating trusted from untrusted sources, using controlled promotion from development to production datasets, and documenting integrity checks so reviewers can verify that training and evaluation were performed on known-good data. On the exam, you’ll learn to favor answers that create repeatable integrity assurance over answers that only re-train and hope. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:38:19 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/cfe281cd/e26c1afc.mp3" length="38281727" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>956</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on preserving data integrity so models remain reliable, which is central to Task 14 because AAISM treats integrity failures as both a security problem and a governance problem when decisions depend on model outputs. You’ll define integrity controls such as dataset versioning, provenance tracking, validation checks, change approvals, and monitoring signals that detect unexpected shifts in data distributions or labeling quality. We’ll work through a scenario where a model’s output quality degrades after a pipeline change, and you’ll practice tracing the issue back to data integrity causes like corrupted records, unauthorized modifications, or subtle poisoning introduced through third-party feeds. Best practices include separating trusted from untrusted sources, using controlled promotion from development to production datasets, and documenting integrity checks so reviewers can verify that training and evaluation were performed on known-good data. On the exam, you’ll learn to favor answers that create repeatable integrity assurance over answers that only re-train and hope. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/cfe281cd/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 28 — Manage retention and deletion to reduce long-term AI data exposure (Task 14)</title>
      <itunes:episode>28</itunes:episode>
      <podcast:episode>28</podcast:episode>
      <itunes:title>Episode 28 — Manage retention and deletion to reduce long-term AI data exposure (Task 14)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ad534458-f180-44e5-81b3-d30388c14ef5</guid>
      <link>https://share.transistor.fm/s/f0c75681</link>
      <description>
        <![CDATA[<p>This episode teaches Task 14 through retention and deletion discipline, because AI systems tend to accumulate prompts, outputs, logs, and derived artifacts that quietly expand exposure over time, and AAISM questions often test whether you can reduce that long-term risk with defensible rules. You’ll define what must be retained for security monitoring, incident response, audit, and regulatory requirements, and what should be minimized or deleted to reduce breach impact and privacy risk. We’ll use scenarios like storing conversation history for model improvement, retaining inference logs for investigations, and handling deletion requests when prompts include personal or confidential data, then connect those scenarios to policy, technical controls, and evidence capture. Troubleshooting covers over-retention due to vague defaults, inconsistent deletion across vendors and internal systems, and retention rules that conflict with legal holds or regulatory timelines, all of which can appear in exam-style tradeoff questions. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches Task 14 through retention and deletion discipline, because AI systems tend to accumulate prompts, outputs, logs, and derived artifacts that quietly expand exposure over time, and AAISM questions often test whether you can reduce that long-term risk with defensible rules. You’ll define what must be retained for security monitoring, incident response, audit, and regulatory requirements, and what should be minimized or deleted to reduce breach impact and privacy risk. We’ll use scenarios like storing conversation history for model improvement, retaining inference logs for investigations, and handling deletion requests when prompts include personal or confidential data, then connect those scenarios to policy, technical controls, and evidence capture. Troubleshooting covers over-retention due to vague defaults, inconsistent deletion across vendors and internal systems, and retention rules that conflict with legal holds or regulatory timelines, all of which can appear in exam-style tradeoff questions. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:38:31 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/f0c75681/de43e2d0.mp3" length="38386223" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>959</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches Task 14 through retention and deletion discipline, because AI systems tend to accumulate prompts, outputs, logs, and derived artifacts that quietly expand exposure over time, and AAISM questions often test whether you can reduce that long-term risk with defensible rules. You’ll define what must be retained for security monitoring, incident response, audit, and regulatory requirements, and what should be minimized or deleted to reduce breach impact and privacy risk. We’ll use scenarios like storing conversation history for model improvement, retaining inference logs for investigations, and handling deletion requests when prompts include personal or confidential data, then connect those scenarios to policy, technical controls, and evidence capture. Troubleshooting covers over-retention due to vague defaults, inconsistent deletion across vendors and internal systems, and retention rules that conflict with legal holds or regulatory timelines, all of which can appear in exam-style tradeoff questions. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/f0c75681/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 29 — Build an AI security program that fits the enterprise security program (Task 19)</title>
      <itunes:episode>29</itunes:episode>
      <podcast:episode>29</podcast:episode>
      <itunes:title>Episode 29 — Build an AI security program that fits the enterprise security program (Task 19)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">e6dfcedf-c8cc-4c86-8289-e1133f3049a1</guid>
      <link>https://share.transistor.fm/s/452396b2</link>
      <description>
        <![CDATA[<p>This episode addresses Task 19 by showing how to build an AI security program that fits into the enterprise security program instead of competing with it, because AAISM emphasizes alignment with existing governance, risk, and control structures to avoid gaps and duplicated effort. You’ll learn how to integrate AI-specific concerns—like model changes, prompt handling, and output safety—into established processes such as risk assessments, change management, incident response, vendor management, and security monitoring. We’ll explore a scenario where an AI initiative bypasses standard controls for speed, creating shadow data flows and unmanaged vendor dependencies, and you’ll practice selecting the governance and control actions that bring the program back into alignment without stopping delivery. Best practices include reusing enterprise control families where possible, defining AI-specific extensions where necessary, and ensuring reporting and metrics roll up into leadership dashboards that already drive action. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode addresses Task 19 by showing how to build an AI security program that fits into the enterprise security program instead of competing with it, because AAISM emphasizes alignment with existing governance, risk, and control structures to avoid gaps and duplicated effort. You’ll learn how to integrate AI-specific concerns—like model changes, prompt handling, and output safety—into established processes such as risk assessments, change management, incident response, vendor management, and security monitoring. We’ll explore a scenario where an AI initiative bypasses standard controls for speed, creating shadow data flows and unmanaged vendor dependencies, and you’ll practice selecting the governance and control actions that bring the program back into alignment without stopping delivery. Best practices include reusing enterprise control families where possible, defining AI-specific extensions where necessary, and ensuring reporting and metrics roll up into leadership dashboards that already drive action. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:38:44 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/452396b2/6eef9f8c.mp3" length="36909790" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>922</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode addresses Task 19 by showing how to build an AI security program that fits into the enterprise security program instead of competing with it, because AAISM emphasizes alignment with existing governance, risk, and control structures to avoid gaps and duplicated effort. You’ll learn how to integrate AI-specific concerns—like model changes, prompt handling, and output safety—into established processes such as risk assessments, change management, incident response, vendor management, and security monitoring. We’ll explore a scenario where an AI initiative bypasses standard controls for speed, creating shadow data flows and unmanaged vendor dependencies, and you’ll practice selecting the governance and control actions that bring the program back into alignment without stopping delivery. Best practices include reusing enterprise control families where possible, defining AI-specific extensions where necessary, and ensuring reporting and metrics roll up into leadership dashboards that already drive action. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/452396b2/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 30 — Define AI security metrics leaders can understand and act on (Task 18)</title>
      <itunes:episode>30</itunes:episode>
      <podcast:episode>30</podcast:episode>
      <itunes:title>Episode 30 — Define AI security metrics leaders can understand and act on (Task 18)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">056ab137-b229-49b0-8f9d-38b7ec333361</guid>
      <link>https://share.transistor.fm/s/7fd1f2e8</link>
      <description>
        <![CDATA[<p>This episode focuses on Task 18 by teaching you to define AI security metrics that leaders can use to make decisions, because AAISM favors measurable, outcome-linked reporting over technical noise that cannot drive prioritization or accountability. You’ll learn how to select metrics that reflect governance health, risk exposure, and control performance, such as inventory completeness, assessment coverage for high-impact use cases, access review outcomes, model change review compliance, monitoring signal quality, incident trends, and time-to-remediate for AI-specific findings. We’ll work through a scenario where leadership wants proof that AI rollout is “safe,” and you’ll practice converting that vague request into clear metrics with targets, thresholds, and escalation triggers that map to tasks and control owners. Troubleshooting covers vanity metrics, inconsistent measurement across teams, and reports that do not connect to actions, because on the exam the best answer is the one that supports decisions and accountability. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on Task 18 by teaching you to define AI security metrics that leaders can use to make decisions, because AAISM favors measurable, outcome-linked reporting over technical noise that cannot drive prioritization or accountability. You’ll learn how to select metrics that reflect governance health, risk exposure, and control performance, such as inventory completeness, assessment coverage for high-impact use cases, access review outcomes, model change review compliance, monitoring signal quality, incident trends, and time-to-remediate for AI-specific findings. We’ll work through a scenario where leadership wants proof that AI rollout is “safe,” and you’ll practice converting that vague request into clear metrics with targets, thresholds, and escalation triggers that map to tasks and control owners. Troubleshooting covers vanity metrics, inconsistent measurement across teams, and reports that do not connect to actions, because on the exam the best answer is the one that supports decisions and accountability. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:39:02 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/7fd1f2e8/0dc3e60b.mp3" length="34525313" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>862</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on Task 18 by teaching you to define AI security metrics that leaders can use to make decisions, because AAISM favors measurable, outcome-linked reporting over technical noise that cannot drive prioritization or accountability. You’ll learn how to select metrics that reflect governance health, risk exposure, and control performance, such as inventory completeness, assessment coverage for high-impact use cases, access review outcomes, model change review compliance, monitoring signal quality, incident trends, and time-to-remediate for AI-specific findings. We’ll work through a scenario where leadership wants proof that AI rollout is “safe,” and you’ll practice converting that vague request into clear metrics with targets, thresholds, and escalation triggers that map to tasks and control owners. Troubleshooting covers vanity metrics, inconsistent measurement across teams, and reports that do not connect to actions, because on the exam the best answer is the one that supports decisions and accountability. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/7fd1f2e8/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 31 — Monitor AI metrics to spot misuse, drift, and early incident signals (Task 18)</title>
      <itunes:episode>31</itunes:episode>
      <podcast:episode>31</podcast:episode>
      <itunes:title>Episode 31 — Monitor AI metrics to spot misuse, drift, and early incident signals (Task 18)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2ec9312f-739f-46db-9b82-095085099dcd</guid>
      <link>https://share.transistor.fm/s/2a2888bd</link>
      <description>
        <![CDATA[<p>This episode explains how continuous monitoring turns AI security metrics into early warning signals, which is exactly what Task 18 is getting at when AAISM questions ask what you should measure and how you should respond when behavior changes. You’ll connect leading indicators like unusual prompt volume, spikes in denied requests, abnormal data access patterns, output toxicity flags, and sudden shifts in response quality to practical causes such as misuse, prompt injection attempts, configuration changes, model drift, or logging failures. We’ll walk through a scenario where a customer-facing assistant begins producing inconsistent answers after a vendor model update, and you’ll practice deciding what to validate first, how to separate real drift from measurement noise, and how to document the decision path so it is defensible. Best practices include defining thresholds and escalation triggers in advance, ensuring metrics map to control objectives, and avoiding “monitoring theater” where dashboards exist without owners or response playbooks. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode explains how continuous monitoring turns AI security metrics into early warning signals, which is exactly what Task 18 is getting at when AAISM questions ask what you should measure and how you should respond when behavior changes. You’ll connect leading indicators like unusual prompt volume, spikes in denied requests, abnormal data access patterns, output toxicity flags, and sudden shifts in response quality to practical causes such as misuse, prompt injection attempts, configuration changes, model drift, or logging failures. We’ll walk through a scenario where a customer-facing assistant begins producing inconsistent answers after a vendor model update, and you’ll practice deciding what to validate first, how to separate real drift from measurement noise, and how to document the decision path so it is defensible. Best practices include defining thresholds and escalation triggers in advance, ensuring metrics map to control objectives, and avoiding “monitoring theater” where dashboards exist without owners or response playbooks. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:39:14 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/2a2888bd/78c2b493.mp3" length="35840856" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>895</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode explains how continuous monitoring turns AI security metrics into early warning signals, which is exactly what Task 18 is getting at when AAISM questions ask what you should measure and how you should respond when behavior changes. You’ll connect leading indicators like unusual prompt volume, spikes in denied requests, abnormal data access patterns, output toxicity flags, and sudden shifts in response quality to practical causes such as misuse, prompt injection attempts, configuration changes, model drift, or logging failures. We’ll walk through a scenario where a customer-facing assistant begins producing inconsistent answers after a vendor model update, and you’ll practice deciding what to validate first, how to separate real drift from measurement noise, and how to document the decision path so it is defensible. Best practices include defining thresholds and escalation triggers in advance, ensuring metrics map to control objectives, and avoiding “monitoring theater” where dashboards exist without owners or response playbooks. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/2a2888bd/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 32 — Use metrics to prioritize work and prove security program value (Task 18)</title>
      <itunes:episode>32</itunes:episode>
      <podcast:episode>32</podcast:episode>
      <itunes:title>Episode 32 — Use metrics to prioritize work and prove security program value (Task 18)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">188533c9-7062-431e-bc0c-ae2695906339</guid>
      <link>https://share.transistor.fm/s/183e6151</link>
      <description>
        <![CDATA[<p>This episode teaches how to use AI security metrics to prioritize work and demonstrate program value, aligning with Task 18 and preparing you for AAISM items where the best answer connects measurement to decisions, resource allocation, and risk reduction. You’ll learn how to translate raw signals into action, such as using inventory coverage and assessment completion rates to identify uncontrolled systems, using incident trends and time-to-remediate to justify investment, and using access review results to focus on the highest-risk permissions first. We’ll use a scenario where leadership asks whether AI rollout is “under control,” and you’ll build a defensible story that ties metrics to governance routines, control performance, and outcomes that matter to stakeholders, including reduced exposure, improved detection, and faster containment. Troubleshooting covers common mistakes like choosing vanity metrics, reporting without thresholds, and failing to link metrics to owners and playbooks, which leads to repeated findings and unclear accountability. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches how to use AI security metrics to prioritize work and demonstrate program value, aligning with Task 18 and preparing you for AAISM items where the best answer connects measurement to decisions, resource allocation, and risk reduction. You’ll learn how to translate raw signals into action, such as using inventory coverage and assessment completion rates to identify uncontrolled systems, using incident trends and time-to-remediate to justify investment, and using access review results to focus on the highest-risk permissions first. We’ll use a scenario where leadership asks whether AI rollout is “under control,” and you’ll build a defensible story that ties metrics to governance routines, control performance, and outcomes that matter to stakeholders, including reduced exposure, improved detection, and faster containment. Troubleshooting covers common mistakes like choosing vanity metrics, reporting without thresholds, and failing to link metrics to owners and playbooks, which leads to repeated findings and unclear accountability. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:39:35 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/183e6151/4be37bde.mp3" length="42186511" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>1054</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches how to use AI security metrics to prioritize work and demonstrate program value, aligning with Task 18 and preparing you for AAISM items where the best answer connects measurement to decisions, resource allocation, and risk reduction. You’ll learn how to translate raw signals into action, such as using inventory coverage and assessment completion rates to identify uncontrolled systems, using incident trends and time-to-remediate to justify investment, and using access review results to focus on the highest-risk permissions first. We’ll use a scenario where leadership asks whether AI rollout is “under control,” and you’ll build a defensible story that ties metrics to governance routines, control performance, and outcomes that matter to stakeholders, including reduced exposure, improved detection, and faster containment. Troubleshooting covers common mistakes like choosing vanity metrics, reporting without thresholds, and failing to link metrics to owners and playbooks, which leads to repeated findings and unclear accountability. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/183e6151/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 33 — Review AI security tools by coverage, gaps, and operational fit (Task 19)</title>
      <itunes:episode>33</itunes:episode>
      <podcast:episode>33</podcast:episode>
      <itunes:title>Episode 33 — Review AI security tools by coverage, gaps, and operational fit (Task 19)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">40a7a0bb-a8e2-4f80-8419-3d3b88fc3f92</guid>
      <link>https://share.transistor.fm/s/95f76ba5</link>
      <description>
        <![CDATA[<p>This episode focuses on Task 19 by showing how to review AI security tools based on coverage, gaps, and operational fit, because AAISM expects you to choose controls that work in real environments, integrate with existing operations, and produce evidence, rather than buy tools that look impressive but don’t reduce risk. You’ll define what “coverage” means for AI systems, including visibility into prompts and outputs, access and authentication events, model change activity, data movement, and safety signals, and you’ll learn how to identify gaps such as blind spots in third-party hosted services or missing telemetry for plugins and connectors. We’ll work through a selection scenario where multiple tool options exist, and you’ll practice evaluating integration complexity, ownership requirements, false positive risk, and how each tool supports monitoring, incident response, and audit reporting. The exam-relevant habit is to pick the tool approach that closes the highest-risk gap with measurable outcomes and maintainable operations. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on Task 19 by showing how to review AI security tools based on coverage, gaps, and operational fit, because AAISM expects you to choose controls that work in real environments, integrate with existing operations, and produce evidence, rather than buy tools that look impressive but don’t reduce risk. You’ll define what “coverage” means for AI systems, including visibility into prompts and outputs, access and authentication events, model change activity, data movement, and safety signals, and you’ll learn how to identify gaps such as blind spots in third-party hosted services or missing telemetry for plugins and connectors. We’ll work through a selection scenario where multiple tool options exist, and you’ll practice evaluating integration complexity, ownership requirements, false positive risk, and how each tool supports monitoring, incident response, and audit reporting. The exam-relevant habit is to pick the tool approach that closes the highest-risk gap with measurable outcomes and maintainable operations. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:39:48 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/95f76ba5/bf04feb0.mp3" length="43435164" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>1085</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on Task 19 by showing how to review AI security tools based on coverage, gaps, and operational fit, because AAISM expects you to choose controls that work in real environments, integrate with existing operations, and produce evidence, rather than buy tools that look impressive but don’t reduce risk. You’ll define what “coverage” means for AI systems, including visibility into prompts and outputs, access and authentication events, model change activity, data movement, and safety signals, and you’ll learn how to identify gaps such as blind spots in third-party hosted services or missing telemetry for plugins and connectors. We’ll work through a selection scenario where multiple tool options exist, and you’ll practice evaluating integration complexity, ownership requirements, false positive risk, and how each tool supports monitoring, incident response, and audit reporting. The exam-relevant habit is to pick the tool approach that closes the highest-risk gap with measurable outcomes and maintainable operations. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Episode 34 — Implement AI security tools into monitoring, alerting, and response workflows (Task 19)</title>
      <itunes:episode>34</itunes:episode>
      <podcast:episode>34</podcast:episode>
      <itunes:title>Episode 34 — Implement AI security tools into monitoring, alerting, and response workflows (Task 19)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a701ab9f-11e7-497d-afab-a51d25e92e58</guid>
      <link>https://share.transistor.fm/s/e82d5ef3</link>
      <description>
        <![CDATA[<p>This episode explains how to implement AI security tools so they actually function inside monitoring, alerting, and response workflows, aligning to Task 19 and reflecting how AAISM rewards integration and accountability over standalone tooling. You’ll learn how to connect AI telemetry to your existing security operations processes, including how alerts are triaged, who owns investigation steps, what evidence is collected, and how incidents are escalated and documented. We’ll use a scenario where a new generative AI service is introduced with limited default logging, and you’ll practice deciding what data to capture, how to route it to the right monitoring systems, and how to build detections that are specific enough to be actionable without overwhelming analysts. Troubleshooting covers common rollout failures such as missing runbooks, unclear alert ownership, misaligned severity thresholds, and weak change control that causes detections to break after model updates, all of which can show up as “most effective next step” questions. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode explains how to implement AI security tools so they actually function inside monitoring, alerting, and response workflows, aligning to Task 19 and reflecting how AAISM rewards integration and accountability over standalone tooling. You’ll learn how to connect AI telemetry to your existing security operations processes, including how alerts are triaged, who owns investigation steps, what evidence is collected, and how incidents are escalated and documented. We’ll use a scenario where a new generative AI service is introduced with limited default logging, and you’ll practice deciding what data to capture, how to route it to the right monitoring systems, and how to build detections that are specific enough to be actionable without overwhelming analysts. Troubleshooting covers common rollout failures such as missing runbooks, unclear alert ownership, misaligned severity thresholds, and weak change control that causes detections to break after model updates, all of which can show up as “most effective next step” questions. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:40:06 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/e82d5ef3/d9cb9706.mp3" length="38468792" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>961</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode explains how to implement AI security tools so they actually function inside monitoring, alerting, and response workflows, aligning to Task 19 and reflecting how AAISM rewards integration and accountability over standalone tooling. You’ll learn how to connect AI telemetry to your existing security operations processes, including how alerts are triaged, who owns investigation steps, what evidence is collected, and how incidents are escalated and documented. We’ll use a scenario where a new generative AI service is introduced with limited default logging, and you’ll practice deciding what data to capture, how to route it to the right monitoring systems, and how to build detections that are specific enough to be actionable without overwhelming analysts. Troubleshooting covers common rollout failures such as missing runbooks, unclear alert ownership, misaligned severity thresholds, and weak change control that causes detections to break after model updates, all of which can show up as “most effective next step” questions. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/e82d5ef3/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 35 — Operationalize tools with tuning, ownership, and measurable outcomes (Task 19)</title>
      <itunes:episode>35</itunes:episode>
      <podcast:episode>35</podcast:episode>
      <itunes:title>Episode 35 — Operationalize tools with tuning, ownership, and measurable outcomes (Task 19)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">054fd29a-dc54-4710-b2ba-f240ccccb3ca</guid>
      <link>https://share.transistor.fm/s/24128eb9</link>
      <description>
        <![CDATA[<p>This episode covers the operational reality of AI security tools, emphasizing Task 19 by showing that tools only reduce risk when they are tuned, owned, and measured over time, which is why AAISM questions often prefer governance and process steps that keep controls effective after deployment. You’ll learn how to establish tool ownership, define maintenance routines, and tune detections using real data so alerts are meaningful and tied to response actions, while preserving evidence for audits and post-incident review. We’ll explore a scenario where an AI misuse detection rule generates constant false positives after a workflow change, and you’ll practice troubleshooting by adjusting thresholds, validating signal sources, updating context, and documenting changes so the detection remains defensible and repeatable. Best practices include setting success criteria for tools, tracking performance metrics like alert fidelity and response time, and ensuring changes to models, prompts, or integrations trigger updates to tool configurations and runbooks. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode covers the operational reality of AI security tools, emphasizing Task 19 by showing that tools only reduce risk when they are tuned, owned, and measured over time, which is why AAISM questions often prefer governance and process steps that keep controls effective after deployment. You’ll learn how to establish tool ownership, define maintenance routines, and tune detections using real data so alerts are meaningful and tied to response actions, while preserving evidence for audits and post-incident review. We’ll explore a scenario where an AI misuse detection rule generates constant false positives after a workflow change, and you’ll practice troubleshooting by adjusting thresholds, validating signal sources, updating context, and documenting changes so the detection remains defensible and repeatable. Best practices include setting success criteria for tools, tracking performance metrics like alert fidelity and response time, and ensuring changes to models, prompts, or integrations trigger updates to tool configurations and runbooks. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:40:19 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/24128eb9/d3593d88.mp3" length="40328693" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>1007</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode covers the operational reality of AI security tools, emphasizing Task 19 by showing that tools only reduce risk when they are tuned, owned, and measured over time, which is why AAISM questions often prefer governance and process steps that keep controls effective after deployment. You’ll learn how to establish tool ownership, define maintenance routines, and tune detections using real data so alerts are meaningful and tied to response actions, while preserving evidence for audits and post-incident review. We’ll explore a scenario where an AI misuse detection rule generates constant false positives after a workflow change, and you’ll practice troubleshooting by adjusting thresholds, validating signal sources, updating context, and documenting changes so the detection remains defensible and repeatable. Best practices include setting success criteria for tools, tracking performance metrics like alert fidelity and response time, and ensuring changes to models, prompts, or integrations trigger updates to tool configurations and runbooks. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/24128eb9/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 36 — Domain 1 quick review: governance, policies, assets, metrics, and training (Tasks 1–3)</title>
      <itunes:episode>36</itunes:episode>
      <podcast:episode>36</podcast:episode>
      <itunes:title>Episode 36 — Domain 1 quick review: governance, policies, assets, metrics, and training (Tasks 1–3)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">da11045e-b8d5-46b6-a3b0-c8d9ef51dfe6</guid>
      <link>https://share.transistor.fm/s/fb4f8394</link>
      <description>
        <![CDATA[<p>This episode consolidates Domain 1 by reviewing the key ideas behind Tasks 1–3, helping you connect governance leadership, policy structure, inventory discipline, metrics, and training into one coherent program model that AAISM tests through scenario-based “best answer” logic. You’ll reinforce how a governance charter sets authority and scope, how policies become enforceable standards and procedures, and how asset inventory and classification drive control selection and monitoring priorities. We’ll tie metrics to governance routines so measurement produces decisions, not just reports, and we’ll connect training and acceptable use guidance to human-driven risk controls that prevent misuse before it becomes an incident. The review uses practical examples like approving a new AI use case, responding to a vendor update, and correcting inventory drift, so you can quickly identify which task is being tested and what the most defensible response looks like in both exam and real operations. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode consolidates Domain 1 by reviewing the key ideas behind Tasks 1–3, helping you connect governance leadership, policy structure, inventory discipline, metrics, and training into one coherent program model that AAISM tests through scenario-based “best answer” logic. You’ll reinforce how a governance charter sets authority and scope, how policies become enforceable standards and procedures, and how asset inventory and classification drive control selection and monitoring priorities. We’ll tie metrics to governance routines so measurement produces decisions, not just reports, and we’ll connect training and acceptable use guidance to human-driven risk controls that prevent misuse before it becomes an incident. The review uses practical examples like approving a new AI use case, responding to a vendor update, and correcting inventory drift, so you can quickly identify which task is being tested and what the most defensible response looks like in both exam and real operations. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:40:38 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/fb4f8394/fbcc9bc5.mp3" length="37019517" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>925</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode consolidates Domain 1 by reviewing the key ideas behind Tasks 1–3, helping you connect governance leadership, policy structure, inventory discipline, metrics, and training into one coherent program model that AAISM tests through scenario-based “best answer” logic. You’ll reinforce how a governance charter sets authority and scope, how policies become enforceable standards and procedures, and how asset inventory and classification drive control selection and monitoring priorities. We’ll tie metrics to governance routines so measurement produces decisions, not just reports, and we’ll connect training and acceptable use guidance to human-driven risk controls that prevent misuse before it becomes an incident. The review uses practical examples like approving a new AI use case, responding to a vendor update, and correcting inventory drift, so you can quickly identify which task is being tested and what the most defensible response looks like in both exam and real operations. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/fb4f8394/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 37 — Investigate AI security incidents by collecting the right evidence fast (Task 15)</title>
      <itunes:episode>37</itunes:episode>
      <podcast:episode>37</podcast:episode>
      <itunes:title>Episode 37 — Investigate AI security incidents by collecting the right evidence fast (Task 15)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">777dfc41-12f2-4dcc-8528-7fe755ae1531</guid>
      <link>https://share.transistor.fm/s/70d4ddf5</link>
      <description>
        <![CDATA[<p>This episode introduces Task 15 by teaching how to investigate AI security incidents through fast, disciplined evidence collection, because AAISM expects you to prioritize what preserves truth and supports defensible decisions before focusing on attribution or deeper analysis. You’ll define the evidence categories that matter for AI incidents, including access and authentication logs, prompt and output records where permitted, model and configuration versions, data source and plugin activity, change management history, and monitoring alerts that show timeline and impact. We’ll walk through a scenario where sensitive data appears in an AI-generated response, and you’ll practice building an investigation timeline, identifying likely exposure paths such as prompt leakage or overly broad data connectors, and determining what to secure immediately to prevent evidence loss. Best practices include chain-of-custody discipline, documenting assumptions, and using structured triage questions so you can rapidly separate misuse, misconfiguration, and system faults under time pressure. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode introduces Task 15 by teaching how to investigate AI security incidents through fast, disciplined evidence collection, because AAISM expects you to prioritize what preserves truth and supports defensible decisions before focusing on attribution or deeper analysis. You’ll define the evidence categories that matter for AI incidents, including access and authentication logs, prompt and output records where permitted, model and configuration versions, data source and plugin activity, change management history, and monitoring alerts that show timeline and impact. We’ll walk through a scenario where sensitive data appears in an AI-generated response, and you’ll practice building an investigation timeline, identifying likely exposure paths such as prompt leakage or overly broad data connectors, and determining what to secure immediately to prevent evidence loss. Best practices include chain-of-custody discipline, documenting assumptions, and using structured triage questions so you can rapidly separate misuse, misconfiguration, and system faults under time pressure. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:40:50 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/70d4ddf5/fd8bccc2.mp3" length="39918054" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>997</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode introduces Task 15 by teaching how to investigate AI security incidents through fast, disciplined evidence collection, because AAISM expects you to prioritize what preserves truth and supports defensible decisions before focusing on attribution or deeper analysis. You’ll define the evidence categories that matter for AI incidents, including access and authentication logs, prompt and output records where permitted, model and configuration versions, data source and plugin activity, change management history, and monitoring alerts that show timeline and impact. We’ll walk through a scenario where sensitive data appears in an AI-generated response, and you’ll practice building an investigation timeline, identifying likely exposure paths such as prompt leakage or overly broad data connectors, and determining what to secure immediately to prevent evidence loss. Best practices include chain-of-custody discipline, documenting assumptions, and using structured triage questions so you can rapidly separate misuse, misconfiguration, and system faults under time pressure. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/70d4ddf5/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 38 — Document AI incidents clearly for regulators, contracts, and executive updates (Task 15)</title>
      <itunes:episode>38</itunes:episode>
      <podcast:episode>38</podcast:episode>
      <itunes:title>Episode 38 — Document AI incidents clearly for regulators, contracts, and executive updates (Task 15)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">07fc4d5b-4278-4faf-88cc-b0d59647ab80</guid>
      <link>https://share.transistor.fm/s/c7acda99</link>
      <description>
        <![CDATA[<p>This episode focuses on Task 15 by explaining how to document AI incidents in a way that serves regulators, contracts, and executive stakeholders, because AAISM commonly tests whether you can turn technical facts into clear, auditable records without speculation or missing context. You’ll learn how to write incident documentation that captures what happened, what systems and data were affected, what controls failed or were bypassed, what containment actions were taken, and what evidence supports each statement, while keeping sensitive details appropriately controlled. We’ll use a scenario involving a third-party model service where prompt history retention creates unexpected exposure, and you’ll practice documenting the timeline, decision points, and vendor coordination steps so the record can stand up to external scrutiny. Troubleshooting covers typical documentation failures like mixing hypotheses with facts, omitting scope boundaries, failing to record approvals and communications, and not linking incident findings back to governance changes and control improvements. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on Task 15 by explaining how to document AI incidents in a way that serves regulators, contracts, and executive stakeholders, because AAISM commonly tests whether you can turn technical facts into clear, auditable records without speculation or missing context. You’ll learn how to write incident documentation that captures what happened, what systems and data were affected, what controls failed or were bypassed, what containment actions were taken, and what evidence supports each statement, while keeping sensitive details appropriately controlled. We’ll use a scenario involving a third-party model service where prompt history retention creates unexpected exposure, and you’ll practice documenting the timeline, decision points, and vendor coordination steps so the record can stand up to external scrutiny. Troubleshooting covers typical documentation failures like mixing hypotheses with facts, omitting scope boundaries, failing to record approvals and communications, and not linking incident findings back to governance changes and control improvements. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:41:08 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/c7acda99/18a4f770.mp3" length="38230557" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>955</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on Task 15 by explaining how to document AI incidents in a way that serves regulators, contracts, and executive stakeholders, because AAISM commonly tests whether you can turn technical facts into clear, auditable records without speculation or missing context. You’ll learn how to write incident documentation that captures what happened, what systems and data were affected, what controls failed or were bypassed, what containment actions were taken, and what evidence supports each statement, while keeping sensitive details appropriately controlled. We’ll use a scenario involving a third-party model service where prompt history retention creates unexpected exposure, and you’ll practice documenting the timeline, decision points, and vendor coordination steps so the record can stand up to external scrutiny. Troubleshooting covers typical documentation failures like mixing hypotheses with facts, omitting scope boundaries, failing to record approvals and communications, and not linking incident findings back to governance changes and control improvements. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/c7acda99/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 39 — Report AI security incidents on time without losing accuracy (Task 15)</title>
      <itunes:episode>39</itunes:episode>
      <podcast:episode>39</podcast:episode>
      <itunes:title>Episode 39 — Report AI security incidents on time without losing accuracy (Task 15)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">0dbb97c3-f9ea-450e-a525-c6e73dcc1835</guid>
      <link>https://share.transistor.fm/s/a43622d0</link>
      <description>
        <![CDATA[<p>This episode teaches how to report AI security incidents on time while maintaining accuracy, aligning with Task 15 and reflecting how AAISM balances speed, governance, and evidence when deadlines are driven by regulation, contracts, or internal escalation policies. You’ll learn how to manage reporting with incomplete information by clearly separating confirmed facts from open questions, defining what “initial notification” must include, and setting expectations for follow-up updates as investigation progresses. We’ll work through a scenario where a suspicious data access pattern suggests possible prompt exfiltration, and you’ll practice deciding when to notify legal, privacy, and leadership, how to coordinate with a vendor without losing control of messaging, and how to ensure that rapid reporting does not introduce contradictions that damage credibility later. Best practices include predefined reporting templates, approval pathways, and a communication cadence that matches governance routines, so the organization meets obligations without guessing. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches how to report AI security incidents on time while maintaining accuracy, aligning with Task 15 and reflecting how AAISM balances speed, governance, and evidence when deadlines are driven by regulation, contracts, or internal escalation policies. You’ll learn how to manage reporting with incomplete information by clearly separating confirmed facts from open questions, defining what “initial notification” must include, and setting expectations for follow-up updates as investigation progresses. We’ll work through a scenario where a suspicious data access pattern suggests possible prompt exfiltration, and you’ll practice deciding when to notify legal, privacy, and leadership, how to coordinate with a vendor without losing control of messaging, and how to ensure that rapid reporting does not introduce contradictions that damage credibility later. Best practices include predefined reporting templates, approval pathways, and a communication cadence that matches governance routines, so the organization meets obligations without guessing. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:41:21 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/a43622d0/02f70be9.mp3" length="38097819" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>952</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches how to report AI security incidents on time while maintaining accuracy, aligning with Task 15 and reflecting how AAISM balances speed, governance, and evidence when deadlines are driven by regulation, contracts, or internal escalation policies. You’ll learn how to manage reporting with incomplete information by clearly separating confirmed facts from open questions, defining what “initial notification” must include, and setting expectations for follow-up updates as investigation progresses. We’ll work through a scenario where a suspicious data access pattern suggests possible prompt exfiltration, and you’ll practice deciding when to notify legal, privacy, and leadership, how to coordinate with a vendor without losing control of messaging, and how to ensure that rapid reporting does not introduce contradictions that damage credibility later. Best practices include predefined reporting templates, approval pathways, and a communication cadence that matches governance routines, so the organization meets obligations without guessing. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/a43622d0/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 40 — Contain AI incidents quickly by limiting access and stopping risky flows (Task 16)</title>
      <itunes:episode>40</itunes:episode>
      <podcast:episode>40</podcast:episode>
      <itunes:title>Episode 40 — Contain AI incidents quickly by limiting access and stopping risky flows (Task 16)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">e5b05d47-ed3a-4998-8d3a-902b69864de1</guid>
      <link>https://share.transistor.fm/s/28aaaa67</link>
      <description>
        <![CDATA[<p>This episode introduces Task 16 by focusing on rapid containment actions for AI incidents, because AAISM questions often test whether you can stop harm first by limiting access and risky data flows while preserving evidence and keeping governance decision rights intact. You’ll define containment for AI contexts, including disabling compromised accounts, revoking or narrowing plugin and connector permissions, pausing data ingestion or retraining pipelines, rolling back risky configuration changes, and placing guardrails on outputs when safety or leakage risk is elevated. We’ll use a scenario where an internal assistant is suspected of exposing confidential documents through an overly broad search connector, and you’ll practice the containment sequence: isolate access paths, validate scope, coordinate approvals, and document actions so containment is defensible and reversible. Troubleshooting covers common pitfalls like shutting down logging, overcorrecting without understanding dependencies, and failing to communicate containment status to stakeholders who must make risk decisions during recovery. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode introduces Task 16 by focusing on rapid containment actions for AI incidents, because AAISM questions often test whether you can stop harm first by limiting access and risky data flows while preserving evidence and keeping governance decision rights intact. You’ll define containment for AI contexts, including disabling compromised accounts, revoking or narrowing plugin and connector permissions, pausing data ingestion or retraining pipelines, rolling back risky configuration changes, and placing guardrails on outputs when safety or leakage risk is elevated. We’ll use a scenario where an internal assistant is suspected of exposing confidential documents through an overly broad search connector, and you’ll practice the containment sequence: isolate access paths, validate scope, coordinate approvals, and document actions so containment is defensible and reversible. Troubleshooting covers common pitfalls like shutting down logging, overcorrecting without understanding dependencies, and failing to communicate containment status to stakeholders who must make risk decisions during recovery. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:41:38 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/28aaaa67/6f6b8cb0.mp3" length="43101860" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>1077</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode introduces Task 16 by focusing on rapid containment actions for AI incidents, because AAISM questions often test whether you can stop harm first by limiting access and stopping risky data flows while preserving evidence and keeping governance decision rights intact. You’ll define containment for AI contexts, including disabling compromised accounts, revoking or narrowing plugin and connector permissions, pausing data ingestion or retraining pipelines, rolling back risky configuration changes, and placing guardrails on outputs when safety or leakage risk is elevated. We’ll use a scenario where an internal assistant is suspected of exposing confidential documents through an overly broad search connector, and you’ll practice the containment sequence: isolate access paths, validate scope, coordinate approvals, and document actions so containment is defensible and reversible. Troubleshooting covers common pitfalls like shutting down logging, overcorrecting without understanding dependencies, and failing to communicate containment status to stakeholders who must make risk decisions during recovery. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/28aaaa67/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 41 — Notify and escalate during AI incidents with the right triggers (Task 16)</title>
      <itunes:episode>41</itunes:episode>
      <podcast:episode>41</podcast:episode>
      <itunes:title>Episode 41 — Notify and escalate during AI incidents with the right triggers (Task 16)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2d18df66-d07a-4523-8be9-66b65df1cc3b</guid>
      <link>https://share.transistor.fm/s/4279825f</link>
      <description>
        <![CDATA[<p>This episode covers Task 16 by teaching how to notify and escalate during AI incidents using the right triggers, because AAISM often tests whether you can recognize when an AI issue crosses the threshold from an “operational anomaly” to a “security incident” that requires formal governance, legal, privacy, or executive involvement. You’ll define escalation triggers such as confirmed or suspected sensitive data exposure through prompts or outputs, unauthorized access to model endpoints or connected data sources, evidence of prompt injection or jailbreak attempts at scale, unexpected vendor behavior that impacts confidentiality, and safety failures that create harm or regulatory risk. We’ll use a scenario where a customer chatbot begins revealing internal ticket summaries, and you’ll practice deciding who must be notified first, what facts must be confirmed before broad communication, and how to preserve evidence while reducing ongoing impact. Best practices include predefined severity thresholds, clear ownership pathways, and a cadence for updates that stays accurate as new evidence emerges. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode covers Task 16 by teaching how to notify and escalate during AI incidents using the right triggers, because AAISM often tests whether you can recognize when an AI issue crosses the threshold from an “operational anomaly” to a “security incident” that requires formal governance, legal, privacy, or executive involvement. You’ll define escalation triggers such as confirmed or suspected sensitive data exposure through prompts or outputs, unauthorized access to model endpoints or connected data sources, evidence of prompt injection or jailbreak attempts at scale, unexpected vendor behavior that impacts confidentiality, and safety failures that create harm or regulatory risk. We’ll use a scenario where a customer chatbot begins revealing internal ticket summaries, and you’ll practice deciding who must be notified first, what facts must be confirmed before broad communication, and how to preserve evidence while reducing ongoing impact. Best practices include predefined severity thresholds, clear ownership pathways, and a cadence for updates that stays accurate as new evidence emerges. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:41:51 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/4279825f/e80974de.mp3" length="38099915" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>952</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode covers Task 16 by teaching how to notify and escalate during AI incidents using the right triggers, because AAISM often tests whether you can recognize when an AI issue crosses the threshold from an “operational anomaly” to a “security incident” that requires formal governance, legal, privacy, or executive involvement. You’ll define escalation triggers such as confirmed or suspected sensitive data exposure through prompts or outputs, unauthorized access to model endpoints or connected data sources, evidence of prompt injection or jailbreak attempts at scale, unexpected vendor behavior that impacts confidentiality, and safety failures that create harm or regulatory risk. We’ll use a scenario where a customer chatbot begins revealing internal ticket summaries, and you’ll practice deciding who must be notified first, what facts must be confirmed before broad communication, and how to preserve evidence while reducing ongoing impact. Best practices include predefined severity thresholds, clear ownership pathways, and a cadence for updates that stays accurate as new evidence emerges. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/4279825f/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 42 — Eradicate root causes and recover safely after AI security incidents (Task 16)</title>
      <itunes:episode>42</itunes:episode>
      <podcast:episode>42</podcast:episode>
      <itunes:title>Episode 42 — Eradicate root causes and recover safely after AI security incidents (Task 16)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a21f4bc0-87af-4cf0-9397-ec1c25df8f61</guid>
      <link>https://share.transistor.fm/s/5024f21c</link>
      <description>
        <![CDATA[<p>This episode focuses on Task 16 by explaining how to eradicate root causes and recover safely after an AI security incident, because AAISM expects you to move beyond containment into durable fixes that prevent recurrence while maintaining evidence and governance discipline. You’ll learn how to distinguish symptom fixes, like disabling a feature, from root-cause eradication actions, like correcting overbroad connector permissions, closing misconfigured logging paths, removing poisoned data from pipelines, rotating credentials, and tightening change control for model updates and prompt templates. We’ll walk through a scenario where an internal assistant was exploited through a prompt injection path that caused it to query sensitive repositories, and you’ll practice selecting recovery steps that restore service in a controlled way, validate the system’s behavior under test conditions, and document the decisions and approvals that justify returning to normal operations. Troubleshooting emphasizes avoiding rushed re-enablement, incomplete access cleanup, and “silent” vendor changes that reintroduce exposure. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on Task 16 by explaining how to eradicate root causes and recover safely after an AI security incident, because AAISM expects you to move beyond containment into durable fixes that prevent recurrence while maintaining evidence and governance discipline. You’ll learn how to distinguish symptom fixes, like disabling a feature, from root-cause eradication actions, like correcting overbroad connector permissions, closing misconfigured logging paths, removing poisoned data from pipelines, rotating credentials, and tightening change control for model updates and prompt templates. We’ll walk through a scenario where an internal assistant was exploited through a prompt injection path that caused it to query sensitive repositories, and you’ll practice selecting recovery steps that restore service in a controlled way, validate the system’s behavior under test conditions, and document the decisions and approvals that justify returning to normal operations. Troubleshooting emphasizes avoiding rushed re-enablement, incomplete access cleanup, and “silent” vendor changes that reintroduce exposure. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:42:03 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/5024f21c/03ee2d2b.mp3" length="34137672" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>853</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on Task 16 by explaining how to eradicate root causes and recover safely after an AI security incident, because AAISM expects you to move beyond containment into durable fixes that prevent recurrence while maintaining evidence and governance discipline. You’ll learn how to distinguish symptom fixes, like disabling a feature, from root-cause eradication actions, like correcting overbroad connector permissions, closing misconfigured logging paths, removing poisoned data from pipelines, rotating credentials, and tightening change control for model updates and prompt templates. We’ll walk through a scenario where an internal assistant was exploited through a prompt injection path that caused it to query sensitive repositories, and you’ll practice selecting recovery steps that restore service in a controlled way, validate the system’s behavior under test conditions, and document the decisions and approvals that justify returning to normal operations. Troubleshooting emphasizes avoiding rushed re-enablement, incomplete access cleanup, and “silent” vendor changes that reintroduce exposure. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/5024f21c/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 43 — Add AI systems to business continuity plans without hidden weak points (Task 17)</title>
      <itunes:episode>43</itunes:episode>
      <podcast:episode>43</podcast:episode>
      <itunes:title>Episode 43 — Add AI systems to business continuity plans without hidden weak points (Task 17)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">959a165f-114b-4f8f-8043-f02dfb154c68</guid>
      <link>https://share.transistor.fm/s/afa43e86</link>
      <description>
        <![CDATA[<p>This episode addresses Task 17 by teaching how to add AI systems to business continuity plans without hidden weak points, because AAISM tests whether you can treat AI services as real dependencies with failure modes, not optional features that can be ignored during outages. You’ll define what business continuity means for AI-enabled processes by identifying which business functions rely on inference services, data pipelines, model hosting, identity providers, and third-party connectors, then mapping how those dependencies fail and what “acceptable” degraded operation looks like. We’ll use a scenario where a support organization relies on an AI assistant for case triage and knowledge retrieval, and you’ll practice planning continuity controls such as fallback workflows, manual validation gates, alternate data access methods, and communication plans that keep service safe when AI is unavailable or unreliable. Exam-wise, you’ll learn to choose answers that formalize continuity ownership, testing, and documentation, rather than relying on informal “we’ll handle it” assumptions. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode addresses Task 17 by teaching how to add AI systems to business continuity plans without hidden weak points, because AAISM tests whether you can treat AI services as real dependencies with failure modes, not optional features that can be ignored during outages. You’ll define what business continuity means for AI-enabled processes by identifying which business functions rely on inference services, data pipelines, model hosting, identity providers, and third-party connectors, then mapping how those dependencies fail and what “acceptable” degraded operation looks like. We’ll use a scenario where a support organization relies on an AI assistant for case triage and knowledge retrieval, and you’ll practice planning continuity controls such as fallback workflows, manual validation gates, alternate data access methods, and communication plans that keep service safe when AI is unavailable or unreliable. Exam-wise, you’ll learn to choose answers that formalize continuity ownership, testing, and documentation, rather than relying on informal “we’ll handle it” assumptions. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:42:21 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/afa43e86/1427bc6f.mp3" length="38255619" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>956</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode addresses Task 17 by teaching how to add AI systems to business continuity plans without hidden weak points, because AAISM tests whether you can treat AI services as real dependencies with failure modes, not optional features that can be ignored during outages. You’ll define what business continuity means for AI-enabled processes by identifying which business functions rely on inference services, data pipelines, model hosting, identity providers, and third-party connectors, then mapping how those dependencies fail and what “acceptable” degraded operation looks like. We’ll use a scenario where a support organization relies on an AI assistant for case triage and knowledge retrieval, and you’ll practice planning continuity controls such as fallback workflows, manual validation gates, alternate data access methods, and communication plans that keep service safe when AI is unavailable or unreliable. Exam-wise, you’ll learn to choose answers that formalize continuity ownership, testing, and documentation, rather than relying on informal “we’ll handle it” assumptions. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/afa43e86/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 44 — Set recovery goals for AI services, data pipelines, and vendors (Task 17)</title>
      <itunes:episode>44</itunes:episode>
      <podcast:episode>44</podcast:episode>
      <itunes:title>Episode 44 — Set recovery goals for AI services, data pipelines, and vendors (Task 17)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">005d5ee3-28ac-49d7-9b5e-50881146b4ab</guid>
      <link>https://share.transistor.fm/s/4d786347</link>
      <description>
        <![CDATA[<p>This episode focuses on Task 17 by showing how to set recovery goals for AI services, data pipelines, and vendors, because AAISM expects you to define recovery in measurable terms that match business impact and risk tolerance instead of using vague “restore ASAP” language. You’ll learn how to express recovery goals through service priorities, maximum tolerable downtime, data freshness expectations, acceptable loss windows for pipeline states, and clear responsibilities when recovery depends on third-party platforms. We’ll walk through a scenario where an AI feature depends on a vendor-hosted model plus internal retrieval infrastructure, and you’ll practice setting goals that account for both components, including what must be restored first, what can remain degraded, and what safety controls must be verified before returning to full service. Troubleshooting covers common gaps like ignoring downstream business processes, assuming vendor SLAs cover your specific use case, and failing to test recovery goals against real operational constraints and change windows. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on Task 17 by showing how to set recovery goals for AI services, data pipelines, and vendors, because AAISM expects you to define recovery in measurable terms that match business impact and risk tolerance instead of using vague “restore ASAP” language. You’ll learn how to express recovery goals through service priorities, maximum tolerable downtime, data freshness expectations, acceptable loss windows for pipeline states, and clear responsibilities when recovery depends on third-party platforms. We’ll walk through a scenario where an AI feature depends on a vendor-hosted model plus internal retrieval infrastructure, and you’ll practice setting goals that account for both components, including what must be restored first, what can remain degraded, and what safety controls must be verified before returning to full service. Troubleshooting covers common gaps like ignoring downstream business processes, assuming vendor SLAs cover your specific use case, and failing to test recovery goals against real operational constraints and change windows. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:42:38 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/4d786347/1e2ab410.mp3" length="35475132" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>886</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on Task 17 by showing how to set recovery goals for AI services, data pipelines, and vendors, because AAISM expects you to define recovery in measurable terms that match business impact and risk tolerance instead of using vague “restore ASAP” language. You’ll learn how to express recovery goals through service priorities, maximum tolerable downtime, data freshness expectations, acceptable loss windows for pipeline states, and clear responsibilities when recovery depends on third-party platforms. We’ll walk through a scenario where an AI feature depends on a vendor-hosted model plus internal retrieval infrastructure, and you’ll practice setting goals that account for both components, including what must be restored first, what can remain degraded, and what safety controls must be verified before returning to full service. Troubleshooting covers common gaps like ignoring downstream business processes, assuming vendor SLAs cover your specific use case, and failing to test recovery goals against real operational constraints and change windows. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/4d786347/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 45 — Plan for vendor outages and safe degraded modes in AI systems (Task 17)</title>
      <itunes:episode>45</itunes:episode>
      <podcast:episode>45</podcast:episode>
      <itunes:title>Episode 45 — Plan for vendor outages and safe degraded modes in AI systems (Task 17)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">998debbf-5a75-4801-a47a-046fcef92a1f</guid>
      <link>https://share.transistor.fm/s/cd2be30a</link>
      <description>
        <![CDATA[<p>This episode covers Task 17 by teaching how to plan for vendor outages and safe degraded modes, because many AI deployments depend on external model services, and AAISM scenarios often hinge on whether you can keep operations safe when a vendor fails or changes behavior unexpectedly. You’ll define “safe degraded mode” as an intentionally designed fallback that reduces functionality while preserving confidentiality, integrity, and acceptable decision quality, such as disabling automated actions, restricting sensitive queries, switching to cached content, or routing to human review. We’ll use a scenario where a vendor model endpoint becomes unavailable during peak usage, and you’ll practice deciding what the system should do, how to communicate limitations to users, and how to prevent risky behavior like bypassing guardrails or logging sensitive content to uncontrolled locations during troubleshooting. Best practices include pre-approved failover decisions, vendor dependency mapping, periodic testing, and documented criteria for returning from degraded to normal operation. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode covers Task 17 by teaching how to plan for vendor outages and safe degraded modes, because many AI deployments depend on external model services, and AAISM scenarios often hinge on whether you can keep operations safe when a vendor fails or changes behavior unexpectedly. You’ll define “safe degraded mode” as an intentionally designed fallback that reduces functionality while preserving confidentiality, integrity, and acceptable decision quality, such as disabling automated actions, restricting sensitive queries, switching to cached content, or routing to human review. We’ll use a scenario where a vendor model endpoint becomes unavailable during peak usage, and you’ll practice deciding what the system should do, how to communicate limitations to users, and how to prevent risky behavior like bypassing guardrails or logging sensitive content to uncontrolled locations during troubleshooting. Best practices include pre-approved failover decisions, vendor dependency mapping, periodic testing, and documented criteria for returning from degraded to normal operation. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:42:57 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/cd2be30a/c26458de.mp3" length="33277707" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>831</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode covers Task 17 by teaching how to plan for vendor outages and safe degraded modes, because many AI deployments depend on external model services, and AAISM scenarios often hinge on whether you can keep operations safe when a vendor fails or changes behavior unexpectedly. You’ll define “safe degraded mode” as an intentionally designed fallback that reduces functionality while preserving confidentiality, integrity, and acceptable decision quality, such as disabling automated actions, restricting sensitive queries, switching to cached content, or routing to human review. We’ll use a scenario where a vendor model endpoint becomes unavailable during peak usage, and you’ll practice deciding what the system should do, how to communicate limitations to users, and how to prevent risky behavior like bypassing guardrails or logging sensitive content to uncontrolled locations during troubleshooting. Best practices include pre-approved failover decisions, vendor dependency mapping, periodic testing, and documented criteria for returning from degraded to normal operation. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/cd2be30a/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 46 — Domain 1 recap drill: pick the right task under pressure (Tasks 1–21)</title>
      <itunes:episode>46</itunes:episode>
      <podcast:episode>46</podcast:episode>
      <itunes:title>Episode 46 — Domain 1 recap drill: pick the right task under pressure (Tasks 1–21)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c1b52701-b5ac-4bef-bb5a-652a2fc5a812</guid>
      <link>https://share.transistor.fm/s/71c95d00</link>
      <description>
        <![CDATA[<p>This episode is a Domain 1 recap drill designed to improve speed and accuracy under pressure by training you to identify which task is being tested and what the most defensible next step looks like across Tasks 1–21. You’ll rehearse fast classification of scenarios into governance and program management, policy and procedure operationalization, framework and ethics alignment, impact assessment discipline, inventory and data risk controls, acceptable use and training, and program integration with tools and metrics. We’ll use short, realistic prompts—like “a team wants to fine-tune on customer data,” “a vendor updates a model,” or “an assistant output contains sensitive content”—and you’ll practice deciding whether the best answer is to establish ownership, update policy artifacts, re-run assessments, adjust monitoring, or escalate and document. The goal is to build a repeatable mental checklist that prevents you from over-indexing on technical fixes when the exam is testing governance, evidence, and accountability. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode is a Domain 1 recap drill designed to improve speed and accuracy under pressure by training you to identify which task is being tested and what the most defensible next step looks like across Tasks 1–21. You’ll rehearse fast classification of scenarios into governance and program management, policy and procedure operationalization, framework and ethics alignment, impact assessment discipline, inventory and data risk controls, acceptable use and training, and program integration with tools and metrics. We’ll use short, realistic prompts—like “a team wants to fine-tune on customer data,” “a vendor updates a model,” or “an assistant output contains sensitive content”—and you’ll practice deciding whether the best answer is to establish ownership, update policy artifacts, re-run assessments, adjust monitoring, or escalate and document. The goal is to build a repeatable mental checklist that prevents you from over-indexing on technical fixes when the exam is testing governance, evidence, and accountability. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:43:17 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/71c95d00/87d353a0.mp3" length="35513785" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>887</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode is a Domain 1 recap drill designed to improve speed and accuracy under pressure by training you to identify which task is being tested and what the most defensible next step looks like across Tasks 1–21. You’ll rehearse fast classification of scenarios into governance and program management, policy and procedure operationalization, framework and ethics alignment, impact assessment discipline, inventory and data risk controls, acceptable use and training, and program integration with tools and metrics. We’ll use short, realistic prompts—like “a team wants to fine-tune on customer data,” “a vendor updates a model,” or “an assistant output contains sensitive content”—and you’ll practice deciding whether the best answer is to establish ownership, update policy artifacts, re-run assessments, adjust monitoring, or escalate and document. The goal is to build a repeatable mental checklist that prevents you from over-indexing on technical fixes when the exam is testing governance, evidence, and accountability. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/71c95d00/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 47 — Domain 2 overview: manage AI risk while enabling business opportunity (Task 4)</title>
      <itunes:episode>47</itunes:episode>
      <podcast:episode>47</podcast:episode>
      <itunes:title>Episode 47 — Domain 2 overview: manage AI risk while enabling business opportunity (Task 4)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">49495323-5d94-4819-a85c-f255a0d64f6e</guid>
      <link>https://share.transistor.fm/s/9e923369</link>
      <description>
        <![CDATA[<p>This episode introduces Domain 2 through Task 4 by explaining how AAISM expects you to manage AI risk while enabling business opportunity, which means balancing innovation with disciplined risk management that leadership can defend. You’ll define AI risk in practical terms—uncertainty that impacts confidentiality, integrity, availability, safety, compliance, and reputation—then connect that to a risk management approach that starts at intake, continues through lifecycle controls, and remains measurable through monitoring and reporting. We’ll use a scenario where the business wants rapid rollout of an AI assistant across multiple departments, and you’ll practice deciding how to structure risk workflows so high-impact use cases get deeper review, approvals are explicit, and controls are proportional rather than blocking everything. Exam-wise, you’ll learn to prefer answers that create repeatable risk processes, clear ownership, and evidence-backed decisions, not ad hoc approvals driven by urgency. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode introduces Domain 2 through Task 4 by explaining how AAISM expects you to manage AI risk while enabling business opportunity, which means balancing innovation with disciplined risk management that leadership can defend. You’ll define AI risk in practical terms—uncertainty that impacts confidentiality, integrity, availability, safety, compliance, and reputation—then connect that to a risk management approach that starts at intake, continues through lifecycle controls, and remains measurable through monitoring and reporting. We’ll use a scenario where the business wants rapid rollout of an AI assistant across multiple departments, and you’ll practice deciding how to structure risk workflows so high-impact use cases get deeper review, approvals are explicit, and controls are proportional rather than blocking everything. Exam-wise, you’ll learn to prefer answers that create repeatable risk processes, clear ownership, and evidence-backed decisions, not ad hoc approvals driven by urgency. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:43:40 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/9e923369/b0995c8d.mp3" length="34333068" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>857</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode introduces Domain 2 through Task 4 by explaining how AAISM expects you to manage AI risk while enabling business opportunity, which means balancing innovation with disciplined risk management that leadership can defend. You’ll define AI risk in practical terms—uncertainty that impacts confidentiality, integrity, availability, safety, compliance, and reputation—then connect that to a risk management approach that starts at intake, continues through lifecycle controls, and remains measurable through monitoring and reporting. We’ll use a scenario where the business wants rapid rollout of an AI assistant across multiple departments, and you’ll practice deciding how to structure risk workflows so high-impact use cases get deeper review, approvals are explicit, and controls are proportional rather than blocking everything. Exam-wise, you’ll learn to prefer answers that create repeatable risk processes, clear ownership, and evidence-backed decisions, not ad hoc approvals driven by urgency. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/9e923369/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 48 — Run the AI risk management life cycle from intake to monitoring (Task 4)</title>
      <itunes:episode>48</itunes:episode>
      <podcast:episode>48</podcast:episode>
      <itunes:title>Episode 48 — Run the AI risk management life cycle from intake to monitoring (Task 4)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d683f227-71c2-4133-b47d-fd7724b34112</guid>
      <link>https://share.transistor.fm/s/9ddb8532</link>
      <description>
        <![CDATA[<p>This episode teaches Task 4 by walking through the AI risk management life cycle from intake to monitoring, because AAISM questions often test whether you can apply risk management as a continuous loop rather than a single assessment document. You’ll define the life cycle stages as intake and scope definition, risk identification, analysis and prioritization, treatment selection, control implementation, acceptance or escalation, and ongoing monitoring with feedback into governance. We’ll use a scenario where a team proposes a new retrieval-augmented assistant that connects to sensitive repositories, and you’ll practice identifying risk sources like access breadth, prompt leakage, output misuse, vendor logging, and change drift, then selecting treatment options that are measurable and owned. Troubleshooting emphasizes where organizations fail: skipping intake discipline, treating assessments as “check the box,” ignoring model and data changes that invalidate prior decisions, and monitoring without thresholds or response playbooks. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches Task 4 by walking through the AI risk management life cycle from intake to monitoring, because AAISM questions often test whether you can apply risk management as a continuous loop rather than a single assessment document. You’ll define the life cycle stages as intake and scope definition, risk identification, analysis and prioritization, treatment selection, control implementation, acceptance or escalation, and ongoing monitoring with feedback into governance. We’ll use a scenario where a team proposes a new retrieval-augmented assistant that connects to sensitive repositories, and you’ll practice identifying risk sources like access breadth, prompt leakage, output misuse, vendor logging, and change drift, then selecting treatment options that are measurable and owned. Troubleshooting emphasizes where organizations fail: skipping intake discipline, treating assessments as “check the box,” ignoring model and data changes that invalidate prior decisions, and monitoring without thresholds or response playbooks. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:43:53 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/9ddb8532/b20067c2.mp3" length="33576550" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>839</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches Task 4 by walking through the AI risk management life cycle from intake to monitoring, because AAISM questions often test whether you can apply risk management as a continuous loop rather than a single assessment document. You’ll define the life cycle stages as intake and scope definition, risk identification, analysis and prioritization, treatment selection, control implementation, acceptance or escalation, and ongoing monitoring with feedback into governance. We’ll use a scenario where a team proposes a new retrieval-augmented assistant that connects to sensitive repositories, and you’ll practice identifying risk sources like access breadth, prompt leakage, output misuse, vendor logging, and change drift, then selecting treatment options that are measurable and owned. Troubleshooting emphasizes where organizations fail: skipping intake discipline, treating assessments as “check the box,” ignoring model and data changes that invalidate prior decisions, and monitoring without thresholds or response playbooks. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/9ddb8532/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 49 — Connect AI risks to enterprise risk reporting and decision-making (Task 4)</title>
      <itunes:episode>49</itunes:episode>
      <podcast:episode>49</podcast:episode>
      <itunes:title>Episode 49 — Connect AI risks to enterprise risk reporting and decision-making (Task 4)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7937f21c-8a5f-4ec0-9cbf-f0c03138a533</guid>
      <link>https://share.transistor.fm/s/64d33126</link>
      <description>
        <![CDATA[<p>This episode focuses on Task 4 by showing how to connect AI risks to enterprise risk reporting and decision-making, because AAISM expects AI risk to be expressed in the same language leaders already use for prioritization, funding, and acceptance decisions. You’ll learn how to translate AI-specific concerns—like prompt injection, model drift, unsafe automation, and vendor dependency—into risk statements that include asset scope, threat or failure mode, business impact, likelihood drivers, current controls, residual risk, and clear ownership. We’ll walk through a scenario where a regulated business unit wants to use AI for customer interactions, and you’ll practice framing the risk so leadership can decide whether to proceed, what controls must be added, and what residual risk is acceptable. Best practices include consistent severity scales, mapping to enterprise risk categories, and documenting acceptance decisions with evidence so later audits or incidents don’t reveal hidden assumptions. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on Task 4 by showing how to connect AI risks to enterprise risk reporting and decision-making, because AAISM expects AI risk to be expressed in the same language leaders already use for prioritization, funding, and acceptance decisions. You’ll learn how to translate AI-specific concerns—like prompt injection, model drift, unsafe automation, and vendor dependency—into risk statements that include asset scope, threat or failure mode, business impact, likelihood drivers, current controls, residual risk, and clear ownership. We’ll walk through a scenario where a regulated business unit wants to use AI for customer interactions, and you’ll practice framing the risk so leadership can decide whether to proceed, what controls must be added, and what residual risk is acceptable. Best practices include consistent severity scales, mapping to enterprise risk categories, and documenting acceptance decisions with evidence so later audits or incidents don’t reveal hidden assumptions. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:44:12 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/64d33126/730f6475.mp3" length="31923525" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>797</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on Task 4 by showing how to connect AI risks to enterprise risk reporting and decision-making, because AAISM expects AI risk to be expressed in the same language leaders already use for prioritization, funding, and acceptance decisions. You’ll learn how to translate AI-specific concerns—like prompt injection, model drift, unsafe automation, and vendor dependency—into risk statements that include asset scope, threat or failure mode, business impact, likelihood drivers, current controls, residual risk, and clear ownership. We’ll walk through a scenario where a regulated business unit wants to use AI for customer interactions, and you’ll practice framing the risk so leadership can decide whether to proceed, what controls must be added, and what residual risk is acceptable. Best practices include consistent severity scales, mapping to enterprise risk categories, and documenting acceptance decisions with evidence so later audits or incidents don’t reveal hidden assumptions. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/64d33126/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 50 — Assign AI risk owners and approvals so accountability is never unclear (Task 4)</title>
      <itunes:episode>50</itunes:episode>
      <podcast:episode>50</podcast:episode>
      <itunes:title>Episode 50 — Assign AI risk owners and approvals so accountability is never unclear (Task 4)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b045b1fb-72ea-41e4-ba17-aaa62096ee1f</guid>
      <link>https://share.transistor.fm/s/748241f6</link>
      <description>
        <![CDATA[<p>This episode completes the set by teaching Task 4’s accountability core: assigning AI risk owners and approvals so responsibility is explicit, decisions are traceable, and risk acceptance is intentional rather than accidental. You’ll define what it means to “own” AI risk, including being accountable for controls, monitoring outcomes, exception handling, and lifecycle changes that alter exposure, and you’ll learn how approval pathways should work for new use cases, high-impact deployments, vendor changes, and policy exceptions. We’ll use a scenario where multiple teams share one AI platform, creating blurred lines between platform operators, business owners, and data owners, and you’ll practice building an approval model that clarifies who can accept risk, who must be consulted, and what evidence must be produced before approval is valid. Troubleshooting covers common breakdowns like shared ownership with no decision authority, approvals that ignore data and vendor dependencies, and “temporary” exceptions that never get re-evaluated. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode completes the set by teaching Task 4’s accountability core: assigning AI risk owners and approvals so responsibility is explicit, decisions are traceable, and risk acceptance is intentional rather than accidental. You’ll define what it means to “own” AI risk, including being accountable for controls, monitoring outcomes, exception handling, and lifecycle changes that alter exposure, and you’ll learn how approval pathways should work for new use cases, high-impact deployments, vendor changes, and policy exceptions. We’ll use a scenario where multiple teams share one AI platform, creating blurred lines between platform operators, business owners, and data owners, and you’ll practice building an approval model that clarifies who can accept risk, who must be consulted, and what evidence must be produced before approval is valid. Troubleshooting covers common breakdowns like shared ownership with no decision authority, approvals that ignore data and vendor dependencies, and “temporary” exceptions that never get re-evaluated. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:44:24 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/748241f6/44416249.mp3" length="30352009" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>758</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode completes the set by teaching Task 4’s accountability core: assigning AI risk owners and approvals so responsibility is explicit, decisions are traceable, and risk acceptance is intentional rather than accidental. You’ll define what it means to “own” AI risk, including being accountable for controls, monitoring outcomes, exception handling, and lifecycle changes that alter exposure, and you’ll learn how approval pathways should work for new use cases, high-impact deployments, vendor changes, and policy exceptions. We’ll use a scenario where multiple teams share one AI platform, creating blurred lines between platform operators, business owners, and data owners, and you’ll practice building an approval model that clarifies who can accept risk, who must be consulted, and what evidence must be produced before approval is valid. Troubleshooting covers common breakdowns like shared ownership with no decision authority, approvals that ignore data and vendor dependencies, and “temporary” exceptions that never get re-evaluated. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/748241f6/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 51 — Identify the AI threat landscape using realistic abuse cases (Task 5)</title>
      <itunes:episode>51</itunes:episode>
      <podcast:episode>51</podcast:episode>
      <itunes:title>Episode 51 — Identify the AI threat landscape using realistic abuse cases (Task 5)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">43ccb7ac-6bb0-4a6d-b61a-7c48c1573cb8</guid>
      <link>https://share.transistor.fm/s/e9a520e1</link>
      <description>
        <![CDATA[<p>This episode covers Task 5 by building a practical view of the AI threat landscape using realistic abuse cases, because AAIA expects you to recognize how AI systems can be attacked or misused without relying on vague “AI is risky” statements. You’ll define what a threat landscape means in exam terms: the set of credible threat actors, their objectives, and the tactics that can affect AI confidentiality, integrity, availability, safety, and compliance. We’ll walk through common abuse patterns such as prompt injection that manipulates tool use, data exfiltration through connectors and output channels, model misuse for prohibited content generation, and poisoning risks that degrade reliability over time. You’ll also learn how to describe threats in a structured way that maps to controls and evidence, so you can choose the best-answer response that reduces exposure and supports governance decisions. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode covers Task 5 by building a practical view of the AI threat landscape using realistic abuse cases, because AAIA expects you to recognize how AI systems can be attacked or misused without relying on vague “AI is risky” statements. You’ll define what a threat landscape means in exam terms: the set of credible threat actors, their objectives, and the tactics that can affect AI confidentiality, integrity, availability, safety, and compliance. We’ll walk through common abuse patterns such as prompt injection that manipulates tool use, data exfiltration through connectors and output channels, model misuse for prohibited content generation, and poisoning risks that degrade reliability over time. You’ll also learn how to describe threats in a structured way that maps to controls and evidence, so you can choose the best-answer response that reduces exposure and supports governance decisions. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:44:39 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/e9a520e1/48cbaeee.mp3" length="30874438" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>771</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode covers Task 5 by building a practical view of the AI threat landscape using realistic abuse cases, because AAIA expects you to recognize how AI systems can be attacked or misused without relying on vague “AI is risky” statements. You’ll define what a threat landscape means in exam terms: the set of credible threat actors, their objectives, and the tactics that can affect AI confidentiality, integrity, availability, safety, and compliance. We’ll walk through common abuse patterns such as prompt injection that manipulates tool use, data exfiltration through connectors and output channels, model misuse for prohibited content generation, and poisoning risks that degrade reliability over time. You’ll also learn how to describe threats in a structured way that maps to controls and evidence, so you can choose the best-answer response that reduces exposure and supports governance decisions. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/e9a520e1/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 52 — Assess AI threats by likelihood and impact, not hype and fear (Task 5)</title>
      <itunes:episode>52</itunes:episode>
      <podcast:episode>52</podcast:episode>
      <itunes:title>Episode 52 — Assess AI threats by likelihood and impact, not hype and fear (Task 5)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">acf64c11-c865-407d-9f90-ff2defa2d77e</guid>
      <link>https://share.transistor.fm/s/3d25e4fe</link>
      <description>
        <![CDATA[<p>This episode focuses on Task 5 by teaching you to assess AI threats using likelihood and impact rather than hype, because AAIA questions often include distractors that overreact to novel terminology while ignoring practical risk drivers. You’ll learn how to evaluate whether a threat is credible in your environment by examining access paths, data sensitivity, control strength, user behavior, vendor constraints, and how quickly failures could spread across business processes. We’ll apply this to scenarios like a public-facing chatbot, an internal assistant connected to sensitive repositories, and a model hosted by a third party with limited logging, showing how the same threat can be high or low risk depending on context. Best practices include documenting assumptions, using consistent scoring language that leadership understands, and selecting treatments proportional to risk, so the exam “best answer” is the one that is defensible and operationally achievable. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on Task 5 by teaching you to assess AI threats using likelihood and impact rather than hype, because AAIA questions often include distractors that overreact to novel terminology while ignoring practical risk drivers. You’ll learn how to evaluate whether a threat is credible in your environment by examining access paths, data sensitivity, control strength, user behavior, vendor constraints, and how quickly failures could spread across business processes. We’ll apply this to scenarios like a public-facing chatbot, an internal assistant connected to sensitive repositories, and a model hosted by a third party with limited logging, showing how the same threat can be high or low risk depending on context. Best practices include documenting assumptions, using consistent scoring language that leadership understands, and selecting treatments proportional to risk, so the exam “best answer” is the one that is defensible and operationally achievable. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:44:52 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/3d25e4fe/48c5413e.mp3" length="38368448" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>958</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on Task 5 by teaching you to assess AI threats using likelihood and impact rather than hype, because AAIA questions often include distractors that overreact to novel terminology while ignoring practical risk drivers. You’ll learn how to evaluate whether a threat is credible in your environment by examining access paths, data sensitivity, control strength, user behavior, vendor constraints, and how quickly failures could spread across business processes. We’ll apply this to scenarios like a public-facing chatbot, an internal assistant connected to sensitive repositories, and a model hosted by a third party with limited logging, showing how the same threat can be high or low risk depending on context. Best practices include documenting assumptions, using consistent scoring language that leadership understands, and selecting treatments proportional to risk, so the exam “best answer” is the one that is defensible and operationally achievable. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/3d25e4fe/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 53 — Keep threat understanding current as attackers and tools evolve (Task 5)</title>
      <itunes:episode>53</itunes:episode>
      <podcast:episode>53</podcast:episode>
      <itunes:title>Episode 53 — Keep threat understanding current as attackers and tools evolve (Task 5)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2315127b-3931-490b-b05c-555173536fa0</guid>
      <link>https://share.transistor.fm/s/a95d7849</link>
      <description>
        <![CDATA[<p>This episode addresses Task 5 by showing how to keep threat understanding current as attackers and tools evolve, because AI systems and their integrations change quickly and AAIA expects you to maintain an updated threat view that feeds governance, monitoring, and reassessment decisions. You’ll learn what “current” means operationally: regularly reviewing new abuse patterns, vendor capability changes, newly exposed connectors, shifts in user adoption, and incident learnings that reveal weak assumptions. We’ll walk through a scenario where a vendor adds new tool-calling features and the organization expands access to additional data sources, and you’ll practice identifying how the threat model changes, what controls require updates, and what evidence must be refreshed to remain defensible. Troubleshooting covers organizations that treat threat work as a one-time artifact, leading to stale risk decisions and blind spots that surface during audits or incidents. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode addresses Task 5 by showing how to keep threat understanding current as attackers and tools evolve, because AI systems and their integrations change quickly and AAIA expects you to maintain an updated threat view that feeds governance, monitoring, and reassessment decisions. You’ll learn what “current” means operationally: regularly reviewing new abuse patterns, vendor capability changes, newly exposed connectors, shifts in user adoption, and incident learnings that reveal weak assumptions. We’ll walk through a scenario where a vendor adds new tool-calling features and the organization expands access to additional data sources, and you’ll practice identifying how the threat model changes, what controls require updates, and what evidence must be refreshed to remain defensible. Troubleshooting covers organizations that treat threat work as a one-time artifact, leading to stale risk decisions and blind spots that surface during audits or incidents. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:45:06 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/a95d7849/0daf7dac.mp3" length="43004664" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>1074</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode addresses Task 5 by showing how to keep threat understanding current as attackers and tools evolve, because AI systems and their integrations change quickly and AAIA expects you to maintain an updated threat view that feeds governance, monitoring, and reassessment decisions. You’ll learn what “current” means operationally: regularly reviewing new abuse patterns, vendor capability changes, newly exposed connectors, shifts in user adoption, and incident learnings that reveal weak assumptions. We’ll walk through a scenario where a vendor adds new tool-calling features and the organization expands access to additional data sources, and you’ll practice identifying how the threat model changes, what controls require updates, and what evidence must be refreshed to remain defensible. Troubleshooting covers organizations that treat threat work as a one-time artifact, leading to stale risk decisions and blind spots that surface during audits or incidents. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/a95d7849/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 54 — Monitor internal changes that require AI risk reassessment (Task 6)</title>
      <itunes:episode>54</itunes:episode>
      <podcast:episode>54</podcast:episode>
      <itunes:title>Episode 54 — Monitor internal changes that require AI risk reassessment (Task 6)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">72db9033-8091-488f-8887-00cf7d655aa9</guid>
      <link>https://share.transistor.fm/s/24f0408e</link>
      <description>
        <![CDATA[<p>This episode teaches Task 6 by explaining how to monitor internal changes that should trigger AI risk reassessment, because AAIA commonly tests whether you can recognize when a prior approval is no longer valid due to scope, data, or control changes. You’ll define internal change triggers such as expanding the user population, adding new data sources, enabling plugins or connectors, changing prompt templates and guardrails, modifying retention settings, altering access roles, or integrating AI output into automated business decisions. We’ll use a scenario where a team quietly broadens an assistant’s permissions to speed up workflows, and you’ll practice identifying the reassessment points, who must approve updated risk decisions, and which controls and evidence must be revisited before proceeding. Best practices include linking reassessment triggers to change management, keeping inventory and documentation synchronized, and ensuring monitoring outputs can prove when changes occurred and what actions followed. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches Task 6 by explaining how to monitor internal changes that should trigger AI risk reassessment, because AAIA commonly tests whether you can recognize when a prior approval is no longer valid due to scope, data, or control changes. You’ll define internal change triggers such as expanding the user population, adding new data sources, enabling plugins or connectors, changing prompt templates and guardrails, modifying retention settings, altering access roles, or integrating AI output into automated business decisions. We’ll use a scenario where a team quietly broadens an assistant’s permissions to speed up workflows, and you’ll practice identifying the reassessment points, who must approve updated risk decisions, and which controls and evidence must be revisited before proceeding. Best practices include linking reassessment triggers to change management, keeping inventory and documentation synchronized, and ensuring monitoring outputs can prove when changes occurred and what actions followed. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:45:19 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/24f0408e/9e17ddd7.mp3" length="37091577" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>926</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches Task 6 by explaining how to monitor internal changes that should trigger AI risk reassessment, because AAIA commonly tests whether you can recognize when a prior approval is no longer valid due to scope, data, or control changes. You’ll define internal change triggers such as expanding the user population, adding new data sources, enabling plugins or connectors, changing prompt templates and guardrails, modifying retention settings, altering access roles, or integrating AI output into automated business decisions. We’ll use a scenario where a team quietly broadens an assistant’s permissions to speed up workflows, and you’ll practice identifying the reassessment points, who must approve updated risk decisions, and which controls and evidence must be revisited before proceeding. Best practices include linking reassessment triggers to change management, keeping inventory and documentation synchronized, and ensuring monitoring outputs can prove when changes occurred and what actions followed. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/24f0408e/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 55 — Monitor external changes like laws, vendors, and new AI capabilities (Task 6)</title>
      <itunes:episode>55</itunes:episode>
      <podcast:episode>55</podcast:episode>
      <itunes:title>Episode 55 — Monitor external changes like laws, vendors, and new AI capabilities (Task 6)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f5e12560-f3ff-415c-8b01-85a670baf097</guid>
      <link>https://share.transistor.fm/s/a8807797</link>
      <description>
        <![CDATA[<p>This episode focuses on Task 6 by showing how external changes should drive AI risk reassessment, because AAIA expects you to manage risk in a live environment where laws, vendor terms, threat activity, and AI capabilities can shift without your internal teams making any code change. You’ll learn how to track external triggers such as new regulatory requirements, updated contract language, vendor model behavior changes, platform logging or retention changes, and newly disclosed vulnerabilities that affect hosted services or integrations. We’ll walk through a scenario where a vendor announces a major model update and revised data handling practices, and you’ll practice deciding what to reassess first, how to validate the impact on confidentiality and compliance, and how to document decisions so leadership can defend continued use or a pause. Troubleshooting emphasizes avoiding blind trust in vendor statements by requiring evidence, aligning reassessment to governance routines, and updating controls and training when external changes alter real exposure. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on Task 6 by showing how external changes should drive AI risk reassessment, because AAIA expects you to manage risk in a live environment where laws, vendor terms, threat activity, and AI capabilities can shift without your internal teams making any code change. You’ll learn how to track external triggers such as new regulatory requirements, updated contract language, vendor model behavior changes, platform logging or retention changes, and newly disclosed vulnerabilities that affect hosted services or integrations. We’ll walk through a scenario where a vendor announces a major model update and revised data handling practices, and you’ll practice deciding what to reassess first, how to validate the impact on confidentiality and compliance, and how to document decisions so leadership can defend continued use or a pause. Troubleshooting emphasizes avoiding blind trust in vendor statements by requiring evidence, aligning reassessment to governance routines, and updating controls and training when external changes alter real exposure. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:45:32 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/a8807797/664324d1.mp3" length="42495809" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>1062</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on Task 6 by showing how external changes should drive AI risk reassessment, because AAIA expects you to manage risk in a live environment where laws, vendor terms, threat activity, and AI capabilities can shift without your internal teams making any code change. You’ll learn how to track external triggers such as new regulatory requirements, updated contract language, vendor model behavior changes, platform logging or retention changes, and newly disclosed vulnerabilities that affect hosted services or integrations. We’ll walk through a scenario where a vendor announces a major model update and revised data handling practices, and you’ll practice deciding what to reassess first, how to validate the impact on confidentiality and compliance, and how to document decisions so leadership can defend continued use or a pause. Troubleshooting emphasizes avoiding blind trust in vendor statements by requiring evidence, aligning reassessment to governance routines, and updating controls and training when external changes alter real exposure. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/a8807797/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 56 — Build a reassessment cadence that prevents stale AI risk decisions (Task 6)</title>
      <itunes:episode>56</itunes:episode>
      <podcast:episode>56</podcast:episode>
      <itunes:title>Episode 56 — Build a reassessment cadence that prevents stale AI risk decisions (Task 6)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">1d94f1c6-d531-4dde-9d06-a2ba38b3f155</guid>
      <link>https://share.transistor.fm/s/cb7d8d9e</link>
      <description>
        <![CDATA[<p>This episode covers Task 6 by teaching you to build a reassessment cadence that prevents stale AI risk decisions, because AAIA often rewards answers that institutionalize review routines rather than relying on ad hoc “we’ll revisit later” promises. You’ll define how cadence works in practice: scheduled reviews for high-impact systems, event-driven reviews for major changes, and lightweight check-ins that validate inventory accuracy, control health, and monitoring signals without creating unnecessary bureaucracy. We’ll use a scenario where an AI program scales from one business unit to enterprise-wide use, and you’ll practice setting cadence tiers based on sensitivity and criticality, assigning owners, and defining what evidence must be produced at each review to maintain conformity. Troubleshooting covers cadence that is too frequent to sustain, too infrequent to catch drift, or disconnected from change management, which results in approvals that persist long after the underlying assumptions are no longer true. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode covers Task 6 by teaching you to build a reassessment cadence that prevents stale AI risk decisions, because AAIA often rewards answers that institutionalize review routines rather than relying on ad hoc “we’ll revisit later” promises. You’ll define how cadence works in practice: scheduled reviews for high-impact systems, event-driven reviews for major changes, and lightweight check-ins that validate inventory accuracy, control health, and monitoring signals without creating unnecessary bureaucracy. We’ll use a scenario where an AI program scales from one business unit to enterprise-wide use, and you’ll practice setting cadence tiers based on sensitivity and criticality, assigning owners, and defining what evidence must be produced at each review to maintain conformity. Troubleshooting covers cadence that is too frequent to sustain, too infrequent to catch drift, or disconnected from change management, which results in approvals that persist long after the underlying assumptions are no longer true. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:45:46 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/cb7d8d9e/110b00c4.mp3" length="39221095" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>980</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode covers Task 6 by teaching you to build a reassessment cadence that prevents stale AI risk decisions, because AAIA often rewards answers that institutionalize review routines rather than relying on ad hoc “we’ll revisit later” promises. You’ll define how cadence works in practice: scheduled reviews for high-impact systems, event-driven reviews for major changes, and lightweight check-ins that validate inventory accuracy, control health, and monitoring signals without creating unnecessary bureaucracy. We’ll use a scenario where an AI program scales from one business unit to enterprise-wide use, and you’ll practice setting cadence tiers based on sensitivity and criticality, assigning owners, and defining what evidence must be produced at each review to maintain conformity. Troubleshooting covers cadence that is too frequent to sustain, too infrequent to catch drift, or disconnected from change management, which results in approvals that persist long after the underlying assumptions are no longer true. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/cb7d8d9e/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 57 — Design AI security testing that matches your model, data, and use case (Task 7)</title>
      <itunes:episode>57</itunes:episode>
      <podcast:episode>57</podcast:episode>
      <itunes:title>Episode 57 — Design AI security testing that matches your model, data, and use case (Task 7)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b2633f5b-95e8-40db-898a-96254332ccdb</guid>
      <link>https://share.transistor.fm/s/608de5d2</link>
      <description>
        <![CDATA[<p>This episode introduces Task 7 by teaching how to design AI security testing that matches your model, data, and use case, because AAIA expects you to test what can realistically fail in your specific deployment instead of applying generic security tests that miss AI-specific failure modes. You’ll define what “AI security testing” means here: validating access controls and data protections, probing for prompt injection and unsafe tool use, testing output safety and reliability boundaries, and confirming monitoring and logging are sufficient for investigation and audit. We’ll work through scenarios like a retrieval-augmented assistant with privileged data access and a customer chatbot with public inputs, showing how test design changes based on exposure, threat surface, and impact. Best practices include documenting test scope and results, linking findings to risk treatment decisions, and ensuring tests re-run after model updates, data changes, or configuration adjustments that can silently change behavior. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode introduces Task 7 by teaching how to design AI security testing that matches your model, data, and use case, because AAIA expects you to test what can realistically fail in your specific deployment instead of applying generic security tests that miss AI-specific failure modes. You’ll define what “AI security testing” means here: validating access controls and data protections, probing for prompt injection and unsafe tool use, testing output safety and reliability boundaries, and confirming monitoring and logging are sufficient for investigation and audit. We’ll work through scenarios like a retrieval-augmented assistant with privileged data access and a customer chatbot with public inputs, showing how test design changes based on exposure, threat surface, and impact. Best practices include documenting test scope and results, linking findings to risk treatment decisions, and ensuring tests re-run after model updates, data changes, or configuration adjustments that can silently change behavior. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:46:00 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/608de5d2/4d229579.mp3" length="39721609" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>992</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode introduces Task 7 by teaching how to design AI security testing that matches your model, data, and use case, because AAIA expects you to test what can realistically fail in your specific deployment instead of applying generic security tests that miss AI-specific failure modes. You’ll define what “AI security testing” means here: validating access controls and data protections, probing for prompt injection and unsafe tool use, testing output safety and reliability boundaries, and confirming monitoring and logging are sufficient for investigation and audit. We’ll work through scenarios like a retrieval-augmented assistant with privileged data access and a customer chatbot with public inputs, showing how test design changes based on exposure, threat surface, and impact. Best practices include documenting test scope and results, linking findings to risk treatment decisions, and ensuring tests re-run after model updates, data changes, or configuration adjustments that can silently change behavior. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/608de5d2/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 58 — Build AI vulnerability management from discovery to remediation (Task 7)</title>
      <itunes:episode>58</itunes:episode>
      <podcast:episode>58</podcast:episode>
      <itunes:title>Episode 58 — Build AI vulnerability management from discovery to remediation (Task 7)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">e0983a43-1671-425d-885d-01a4ed90143d</guid>
      <link>https://share.transistor.fm/s/eddf6ebc</link>
      <description>
        <![CDATA[<p>This episode focuses on Task 7 by explaining how to build AI vulnerability management from discovery to remediation, because AAIA treats vulnerability management as an end-to-end control process that includes identification, prioritization, ownership, fixes, and evidence—not just scanning and ticket creation. You’ll learn how “vulnerabilities” show up in AI environments, including misconfigured access to model endpoints, overly permissive connectors, unsafe prompt handling, weak logging, unreviewed model changes, and dependency vulnerabilities in pipelines and hosting platforms. We’ll use a scenario where security discovers an AI integration that exposes sensitive data through an overly broad retrieval connector, and you’ll practice triaging severity based on impact and likelihood, assigning accountable owners, coordinating changes through governance, and validating that the fix aligns with policy and compliance obligations. Troubleshooting covers common breakdowns like unclear ownership across data, model, and platform teams, and remediation that fixes one path while leaving alternate exposure paths untouched. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on Task 7 by explaining how to build AI vulnerability management from discovery to remediation, because AAIA treats vulnerability management as an end-to-end control process that includes identification, prioritization, ownership, fixes, and evidence—not just scanning and ticket creation. You’ll learn how “vulnerabilities” show up in AI environments, including misconfigured access to model endpoints, overly permissive connectors, unsafe prompt handling, weak logging, unreviewed model changes, and dependency vulnerabilities in pipelines and hosting platforms. We’ll use a scenario where security discovers an AI integration that exposes sensitive data through an overly broad retrieval connector, and you’ll practice triaging severity based on impact and likelihood, assigning accountable owners, coordinating changes through governance, and validating that the fix aligns with policy and compliance obligations. Troubleshooting covers common breakdowns like unclear ownership across data, model, and platform teams, and remediation that fixes one path while leaving alternate exposure paths untouched. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:46:13 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/eddf6ebc/a9d1b583.mp3" length="44459162" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>1111</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on Task 7 by explaining how to build AI vulnerability management from discovery to remediation, because AAIA treats vulnerability management as an end-to-end control process that includes identification, prioritization, ownership, fixes, and evidence—not just scanning and ticket creation. You’ll learn how “vulnerabilities” show up in AI environments, including misconfigured access to model endpoints, overly permissive connectors, unsafe prompt handling, weak logging, unreviewed model changes, and dependency vulnerabilities in pipelines and hosting platforms. We’ll use a scenario where security discovers an AI integration that exposes sensitive data through an overly broad retrieval connector, and you’ll practice triaging severity based on impact and likelihood, assigning accountable owners, coordinating changes through governance, and validating that the fix aligns with policy and compliance obligations. Troubleshooting covers common breakdowns like unclear ownership across data, model, and platform teams, and remediation that fixes one path while leaving alternate exposure paths untouched. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/eddf6ebc/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 59 — Retest and document fixes so AI vulnerabilities stay closed (Task 7)</title>
      <itunes:episode>59</itunes:episode>
      <podcast:episode>59</podcast:episode>
      <itunes:title>Episode 59 — Retest and document fixes so AI vulnerabilities stay closed (Task 7)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">28c18e12-d343-4b2f-8cd9-965e99590ef2</guid>
      <link>https://share.transistor.fm/s/0f2e9e5f</link>
      <description>
        <![CDATA[<p>This episode completes the Task 7 vulnerability thread by teaching how to retest and document fixes so vulnerabilities stay closed, because AAIA often tests whether you can prove remediation with evidence and prevent reintroduction through drift, model updates, or configuration changes. You’ll learn how to design retesting that actually validates risk reduction, such as confirming access permissions are narrowed, verifying prompt injection defenses behave as expected, ensuring logging captures relevant events, and confirming that guardrails remain effective after a vendor update or pipeline change. We’ll walk through a scenario where a fix removes one unsafe connector permission but later changes re-enable it through automated deployment, and you’ll practice setting controls that prevent regression, like change approval gates, configuration baselines, and monitoring alerts for permission expansions. Best practices include documenting what was fixed, why it mattered, how it was verified, and who approved closure, so audit and incident teams can trust the record. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode completes the Task 7 vulnerability thread by teaching how to retest and document fixes so vulnerabilities stay closed, because AAIA often tests whether you can prove remediation with evidence and prevent reintroduction through drift, model updates, or configuration changes. You’ll learn how to design retesting that actually validates risk reduction, such as confirming access permissions are narrowed, verifying prompt injection defenses behave as expected, ensuring logging captures relevant events, and confirming that guardrails remain effective after a vendor update or pipeline change. We’ll walk through a scenario where a fix removes one unsafe connector permission but later changes re-enable it through automated deployment, and you’ll practice setting controls that prevent regression, like change approval gates, configuration baselines, and monitoring alerts for permission expansions. Best practices include documenting what was fixed, why it mattered, how it was verified, and who approved closure, so audit and incident teams can trust the record. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:46:25 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/0f2e9e5f/b935ebf9.mp3" length="42192770" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>1054</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode completes the Task 7 vulnerability thread by teaching how to retest and document fixes so vulnerabilities stay closed, because AAIA often tests whether you can prove remediation with evidence and prevent reintroduction through drift, model updates, or configuration changes. You’ll learn how to design retesting that actually validates risk reduction, such as confirming access permissions are narrowed, verifying prompt injection defenses behave as expected, ensuring logging captures relevant events, and confirming that guardrails remain effective after a vendor update or pipeline change. We’ll walk through a scenario where a fix removes one unsafe connector permission but later changes re-enable it through automated deployment, and you’ll practice setting controls that prevent regression, like change approval gates, configuration baselines, and monitoring alerts for permission expansions. Best practices include documenting what was fixed, why it mattered, how it was verified, and who approved closure, so audit and incident teams can trust the record. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/0f2e9e5f/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 60 — Embed vendor AI security requirements before procurement begins (Task 9)</title>
      <itunes:episode>60</itunes:episode>
      <podcast:episode>60</podcast:episode>
      <itunes:title>Episode 60 — Embed vendor AI security requirements before procurement begins (Task 9)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">fd8a5bf2-0bc4-4071-bec9-a51ea5c7a1cb</guid>
      <link>https://share.transistor.fm/s/b12e5baa</link>
      <description>
        <![CDATA[<p>This episode introduces Task 9 by showing how to embed vendor AI security requirements before procurement begins, because AAIA expects you to shape vendor risk outcomes early through clear requirements, evidence expectations, and contractual controls rather than trying to “fix” weak vendor posture after adoption. You’ll define what vendor requirements should cover for AI services: data handling and retention, logging and monitoring support, model update and change notification, access control options, incident reporting timelines, security testing evidence, and clarity on how prompts and outputs may be stored or used. We’ll use a scenario where the business wants to rapidly adopt a hosted model service, and you’ll practice identifying requirements that prevent later surprises, such as undocumented subcontractors, limited audit rights, or default retention settings that conflict with privacy obligations. Troubleshooting focuses on avoiding vague security questionnaires by demanding specific evidence and decision rights that align with governance and risk acceptance processes. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode introduces Task 9 by showing how to embed vendor AI security requirements before procurement begins, because AAIA expects you to shape vendor risk outcomes early through clear requirements, evidence expectations, and contractual controls rather than trying to “fix” weak vendor posture after adoption. You’ll define what vendor requirements should cover for AI services: data handling and retention, logging and monitoring support, model update and change notification, access control options, incident reporting timelines, security testing evidence, and clarity on how prompts and outputs may be stored or used. We’ll use a scenario where the business wants to rapidly adopt a hosted model service, and you’ll practice identifying requirements that prevent later surprises, such as undocumented subcontractors, limited audit rights, or default retention settings that conflict with privacy obligations. Troubleshooting focuses on avoiding vague security questionnaires by demanding specific evidence and decision rights that align with governance and risk acceptance processes. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 11:46:38 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/b12e5baa/2c5e2e25.mp3" length="36963064" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>923</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode introduces Task 9 by showing how to embed vendor AI security requirements before procurement begins, because AAIA expects you to shape vendor risk outcomes early through clear requirements, evidence expectations, and contractual controls rather than trying to “fix” weak vendor posture after adoption. You’ll define what vendor requirements should cover for AI services: data handling and retention, logging and monitoring support, model update and change notification, access control options, incident reporting timelines, security testing evidence, and clarity on how prompts and outputs may be stored or used. We’ll use a scenario where the business wants to rapidly adopt a hosted model service, and you’ll practice identifying requirements that prevent later surprises, such as undocumented subcontractors, limited audit rights, or default retention settings that conflict with privacy obligations. Troubleshooting focuses on avoiding vague security questionnaires by demanding specific evidence and decision rights that align with governance and risk acceptance processes. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/b12e5baa/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 61 — Audit AI deployment controls: approvals, gates, and rollback readiness (Task 8)</title>
      <itunes:episode>61</itunes:episode>
      <podcast:episode>61</podcast:episode>
      <itunes:title>Episode 61 — Audit AI deployment controls: approvals, gates, and rollback readiness (Task 8)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">504efe79-5968-4eb2-a850-ce2776914d9a</guid>
      <link>https://share.transistor.fm/s/e330a989</link>
      <description>
        <![CDATA[<p>This episode focuses on deployment controls for AI, because Task 8 scenarios often test whether you treat deployment as a controlled release with approvals, gates, and rollback readiness rather than a simple “go live.” You’ll learn how approval gates should confirm that requirements were met, validation evidence is complete, privacy and security constraints are enforced, and operational owners are ready to monitor and respond. We’ll cover what good “release readiness” looks like, including documented rollout plans, staged deployment options, defined success and failure thresholds, and a tested rollback path that can revert model versions and configuration safely. You’ll also learn common exam traps, such as assuming a vendor release process substitutes for internal governance, or approving deployment without clear monitoring and escalation commitments. By the end, you should be able to choose AAIA answers that emphasize auditable approvals, measurable gates, and recoverability when deployment outcomes are not acceptable. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on deployment controls for AI, because Task 8 scenarios often test whether you treat deployment as a controlled release with approvals, gates, and rollback readiness rather than a simple “go live.” You’ll learn how approval gates should confirm that requirements were met, validation evidence is complete, privacy and security constraints are enforced, and operational owners are ready to monitor and respond. We’ll cover what good “release readiness” looks like, including documented rollout plans, staged deployment options, defined success and failure thresholds, and a tested rollback path that can revert model versions and configuration safely. You’ll also learn common exam traps, such as assuming a vendor release process substitutes for internal governance, or approving deployment without clear monitoring and escalation commitments. By the end, you should be able to choose AAIA answers that emphasize auditable approvals, measurable gates, and recoverability when deployment outcomes are not acceptable. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 21:37:16 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/e330a989/99b8b534.mp3" length="37002784" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>924</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on deployment controls for AI, because Task 8 scenarios often test whether you treat deployment as a controlled release with approvals, gates, and rollback readiness rather than a simple “go live.” You’ll learn how approval gates should confirm that requirements were met, validation evidence is complete, privacy and security constraints are enforced, and operational owners are ready to monitor and respond. We’ll cover what good “release readiness” looks like, including documented rollout plans, staged deployment options, defined success and failure thresholds, and a tested rollback path that can revert model versions and configuration safely. You’ll also learn common exam traps, such as assuming a vendor release process substitutes for internal governance, or approving deployment without clear monitoring and escalation commitments. By the end, you should be able to choose AAIA answers that emphasize auditable approvals, measurable gates, and recoverability when deployment outcomes are not acceptable. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/e330a989/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 62 — Audit AI monitoring controls: drift, performance, and incident triggers (Task 8)</title>
      <itunes:episode>62</itunes:episode>
      <podcast:episode>62</podcast:episode>
      <itunes:title>Episode 62 — Audit AI monitoring controls: drift, performance, and incident triggers (Task 8)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">0e35ef71-445c-48ce-9ea5-9f261c315bdd</guid>
      <link>https://share.transistor.fm/s/f99f7fd6</link>
      <description>
        <![CDATA[<p>This episode teaches you how to audit AI monitoring controls for drift, performance, and incident triggers, because Task 8 expects monitoring to be designed and proven, not improvised after problems surface. You’ll learn how to define what must be monitored based on decision impact, including performance trends, stability of input data, fairness and segment outcomes where relevant, and operational signals like exception volume and manual overrides. We’ll cover incident triggers as explicit rules that convert monitoring into action, such as thresholds that require human review, escalation to governance forums, rollback, or retraining under controlled change management. You’ll also learn what evidence auditors should request, including metric definitions, data sources, alert rules, escalation runbooks, and records showing that triggers led to timely decisions and corrective action. By the end, you should be able to answer exam items by selecting the control approach that makes monitoring auditable, actionable, and aligned to risk appetite. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches you how to audit AI monitoring controls for drift, performance, and incident triggers, because Task 8 expects monitoring to be designed and proven, not improvised after problems surface. You’ll learn how to define what must be monitored based on decision impact, including performance trends, stability of input data, fairness and segment outcomes where relevant, and operational signals like exception volume and manual overrides. We’ll cover incident triggers as explicit rules that convert monitoring into action, such as thresholds that require human review, escalation to governance forums, rollback, or retraining under controlled change management. You’ll also learn what evidence auditors should request, including metric definitions, data sources, alert rules, escalation runbooks, and records showing that triggers led to timely decisions and corrective action. By the end, you should be able to answer exam items by selecting the control approach that makes monitoring auditable, actionable, and aligned to risk appetite. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:03:41 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/f99f7fd6/d53c03f7.mp3" length="40553350" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>1013</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches you how to audit AI monitoring controls for drift, performance, and incident triggers, because Task 8 expects monitoring to be designed and proven, not improvised after problems surface. You’ll learn how to define what must be monitored based on decision impact, including performance trends, stability of input data, fairness and segment outcomes where relevant, and operational signals like exception volume and manual overrides. We’ll cover incident triggers as explicit rules that convert monitoring into action, such as thresholds that require human review, escalation to governance forums, rollback, or retraining under controlled change management. You’ll also learn what evidence auditors should request, including metric definitions, data sources, alert rules, escalation runbooks, and records showing that triggers led to timely decisions and corrective action. By the end, you should be able to answer exam items by selecting the control approach that makes monitoring auditable, actionable, and aligned to risk appetite. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/f99f7fd6/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 63 — Audit AI decommissioning: retirement criteria and data cleanup duties (Task 8)</title>
      <itunes:episode>63</itunes:episode>
      <podcast:episode>63</podcast:episode>
      <itunes:title>Episode 63 — Audit AI decommissioning: retirement criteria and data cleanup duties (Task 8)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">5edadd31-e897-4bac-b042-094de5aafc22</guid>
      <link>https://share.transistor.fm/s/1fe6aa41</link>
      <description>
        <![CDATA[<p>This episode focuses on AI decommissioning, because Task 8 scenarios sometimes test whether you can manage the end of the lifecycle with the same discipline as development and deployment. You’ll learn how to define retirement criteria, such as models that no longer meet requirements, models that create unacceptable harm, systems that cannot be supported operationally, or use cases that no longer have a lawful basis or approved purpose. We’ll cover what “clean shutdown” looks like in audit terms: disabling endpoints, removing access, updating dependent systems, and ensuring monitoring does not silently continue generating risk through leftover integrations. You’ll also learn how data cleanup duties fit governance, including retention and deletion requirements for training datasets, logs, decision records, and artifacts that must be preserved for auditability while still respecting privacy constraints. By the end, you should be able to choose exam answers that emphasize defined retirement decisions, accountable owners, and evidence that data and access were handled correctly after decommissioning. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on AI decommissioning, because Task 8 scenarios sometimes test whether you can manage the end of the lifecycle with the same discipline as development and deployment. You’ll learn how to define retirement criteria, such as models that no longer meet requirements, models that create unacceptable harm, systems that cannot be supported operationally, or use cases that no longer have a lawful basis or approved purpose. We’ll cover what “clean shutdown” looks like in audit terms: disabling endpoints, removing access, updating dependent systems, and ensuring monitoring does not silently continue generating risk through leftover integrations. You’ll also learn how data cleanup duties fit governance, including retention and deletion requirements for training datasets, logs, decision records, and artifacts that must be preserved for auditability while still respecting privacy constraints. By the end, you should be able to choose exam answers that emphasize defined retirement decisions, accountable owners, and evidence that data and access were handled correctly after decommissioning. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:04:14 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/1fe6aa41/1f812b30.mp3" length="38540872" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>963</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on AI decommissioning, because Task 8 scenarios sometimes test whether you can manage the end of the lifecycle with the same discipline as development and deployment. You’ll learn how to define retirement criteria, such as models that no longer meet requirements, models that create unacceptable harm, systems that cannot be supported operationally, or use cases that no longer have a lawful basis or approved purpose. We’ll cover what “clean shutdown” looks like in audit terms: disabling endpoints, removing access, updating dependent systems, and ensuring monitoring does not silently continue generating risk through leftover integrations. You’ll also learn how data cleanup duties fit governance, including retention and deletion requirements for training datasets, logs, decision records, and artifacts that must be preserved for auditability while still respecting privacy constraints. By the end, you should be able to choose exam answers that emphasize defined retirement decisions, accountable owners, and evidence that data and access were handled correctly after decommissioning. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/1fe6aa41/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 64 — Evaluate algorithms and models for alignment to business objectives (Task 9)</title>
      <itunes:episode>64</itunes:episode>
      <podcast:episode>64</podcast:episode>
      <itunes:title>Episode 64 — Evaluate algorithms and models for alignment to business objectives (Task 9)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">585d68db-f732-428f-aef5-ef3cf0cbcdf3</guid>
      <link>https://share.transistor.fm/s/d99a90e8</link>
      <description>
        <![CDATA[<p>This episode teaches you how to evaluate whether an algorithm or model aligns to business objectives, because Task 9 questions often focus on fit-for-purpose decisions rather than technical novelty. You’ll learn how alignment starts with the business decision and the acceptable tradeoffs, including what errors matter most, what fairness or safety constraints apply, and what level of explainability stakeholders need to trust and govern outcomes. We’ll cover how different model choices can optimize different outcomes, and why a model that maximizes accuracy might still be misaligned if it increases harm, reduces recourse, or creates monitoring complexity the organization cannot manage. You’ll also learn what evidence supports alignment, such as documented objective functions, acceptance criteria, evaluation results tied to business metrics, and approvals that acknowledge tradeoffs. By the end, you should be ready to answer exam scenarios by selecting the option that proves alignment through measurable objectives and governance evidence, not through vendor claims or technical buzzwords. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches you how to evaluate whether an algorithm or model aligns to business objectives, because Task 9 questions often focus on fit-for-purpose decisions rather than technical novelty. You’ll learn how alignment starts with the business decision and the acceptable tradeoffs, including what errors matter most, what fairness or safety constraints apply, and what level of explainability stakeholders need to trust and govern outcomes. We’ll cover how different model choices can optimize different outcomes, and why a model that maximizes accuracy might still be misaligned if it increases harm, reduces recourse, or creates monitoring complexity the organization cannot manage. You’ll also learn what evidence supports alignment, such as documented objective functions, acceptance criteria, evaluation results tied to business metrics, and approvals that acknowledge tradeoffs. By the end, you should be ready to answer exam scenarios by selecting the option that proves alignment through measurable objectives and governance evidence, not through vendor claims or technical buzzwords. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:04:36 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/d99a90e8/6c628026.mp3" length="34133489" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>852</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches you how to evaluate whether an algorithm or model aligns to business objectives, because Task 9 questions often focus on fit-for-purpose decisions rather than technical novelty. You’ll learn how alignment starts with the business decision and the acceptable tradeoffs, including what errors matter most, what fairness or safety constraints apply, and what level of explainability stakeholders need to trust and govern outcomes. We’ll cover how different model choices can optimize different outcomes, and why a model that maximizes accuracy might still be misaligned if it increases harm, reduces recourse, or creates monitoring complexity the organization cannot manage. You’ll also learn what evidence supports alignment, such as documented objective functions, acceptance criteria, evaluation results tied to business metrics, and approvals that acknowledge tradeoffs. By the end, you should be ready to answer exam scenarios by selecting the option that proves alignment through measurable objectives and governance evidence, not through vendor claims or technical buzzwords. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/d99a90e8/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 65 — Test model alignment to policy: what it should do versus what it does (Task 9)</title>
      <itunes:episode>65</itunes:episode>
      <podcast:episode>65</podcast:episode>
      <itunes:title>Episode 65 — Test model alignment to policy: what it should do versus what it does (Task 9)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">71dae75d-c730-4a8f-ac77-ea981a6b313a</guid>
      <link>https://share.transistor.fm/s/8fdb5ca6</link>
      <description>
        <![CDATA[<p>This episode focuses on testing model alignment to policy by comparing what the model should do to what it actually does, which is a common AAIA scenario pattern when organizations have policies but cannot prove behavior matches them. You’ll learn how to translate policy constraints into test cases, including prohibited uses, required disclosures, human review requirements, and limits on sensitive data use or inference. We’ll cover practical testing methods, such as controlled input scenarios, sampling real outputs, reviewing exception handling, and validating that safeguards like filters, thresholds, and escalation triggers fire when policy boundaries are approached. You’ll also learn how auditors document alignment testing so results are defensible, including criteria, sample selection, observed outcomes, and corrective actions when misalignment is found. By the end, you should be able to choose exam answers that emphasize testable policy criteria and evidence-based alignment, not assumptions that “the model follows the rules.” Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on testing model alignment to policy by comparing what the model should do to what it actually does, which is a common AAIA scenario pattern when organizations have policies but cannot prove behavior matches them. You’ll learn how to translate policy constraints into test cases, including prohibited uses, required disclosures, human review requirements, and limits on sensitive data use or inference. We’ll cover practical testing methods, such as controlled input scenarios, sampling real outputs, reviewing exception handling, and validating that safeguards like filters, thresholds, and escalation triggers fire when policy boundaries are approached. You’ll also learn how auditors document alignment testing so results are defensible, including criteria, sample selection, observed outcomes, and corrective actions when misalignment is found. By the end, you should be able to choose exam answers that emphasize testable policy criteria and evidence-based alignment, not assumptions that “the model follows the rules.” Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:05:04 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/8fdb5ca6/a83031ee.mp3" length="35715468" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>892</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on testing model alignment to policy by comparing what the model should do to what it actually does, which is a common AAIA scenario pattern when organizations have policies but cannot prove behavior matches them. You’ll learn how to translate policy constraints into test cases, including prohibited uses, required disclosures, human review requirements, and limits on sensitive data use or inference. We’ll cover practical testing methods, such as controlled input scenarios, sampling real outputs, reviewing exception handling, and validating that safeguards like filters, thresholds, and escalation triggers fire when policy boundaries are approached. You’ll also learn how auditors document alignment testing so results are defensible, including criteria, sample selection, observed outcomes, and corrective actions when misalignment is found. By the end, you should be able to choose exam answers that emphasize testable policy criteria and evidence-based alignment, not assumptions that “the model follows the rules.” Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/8fdb5ca6/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 66 — Evaluate model explainability expectations without overpromising certainty (Task 9)</title>
      <itunes:episode>66</itunes:episode>
      <podcast:episode>66</podcast:episode>
      <itunes:title>Episode 66 — Evaluate model explainability expectations without overpromising certainty (Task 9)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">13706615-1818-4305-b0d5-58a4bfd607e7</guid>
      <link>https://share.transistor.fm/s/bac8b6ae</link>
      <description>
        <![CDATA[<p>This episode teaches you how to evaluate explainability expectations without overpromising certainty, because Task 9 questions often test whether you can set realistic transparency requirements based on decision impact and stakeholder needs. You’ll learn the difference between explaining how a model generally behaves, explaining why a specific output occurred, and explaining whether the outcome is fair, compliant, and appropriate for the policy context. We’ll cover how explainability requirements should be defined up front, including what audiences need to understand, what disclosures are required, and what evidence must exist for audit and recourse. You’ll also learn common exam pitfalls, such as assuming explainability tools eliminate bias, or assuming any explanation is acceptable even when it is not actionable or verifiable. By the end, you should be able to answer exam scenarios by selecting the option that sets explainability as a bounded, testable requirement supported by documentation and operational processes, not as a promise of perfect understanding. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches you how to evaluate explainability expectations without overpromising certainty, because Task 9 questions often test whether you can set realistic transparency requirements based on decision impact and stakeholder needs. You’ll learn the difference between explaining how a model generally behaves, explaining why a specific output occurred, and explaining whether the outcome is fair, compliant, and appropriate for the policy context. We’ll cover how explainability requirements should be defined up front, including what audiences need to understand, what disclosures are required, and what evidence must exist for audit and recourse. You’ll also learn common exam pitfalls, such as assuming explainability tools eliminate bias, or assuming any explanation is acceptable even when it is not actionable or verifiable. By the end, you should be able to answer exam scenarios by selecting the option that sets explainability as a bounded, testable requirement supported by documentation and operational processes, not as a promise of perfect understanding. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:05:37 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/bac8b6ae/b75f66ba.mp3" length="35328866" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>882</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches you how to evaluate explainability expectations without overpromising certainty, because Task 9 questions often test whether you can set realistic transparency requirements based on decision impact and stakeholder needs. You’ll learn the difference between explaining how a model generally behaves, explaining why a specific output occurred, and explaining whether the outcome is fair, compliant, and appropriate for the policy context. We’ll cover how explainability requirements should be defined up front, including what audiences need to understand, what disclosures are required, and what evidence must exist for audit and recourse. You’ll also learn common exam pitfalls, such as assuming explainability tools eliminate bias, or assuming any explanation is acceptable even when it is not actionable or verifiable. By the end, you should be able to answer exam scenarios by selecting the option that sets explainability as a bounded, testable requirement supported by documentation and operational processes, not as a promise of perfect understanding. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/bac8b6ae/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 67 — Evaluate model performance claims using audit-grade skepticism (Task 9)</title>
      <itunes:episode>67</itunes:episode>
      <podcast:episode>67</podcast:episode>
      <itunes:title>Episode 67 — Evaluate model performance claims using audit-grade skepticism (Task 9)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d61138af-8be7-47f7-905e-ae8b2ec33428</guid>
      <link>https://share.transistor.fm/s/124b05d3</link>
      <description>
        <![CDATA[<p>This episode focuses on evaluating model performance claims with audit-grade skepticism, because AAIA scenarios often include impressive numbers that are meaningless without context, constraints, and evidence. You’ll learn how to challenge claims by asking what data was used, how it was sampled, whether leakage was prevented, what baseline was compared, and whether performance holds across relevant segments and edge cases. We’ll cover how acceptance criteria should be tied to business objectives and risk appetite, including what error types are unacceptable, what fairness checks are required, and what monitoring will detect performance decay in production. You’ll also learn what evidence turns claims into proof, such as documented evaluation methodology, reproducible test results, independent review, and records showing that issues discovered in testing were corrected before approval. By the end, you should be able to choose exam answers that demand verifiable performance evidence and realistic operational commitments rather than trusting marketing-style metrics. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on evaluating model performance claims with audit-grade skepticism, because AAIA scenarios often include impressive numbers that are meaningless without context, constraints, and evidence. You’ll learn how to challenge claims by asking what data was used, how it was sampled, whether leakage was prevented, what baseline was compared, and whether performance holds across relevant segments and edge cases. We’ll cover how acceptance criteria should be tied to business objectives and risk appetite, including what error types are unacceptable, what fairness checks are required, and what monitoring will detect performance decay in production. You’ll also learn what evidence turns claims into proof, such as documented evaluation methodology, reproducible test results, independent review, and records showing that issues discovered in testing were corrected before approval. By the end, you should be able to choose exam answers that demand verifiable performance evidence and realistic operational commitments rather than trusting marketing-style metrics. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:06:09 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/124b05d3/61a56b97.mp3" length="34441723" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>860</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on evaluating model performance claims with audit-grade skepticism, because AAIA scenarios often include impressive numbers that are meaningless without context, constraints, and evidence. You’ll learn how to challenge claims by asking what data was used, how it was sampled, whether leakage was prevented, what baseline was compared, and whether performance holds across relevant segments and edge cases. We’ll cover how acceptance criteria should be tied to business objectives and risk appetite, including what error types are unacceptable, what fairness checks are required, and what monitoring will detect performance decay in production. You’ll also learn what evidence turns claims into proof, such as documented evaluation methodology, reproducible test results, independent review, and records showing that issues discovered in testing were corrected before approval. By the end, you should be able to choose exam answers that demand verifiable performance evidence and realistic operational commitments rather than trusting marketing-style metrics. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/124b05d3/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 68 — Evaluate change management for AI where “updates” can change outcomes (Task 13)</title>
      <itunes:episode>68</itunes:episode>
      <podcast:episode>68</podcast:episode>
      <itunes:title>Episode 68 — Evaluate change management for AI where “updates” can change outcomes (Task 13)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">aad1a82d-9318-4031-a664-b64eeba487df</guid>
      <link>https://share.transistor.fm/s/9dee9a94</link>
      <description>
        <![CDATA[<p>This episode explains why change management for AI must be stricter than typical software change management, because in AI, “updates” can silently change outcomes even when interfaces stay the same. You’ll learn how changes can enter through code, data sources, feature logic, model parameters, infrastructure dependencies, and even operating conditions, and why each path needs control, testing, and documentation. We’ll cover what strong AI change management looks like: defined change categories, required approvals, validation requirements proportional to risk, and clear communication to stakeholders when decision behavior changes. You’ll also learn the evidence auditors expect, including change tickets tied to risk assessments, test results, approvals, version histories, and post-change monitoring plans. By the end, you should be able to answer AAIA questions by selecting the option that treats AI changes as outcome-changing events with measurable controls, not as routine patches pushed on a schedule. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode explains why change management for AI must be stricter than typical software change management, because in AI, “updates” can silently change outcomes even when interfaces stay the same. You’ll learn how changes can enter through code, data sources, feature logic, model parameters, infrastructure dependencies, and even operating conditions, and why each path needs control, testing, and documentation. We’ll cover what strong AI change management looks like: defined change categories, required approvals, validation requirements proportional to risk, and clear communication to stakeholders when decision behavior changes. You’ll also learn the evidence auditors expect, including change tickets tied to risk assessments, test results, approvals, version histories, and post-change monitoring plans. By the end, you should be able to answer AAIA questions by selecting the option that treats AI changes as outcome-changing events with measurable controls, not as routine patches pushed on a schedule. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:06:29 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/9dee9a94/7141243b.mp3" length="35791748" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>894</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode explains why change management for AI must be stricter than typical software change management, because in AI, “updates” can silently change outcomes even when interfaces stay the same. You’ll learn how changes can enter through code, data sources, feature logic, model parameters, infrastructure dependencies, and even operating conditions, and why each path needs control, testing, and documentation. We’ll cover what strong AI change management looks like: defined change categories, required approvals, validation requirements proportional to risk, and clear communication to stakeholders when decision behavior changes. You’ll also learn the evidence auditors expect, including change tickets tied to risk assessments, test results, approvals, version histories, and post-change monitoring plans. By the end, you should be able to answer AAIA questions by selecting the option that treats AI changes as outcome-changing events with measurable controls, not as routine patches pushed on a schedule. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/9dee9a94/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 69 — Audit model update approvals, testing evidence, and release readiness (Task 13)</title>
      <itunes:episode>69</itunes:episode>
      <podcast:episode>69</podcast:episode>
      <itunes:title>Episode 69 — Audit model update approvals, testing evidence, and release readiness (Task 13)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2ac5e4c2-6094-4016-8eb7-71ee5f87f0d5</guid>
      <link>https://share.transistor.fm/s/c375ebf6</link>
      <description>
        <![CDATA[<p>This episode focuses on auditing model updates by verifying approvals, testing evidence, and release readiness, because Task 13 scenarios often revolve around a model change that created unexpected harm or compliance issues. You’ll learn how update approvals should confirm that the change is justified, risks are assessed, stakeholders are informed, and acceptance criteria are met, especially when the model influences high-impact decisions. We’ll cover what testing evidence should include, such as regression testing against prior behavior, validation on representative data, segment checks for fairness, and security and privacy validations where applicable. Release readiness will be framed as operational preparedness: monitoring rules updated, rollback plans tested, documentation refreshed, and owners assigned for post-release review. By the end, you should be able to choose exam answers that emphasize a complete, auditable update package—approval, evidence, readiness—rather than focusing only on the technical act of retraining or redeploying. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on auditing model updates by verifying approvals, testing evidence, and release readiness, because Task 13 scenarios often revolve around a model change that created unexpected harm or compliance issues. You’ll learn how update approvals should confirm that the change is justified, risks are assessed, stakeholders are informed, and acceptance criteria are met, especially when the model influences high-impact decisions. We’ll cover what testing evidence should include, such as regression testing against prior behavior, validation on representative data, segment checks for fairness, and security and privacy validations where applicable. Release readiness will be framed as operational preparedness: monitoring rules updated, rollback plans tested, documentation refreshed, and owners assigned for post-release review. By the end, you should be able to choose exam answers that emphasize a complete, auditable update package—approval, evidence, readiness—rather than focusing only on the technical act of retraining or redeploying. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:06:56 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/c375ebf6/42c8a64a.mp3" length="35412450" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>884</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on auditing model updates by verifying approvals, testing evidence, and release readiness, because Task 13 scenarios often revolve around a model change that created unexpected harm or compliance issues. You’ll learn how update approvals should confirm that the change is justified, risks are assessed, stakeholders are informed, and acceptance criteria are met, especially when the model influences high-impact decisions. We’ll cover what testing evidence should include, such as regression testing against prior behavior, validation on representative data, segment checks for fairness, and security and privacy validations where applicable. Release readiness will be framed as operational preparedness: monitoring rules updated, rollback plans tested, documentation refreshed, and owners assigned for post-release review. By the end, you should be able to choose exam answers that emphasize a complete, auditable update package—approval, evidence, readiness—rather than focusing only on the technical act of retraining or redeploying. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/c375ebf6/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 70 — Audit emergency changes for AI when risk forces fast decisions (Task 13)</title>
      <itunes:episode>70</itunes:episode>
      <podcast:episode>70</podcast:episode>
      <itunes:title>Episode 70 — Audit emergency changes for AI when risk forces fast decisions (Task 13)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7ac7cb91-2b38-4f3e-9939-4afb92103210</guid>
      <link>https://share.transistor.fm/s/f57a6829</link>
      <description>
        <![CDATA[<p>This episode teaches you how to audit emergency changes for AI when risk forces fast decisions, because AAIA questions often test whether you can balance urgency with governance instead of abandoning controls under pressure. You’ll learn what qualifies as an emergency change, how emergency procedures should differ from normal change, and what minimum controls must still exist, including documented rationale, defined approval authority, limited scope, and immediate monitoring after the change. We’ll cover common emergency scenarios like harmful outputs, security abuse, major drift, or regulatory exposure, and how organizations should respond with rollback, feature disabling, stricter human review, or rapid retraining under controlled conditions. You’ll also learn what evidence auditors should expect after the fact, such as incident records, emergency approvals, validation notes, lessons learned, and follow-up remediation to prevent repeated emergencies. By the end, you should be ready to choose exam answers that preserve accountability and evidence even when speed matters. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches you how to audit emergency changes for AI when risk forces fast decisions, because AAIA questions often test whether you can balance urgency with governance instead of abandoning controls under pressure. You’ll learn what qualifies as an emergency change, how emergency procedures should differ from normal change, and what minimum controls must still exist, including documented rationale, defined approval authority, limited scope, and immediate monitoring after the change. We’ll cover common emergency scenarios like harmful outputs, security abuse, major drift, or regulatory exposure, and how organizations should respond with rollback, feature disabling, stricter human review, or rapid retraining under controlled conditions. You’ll also learn what evidence auditors should expect after the fact, such as incident records, emergency approvals, validation notes, lessons learned, and follow-up remediation to prevent repeated emergencies. By the end, you should be ready to choose exam answers that preserve accountability and evidence even when speed matters. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:07:19 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/f57a6829/52babd84.mp3" length="36272387" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>906</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches you how to audit emergency changes for AI when risk forces fast decisions, because AAIA questions often test whether you can balance urgency with governance instead of abandoning controls under pressure. You’ll learn what qualifies as an emergency change, how emergency procedures should differ from normal change, and what minimum controls must still exist, including documented rationale, defined approval authority, limited scope, and immediate monitoring after the change. We’ll cover common emergency scenarios like harmful outputs, security abuse, major drift, or regulatory exposure, and how organizations should respond with rollback, feature disabling, stricter human review, or rapid retraining under controlled conditions. You’ll also learn what evidence auditors should expect after the fact, such as incident records, emergency approvals, validation notes, lessons learned, and follow-up remediation to prevent repeated emergencies. By the end, you should be ready to choose exam answers that preserve accountability and evidence even when speed matters. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/f57a6829/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 71 — Evaluate configuration management for AI across code, data, and models (Task 14)</title>
      <itunes:episode>71</itunes:episode>
      <podcast:episode>71</podcast:episode>
      <itunes:title>Episode 71 — Evaluate configuration management for AI across code, data, and models (Task 14)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f90649a5-ce4a-4568-aeee-bb020cc55cee</guid>
      <link>https://share.transistor.fm/s/8a4c5145</link>
      <description>
        <![CDATA[<p>This episode explains how configuration management for AI must cover more than application settings, because Task 14 expects you to control anything that can change outcomes, including code, data pipelines, and model artifacts. You’ll learn how to identify configuration items that matter most—feature logic, preprocessing rules, training parameters, thresholds, prompts or templates where applicable, and deployment settings—then confirm they are versioned, approved, and traceable to specific releases. We’ll cover why “small” configuration changes can be high-risk in AI, such as changing a cutoff score, altering a data normalization step, or switching a dependency version that shifts model behavior. You’ll also learn what evidence auditors rely on, including configuration baselines, change histories, access logs, and release records that link configuration states to observed outcomes in production. By the end, you should be able to answer exam scenarios by choosing the option that enforces controlled, auditable configuration across the full AI system, not just the code repository. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode explains how configuration management for AI must cover more than application settings, because Task 14 expects you to control anything that can change outcomes, including code, data pipelines, and model artifacts. You’ll learn how to identify configuration items that matter most—feature logic, preprocessing rules, training parameters, thresholds, prompts or templates where applicable, and deployment settings—then confirm they are versioned, approved, and traceable to specific releases. We’ll cover why “small” configuration changes can be high-risk in AI, such as changing a cutoff score, altering a data normalization step, or switching a dependency version that shifts model behavior. You’ll also learn what evidence auditors rely on, including configuration baselines, change histories, access logs, and release records that link configuration states to observed outcomes in production. By the end, you should be able to answer exam scenarios by choosing the option that enforces controlled, auditable configuration across the full AI system, not just the code repository. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:07:52 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/8a4c5145/44520562.mp3" length="42903325" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>1072</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode explains how configuration management for AI must cover more than application settings, because Task 14 expects you to control anything that can change outcomes, including code, data pipelines, and model artifacts. You’ll learn how to identify configuration items that matter most—feature logic, preprocessing rules, training parameters, thresholds, prompts or templates where applicable, and deployment settings—then confirm they are versioned, approved, and traceable to specific releases. We’ll cover why “small” configuration changes can be high-risk in AI, such as changing a cutoff score, altering a data normalization step, or switching a dependency version that shifts model behavior. You’ll also learn what evidence auditors rely on, including configuration baselines, change histories, access logs, and release records that link configuration states to observed outcomes in production. By the end, you should be able to answer exam scenarios by choosing the option that enforces controlled, auditable configuration across the full AI system, not just the code repository. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/8a4c5145/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 72 — Prove reproducibility: model versions, parameters, and training snapshots (Task 14)</title>
      <itunes:episode>72</itunes:episode>
      <podcast:episode>72</podcast:episode>
      <itunes:title>Episode 72 — Prove reproducibility: model versions, parameters, and training snapshots (Task 14)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">8506beb7-82b2-49b8-98f6-b9bc0159f6c2</guid>
      <link>https://share.transistor.fm/s/be6ca581</link>
      <description>
        <![CDATA[<p>This episode teaches you how to prove reproducibility for AI systems, because Task 14 scenarios often test whether the organization can recreate a model’s behavior when questions arise about fairness, safety, accuracy, or compliance. You’ll learn what reproducibility requires in practice: preserved model versions, captured training parameters, documented feature pipelines, and training snapshots or references that allow the same data state to be re-used under controlled conditions. We’ll cover why reproducibility is an audit-critical capability, including investigating incidents, validating changes, responding to stakeholder complaints, and demonstrating that governance decisions were based on reliable evidence. You’ll also learn common breakdowns, such as missing dataset versions, untracked parameter changes, or reliance on third-party components that change without notice, and what controls and documentation prevent those failures. By the end, you should be able to choose AAIA answers that prioritize reproducibility evidence and control discipline over vague claims that the model can be “retrained if needed.” Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches you how to prove reproducibility for AI systems, because Task 14 scenarios often test whether the organization can recreate a model’s behavior when questions arise about fairness, safety, accuracy, or compliance. You’ll learn what reproducibility requires in practice: preserved model versions, captured training parameters, documented feature pipelines, and training snapshots or references that allow the same data state to be re-used under controlled conditions. We’ll cover why reproducibility is an audit-critical capability, including investigating incidents, validating changes, responding to stakeholder complaints, and demonstrating that governance decisions were based on reliable evidence. You’ll also learn common breakdowns, such as missing dataset versions, untracked parameter changes, or reliance on third-party components that change without notice, and what controls and documentation prevent those failures. By the end, you should be able to choose AAIA answers that prioritize reproducibility evidence and control discipline over vague claims that the model can be “retrained if needed.” Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:08:29 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/be6ca581/15965005.mp3" length="37480311" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>936</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches you how to prove reproducibility for AI systems, because Task 14 scenarios often test whether the organization can recreate a model’s behavior when questions arise about fairness, safety, accuracy, or compliance. You’ll learn what reproducibility requires in practice: preserved model versions, captured training parameters, documented feature pipelines, and training snapshots or references that allow the same data state to be re-used under controlled conditions. We’ll cover why reproducibility is an audit-critical capability, including investigating incidents, validating changes, responding to stakeholder complaints, and demonstrating that governance decisions were based on reliable evidence. You’ll also learn common breakdowns, such as missing dataset versions, untracked parameter changes, or reliance on third-party components that change without notice, and what controls and documentation prevent those failures. By the end, you should be able to choose AAIA answers that prioritize reproducibility evidence and control discipline over vague claims that the model can be “retrained if needed.” Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/be6ca581/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 73 — Audit access to model artifacts, pipelines, and configuration repositories (Task 14)</title>
      <itunes:episode>73</itunes:episode>
      <podcast:episode>73</podcast:episode>
      <itunes:title>Episode 73 — Audit access to model artifacts, pipelines, and configuration repositories (Task 14)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">30bc2dc6-e404-42cc-961e-7754d5747152</guid>
      <link>https://share.transistor.fm/s/a3f879d9</link>
      <description>
        <![CDATA[<p>This episode focuses on auditing access controls for model artifacts, pipelines, and configuration repositories, because Task 14 expects you to protect the elements that directly shape AI outcomes and evidence integrity. You’ll learn how to evaluate who can view, modify, approve, and deploy model versions, datasets, feature logic, and configuration baselines, and why “developer convenience” is not a valid reason for broad, unmanaged access. We’ll cover practical access control expectations such as least privilege, separation of duties where risk justifies it, strong authentication, audit logging, and documented approvals for privileged changes. You’ll also learn how to test whether access controls are operating, including reviewing role assignments, sampling change events for proper approvals, validating logging completeness, and checking whether service accounts and automation are governed with the same rigor as humans. By the end, you should be able to answer exam scenarios by selecting the approach that preserves integrity, accountability, and traceability across the AI build and release pipeline. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on auditing access controls for model artifacts, pipelines, and configuration repositories, because Task 14 expects you to protect the elements that directly shape AI outcomes and evidence integrity. You’ll learn how to evaluate who can view, modify, approve, and deploy model versions, datasets, feature logic, and configuration baselines, and why “developer convenience” is not a valid reason for broad, unmanaged access. We’ll cover practical access control expectations such as least privilege, separation of duties where risk justifies it, strong authentication, audit logging, and documented approvals for privileged changes. You’ll also learn how to test whether access controls are operating, including reviewing role assignments, sampling change events for proper approvals, validating logging completeness, and checking whether service accounts and automation are governed with the same rigor as humans. By the end, you should be able to answer exam scenarios by selecting the approach that preserves integrity, accountability, and traceability across the AI build and release pipeline. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:08:55 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/a3f879d9/15ce00da.mp3" length="42527170" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>1062</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on auditing access controls for model artifacts, pipelines, and configuration repositories, because Task 14 expects you to protect the elements that directly shape AI outcomes and evidence integrity. You’ll learn how to evaluate who can view, modify, approve, and deploy model versions, datasets, feature logic, and configuration baselines, and why “developer convenience” is not a valid reason for broad, unmanaged access. We’ll cover practical access control expectations such as least privilege, separation of duties where risk justifies it, strong authentication, audit logging, and documented approvals for privileged changes. You’ll also learn how to test whether access controls are operating, including reviewing role assignments, sampling change events for proper approvals, validating logging completeness, and checking whether service accounts and automation are governed with the same rigor as humans. By the end, you should be able to answer exam scenarios by selecting the approach that preserves integrity, accountability, and traceability across the AI build and release pipeline. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/a3f879d9/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 74 — Supervise AI outputs: detect harmful decisions before customers do (Domain 2D)</title>
      <itunes:episode>74</itunes:episode>
      <podcast:episode>74</podcast:episode>
      <itunes:title>Episode 74 — Supervise AI outputs: detect harmful decisions before customers do (Domain 2D)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">fb760989-6d12-42ac-ad44-f2627b44b10b</guid>
      <link>https://share.transistor.fm/s/c0962316</link>
      <description>
        <![CDATA[<p>This episode explains how to supervise AI outputs so harmful decisions are detected internally before customers, employees, or regulators surface the problem, which is a core Domain 2D expectation. You’ll learn to treat supervision as a control system that combines monitoring metrics, sampling strategies, human review, and escalation triggers tied to decision impact. We’ll cover how supervision differs from basic performance monitoring by focusing on real-world outcomes, including fairness signals, safety incidents, unusual distribution shifts, complaint patterns, and increases in manual overrides that indicate the model is no longer behaving as expected. You’ll also learn how to design supervision to match the use case, such as tighter supervision for high-impact decisions and more targeted sampling for lower-impact scenarios, while still maintaining auditable evidence. By the end, you should be able to choose exam answers that build proactive detection and accountable response, rather than waiting for external harm to reveal control failure. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode explains how to supervise AI outputs so harmful decisions are detected internally before customers, employees, or regulators surface the problem, which is a core Domain 2D expectation. You’ll learn to treat supervision as a control system that combines monitoring metrics, sampling strategies, human review, and escalation triggers tied to decision impact. We’ll cover how supervision differs from basic performance monitoring by focusing on real-world outcomes, including fairness signals, safety incidents, unusual distribution shifts, complaint patterns, and increases in manual overrides that indicate the model is no longer behaving as expected. You’ll also learn how to design supervision to match the use case, such as tighter supervision for high-impact decisions and more targeted sampling for lower-impact scenarios, while still maintaining auditable evidence. By the end, you should be able to choose exam answers that build proactive detection and accountable response, rather than waiting for external harm to reveal control failure. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:09:24 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/c0962316/4688d599.mp3" length="45084023" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>1126</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode explains how to supervise AI outputs so harmful decisions are detected internally before customers, employees, or regulators surface the problem, which is a core Domain 2D expectation. You’ll learn to treat supervision as a control system that combines monitoring metrics, sampling strategies, human review, and escalation triggers tied to decision impact. We’ll cover how supervision differs from basic performance monitoring by focusing on real-world outcomes, including fairness signals, safety incidents, unusual distribution shifts, complaint patterns, and increases in manual overrides that indicate the model is no longer behaving as expected. You’ll also learn how to design supervision to match the use case, such as tighter supervision for high-impact decisions and more targeted sampling for lower-impact scenarios, while still maintaining auditable evidence. By the end, you should be able to choose exam answers that build proactive detection and accountable response, rather than waiting for external harm to reveal control failure. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/c0962316/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 75 — Build human oversight triggers for AI decisions that need escalation (Domain 2D)</title>
      <itunes:episode>75</itunes:episode>
      <podcast:episode>75</podcast:episode>
      <itunes:title>Episode 75 — Build human oversight triggers for AI decisions that need escalation (Domain 2D)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4e81937e-b721-482a-af87-6c56597a2c03</guid>
      <link>https://share.transistor.fm/s/d388b890</link>
      <description>
        <![CDATA[<p>This episode teaches you how to build human oversight triggers that route the right AI decisions to review and escalation, because Domain 2D frequently tests whether you can define oversight that is targeted, timely, and defensible. You’ll learn how to decide what should trigger review, including low-confidence outputs, policy exceptions, high-impact outcomes, novel situations outside training conditions, and decisions that affect protected or vulnerable groups. We’ll cover how to express triggers as measurable rules, such as thresholds, anomaly detection flags, segmentation-based checks, and event-based triggers tied to complaint volume or incident indicators. You’ll also learn what evidence auditors expect, including documented trigger logic, assigned reviewer roles, training and guidance for reviewers, and records showing how escalations were handled and what corrective actions followed. By the end, you should be able to choose AAIA answers that match oversight intensity to risk and prove escalation is real, not symbolic. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches you how to build human oversight triggers that route the right AI decisions to review and escalation, because Domain 2D frequently tests whether you can define oversight that is targeted, timely, and defensible. You’ll learn how to decide what should trigger review, including low-confidence outputs, policy exceptions, high-impact outcomes, novel situations outside training conditions, and decisions that affect protected or vulnerable groups. We’ll cover how to express triggers as measurable rules, such as thresholds, anomaly detection flags, segmentation-based checks, and event-based triggers tied to complaint volume or incident indicators. You’ll also learn what evidence auditors expect, including documented trigger logic, assigned reviewer roles, training and guidance for reviewers, and records showing how escalations were handled and what corrective actions followed. By the end, you should be able to choose AAIA answers that match oversight intensity to risk and prove escalation is real, not symbolic. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:10:21 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/d388b890/4178f2fe.mp3" length="44732941" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>1117</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches you how to build human oversight triggers that route the right AI decisions to review and escalation, because Domain 2D frequently tests whether you can define oversight that is targeted, timely, and defensible. You’ll learn how to decide what should trigger review, including low-confidence outputs, policy exceptions, high-impact outcomes, novel situations outside training conditions, and decisions that affect protected or vulnerable groups. We’ll cover how to express triggers as measurable rules, such as thresholds, anomaly detection flags, segmentation-based checks, and event-based triggers tied to complaint volume or incident indicators. You’ll also learn what evidence auditors expect, including documented trigger logic, assigned reviewer roles, training and guidance for reviewers, and records showing how escalations were handled and what corrective actions followed. By the end, you should be able to choose AAIA answers that match oversight intensity to risk and prove escalation is real, not symbolic. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/d388b890/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 76 — Validate supervision of AI impacts on fairness, safety, and quality (Domain 2D)</title>
      <itunes:episode>76</itunes:episode>
      <podcast:episode>76</podcast:episode>
      <itunes:title>Episode 76 — Validate supervision of AI impacts on fairness, safety, and quality (Domain 2D)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">17f372f9-d402-484a-a930-42d2f50ae03d</guid>
      <link>https://share.transistor.fm/s/84e9716c</link>
      <description>
        <![CDATA[<p>This episode focuses on validating whether supervision actually covers fairness, safety, and quality impacts, because Domain 2D expects oversight to detect harm patterns that pure accuracy metrics can miss. You’ll learn how to define what “fairness” and “safety” mean in the organization’s context, then verify that supervision mechanisms measure those outcomes using segment reporting, sampling, and escalation criteria aligned to policy and risk appetite. We’ll cover quality as an operational outcome, including consistency, reliability, and appropriateness of decisions, and how quality supervision can include reviewer feedback loops, complaint trend analysis, and monitoring for surprising outcome shifts. You’ll also learn how auditors test supervision effectiveness by checking whether supervision detects issues early, whether issues trigger action, and whether actions are documented and validated. By the end, you should be ready to answer exam scenarios by selecting the approach that supervises real-world impacts with measurable coverage and traceable response, not just technical performance. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on validating whether supervision actually covers fairness, safety, and quality impacts, because Domain 2D expects oversight to detect harm patterns that pure accuracy metrics can miss. You’ll learn how to define what “fairness” and “safety” mean in the organization’s context, then verify that supervision mechanisms measure those outcomes using segment reporting, sampling, and escalation criteria aligned to policy and risk appetite. We’ll cover quality as an operational outcome, including consistency, reliability, and appropriateness of decisions, and how quality supervision can include reviewer feedback loops, complaint trend analysis, and monitoring for surprising outcome shifts. You’ll also learn how auditors test supervision effectiveness by checking whether supervision detects issues early, whether issues trigger action, and whether actions are documented and validated. By the end, you should be ready to answer exam scenarios by selecting the approach that supervises real-world impacts with measurable coverage and traceable response, not just technical performance. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:10:54 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/84e9716c/bd95a81b.mp3" length="41412254" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>1034</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on validating whether supervision actually covers fairness, safety, and quality impacts, because Domain 2D expects oversight to detect harm patterns that pure accuracy metrics can miss. You’ll learn how to define what “fairness” and “safety” mean in the organization’s context, then verify that supervision mechanisms measure those outcomes using segment reporting, sampling, and escalation criteria aligned to policy and risk appetite. We’ll cover quality as an operational outcome, including consistency, reliability, and appropriateness of decisions, and how quality supervision can include reviewer feedback loops, complaint trend analysis, and monitoring for surprising outcome shifts. You’ll also learn how auditors test supervision effectiveness by checking whether supervision detects issues early, whether issues trigger action, and whether actions are documented and validated. By the end, you should be ready to answer exam scenarios by selecting the approach that supervises real-world impacts with measurable coverage and traceable response, not just technical performance. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/84e9716c/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 77 — Test AI solutions for accuracy, robustness, bias, and safety (Domain 2E)</title>
      <itunes:episode>77</itunes:episode>
      <podcast:episode>77</podcast:episode>
      <itunes:title>Episode 77 — Test AI solutions for accuracy, robustness, bias, and safety (Domain 2E)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">713c30a6-2831-4b0d-8ac4-95dd257f8107</guid>
      <link>https://share.transistor.fm/s/b735b6ab</link>
      <description>
        <![CDATA[<p>This episode explains how to test AI solutions across four dimensions—accuracy, robustness, bias, and safety—because Domain 2E questions often require you to choose a test plan that reflects real operational risk. You’ll learn how accuracy testing confirms objective performance, robustness testing checks stability under noise and edge cases, bias testing evaluates unequal outcomes and proxy effects, and safety testing looks for harmful behaviors and failure modes that matter to stakeholders. We’ll cover how to document tests so they are auditable, including defined criteria, representative datasets, controlled scenarios, and repeatable methods that can be rerun after changes. You’ll also learn common exam traps, such as relying on a single metric, testing only in ideal lab conditions, or claiming safety is handled by policy without evidence. By the end, you should be able to select exam answers that build a balanced, evidence-driven testing approach tied to the use case and its decision impact. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode explains how to test AI solutions across four dimensions—accuracy, robustness, bias, and safety—because Domain 2E questions often require you to choose a test plan that reflects real operational risk. You’ll learn how accuracy testing confirms objective performance, robustness testing checks stability under noise and edge cases, bias testing evaluates unequal outcomes and proxy effects, and safety testing looks for harmful behaviors and failure modes that matter to stakeholders. We’ll cover how to document tests so they are auditable, including defined criteria, representative datasets, controlled scenarios, and repeatable methods that can be rerun after changes. You’ll also learn common exam traps, such as relying on a single metric, testing only in ideal lab conditions, or claiming safety is handled by policy without evidence. By the end, you should be able to select exam answers that build a balanced, evidence-driven testing approach tied to the use case and its decision impact. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:11:26 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/b735b6ab/864c659b.mp3" length="38297399" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>957</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode explains how to test AI solutions across four dimensions—accuracy, robustness, bias, and safety—because Domain 2E questions often require you to choose a test plan that reflects real operational risk. You’ll learn how accuracy testing confirms objective performance, robustness testing checks stability under noise and edge cases, bias testing evaluates unequal outcomes and proxy effects, and safety testing looks for harmful behaviors and failure modes that matter to stakeholders. We’ll cover how to document tests so they are auditable, including defined criteria, representative datasets, controlled scenarios, and repeatable methods that can be rerun after changes. You’ll also learn common exam traps, such as relying on a single metric, testing only in ideal lab conditions, or claiming safety is handled by policy without evidence. By the end, you should be able to select exam answers that build a balanced, evidence-driven testing approach tied to the use case and its decision impact. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/b735b6ab/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 78 — Choose AI testing methods that match the risk of the use case (Domain 2E)</title>
      <itunes:episode>78</itunes:episode>
      <podcast:episode>78</podcast:episode>
      <itunes:title>Episode 78 — Choose AI testing methods that match the risk of the use case (Domain 2E)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2bc094a5-2314-4a15-82d4-4503ca77a4b4</guid>
      <link>https://share.transistor.fm/s/8012df17</link>
      <description>
        <![CDATA[<p>This episode teaches you how to choose testing methods that match use-case risk, because Domain 2E expects you to scale testing depth based on impact, not apply a one-size-fits-all checklist. You’ll learn how high-impact decisions demand deeper validation, broader scenario coverage, stronger segment analysis, and stricter acceptance thresholds, while lower-impact decisions can use lighter-weight testing with clear monitoring and escalation safeguards. We’ll cover method selection in practical terms, such as when to use holdout validation, stress and adversarial testing, out-of-distribution checks, human review sampling, and post-deployment shadow testing before full automation. You’ll also learn how to justify testing choices with governance language, linking methods to risk appetite, ethical constraints, privacy exposure, and the organization’s ability to supervise outcomes in production. By the end, you should be able to answer exam scenarios by selecting the testing approach that is proportional, auditable, and operationally realistic. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches you how to choose testing methods that match use-case risk, because Domain 2E expects you to scale testing depth based on impact, not apply a one-size-fits-all checklist. You’ll learn how high-impact decisions demand deeper validation, broader scenario coverage, stronger segment analysis, and stricter acceptance thresholds, while lower-impact decisions can use lighter-weight testing with clear monitoring and escalation safeguards. We’ll cover method selection in practical terms, such as when to use holdout validation, stress and adversarial testing, out-of-distribution checks, human review sampling, and post-deployment shadow testing before full automation. You’ll also learn how to justify testing choices with governance language, linking methods to risk appetite, ethical constraints, privacy exposure, and the organization’s ability to supervise outcomes in production. By the end, you should be able to answer exam scenarios by selecting the testing approach that is proportional, auditable, and operationally realistic. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:11:49 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/8012df17/ffd381a7.mp3" length="41156242" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>1028</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches you how to choose testing methods that match use-case risk, because Domain 2E expects you to scale testing depth based on impact, not apply a one-size-fits-all checklist. You’ll learn how high-impact decisions demand deeper validation, broader scenario coverage, stronger segment analysis, and stricter acceptance thresholds, while lower-impact decisions can use lighter-weight testing with clear monitoring and escalation safeguards. We’ll cover method selection in practical terms, such as when to use holdout validation, stress and adversarial testing, out-of-distribution checks, human review sampling, and post-deployment shadow testing before full automation. You’ll also learn how to justify testing choices with governance language, linking methods to risk appetite, ethical constraints, privacy exposure, and the organization’s ability to supervise outcomes in production. By the end, you should be able to answer exam scenarios by selecting the testing approach that is proportional, auditable, and operationally realistic. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Episode 79 — Evaluate the design and effectiveness of AI-specific controls (Task 12)</title>
      <itunes:episode>79</itunes:episode>
      <podcast:episode>79</podcast:episode>
      <itunes:title>Episode 79 — Evaluate the design and effectiveness of AI-specific controls (Task 12)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ef0d2e63-800e-4798-8414-3c3d7c3a6b24</guid>
      <link>https://share.transistor.fm/s/03bb8d6f</link>
      <description>
        <![CDATA[<p>This episode focuses on evaluating the design and effectiveness of AI-specific controls, because Task 12 is about proving that controls exist for AI risks that traditional IT controls do not fully address. You’ll learn how to identify AI-specific controls across data governance, model validation, explainability requirements, drift monitoring, human oversight triggers, and change management that treats model updates as outcome-changing events. We’ll cover how to evaluate control design by checking whether each control addresses a defined risk, whether it has an owner, whether it can be performed consistently, and whether it produces evidence that can be sampled and verified. You’ll also learn how to evaluate effectiveness by looking for operational results: fewer harmful outcomes, timely escalations, consistent documentation, and changes to controls when monitoring reveals weakness. By the end, you should be able to choose exam answers that emphasize well-designed, testable controls tied to risk and evidence, not generic statements like “follow best practices.” Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on evaluating the design and effectiveness of AI-specific controls, because Task 12 is about proving that controls exist for AI risks that traditional IT controls do not fully address. You’ll learn how to identify AI-specific controls across data governance, model validation, explainability requirements, drift monitoring, human oversight triggers, and change management that treats model updates as outcome-changing events. We’ll cover how to evaluate control design by checking whether each control addresses a defined risk, whether it has an owner, whether it can be performed consistently, and whether it produces evidence that can be sampled and verified. You’ll also learn how to evaluate effectiveness by looking for operational results: fewer harmful outcomes, timely escalations, consistent documentation, and changes to controls when monitoring reveals weakness. By the end, you should be able to choose exam answers that emphasize well-designed, testable controls tied to risk and evidence, not generic statements like “follow best practices.” Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:12:19 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/03bb8d6f/4ebe9a26.mp3" length="43056907" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>1076</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on evaluating the design and effectiveness of AI-specific controls, because Task 12 is about proving that controls exist for AI risks that traditional IT controls do not fully address. You’ll learn how to identify AI-specific controls across data governance, model validation, explainability requirements, drift monitoring, human oversight triggers, and change management that treats model updates as outcome-changing events. We’ll cover how to evaluate control design by checking whether each control addresses a defined risk, whether it has an owner, whether it can be performed consistently, and whether it produces evidence that can be sampled and verified. You’ll also learn how to evaluate effectiveness by looking for operational results: fewer harmful outcomes, timely escalations, consistent documentation, and changes to controls when monitoring reveals weakness. By the end, you should be able to choose exam answers that emphasize well-designed, testable controls tied to risk and evidence, not generic statements like “follow best practices.” Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/03bb8d6f/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 80 — Prove AI controls work over time, not only on launch day (Task 12)</title>
      <itunes:episode>80</itunes:episode>
      <podcast:episode>80</podcast:episode>
      <itunes:title>Episode 80 — Prove AI controls work over time, not only on launch day (Task 12)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">63f7ec04-2dfb-4a9d-bad5-3ead04d1a4be</guid>
      <link>https://share.transistor.fm/s/abaee9a1</link>
      <description>
        <![CDATA[<p>This episode teaches you how to prove AI controls work over time, because Task 12 often tests whether you can validate continuous control effectiveness in a world where data, models, and environments change. You’ll learn how controls degrade when monitoring is ignored, when ownership shifts, when data sources evolve, and when model updates happen without full validation and documentation. We’ll cover approaches to ongoing assurance, such as periodic control testing, sampling of decisions and reviewer outcomes, trend analysis on incidents and exceptions, and governance reviews that confirm metrics lead to corrective actions. You’ll also learn what evidence proves durability, including recurring reports, audit logs, follow-up validation after changes, and documented improvements based on lessons learned. By the end, you should be ready to answer exam scenarios by selecting the approach that demonstrates sustained control operation and accountability, rather than a one-time compliance effort at deployment. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches you how to prove AI controls work over time, because Task 12 often tests whether you can validate continuous control effectiveness in a world where data, models, and environments change. You’ll learn how controls degrade when monitoring is ignored, when ownership shifts, when data sources evolve, and when model updates happen without full validation and documentation. We’ll cover approaches to ongoing assurance, such as periodic control testing, sampling of decisions and reviewer outcomes, trend analysis on incidents and exceptions, and governance reviews that confirm metrics lead to corrective actions. You’ll also learn what evidence proves durability, including recurring reports, audit logs, follow-up validation after changes, and documented improvements based on lessons learned. By the end, you should be ready to answer exam scenarios by selecting the approach that demonstrates sustained control operation and accountability, rather than a one-time compliance effort at deployment. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:12:49 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/abaee9a1/e56317e5.mp3" length="39665158" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>991</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches you how to prove AI controls work over time, because Task 12 often tests whether you can validate continuous control effectiveness in a world where data, models, and environments change. You’ll learn how controls degrade when monitoring is ignored, when ownership shifts, when data sources evolve, and when model updates happen without full validation and documentation. We’ll cover approaches to ongoing assurance, such as periodic control testing, sampling of decisions and reviewer outcomes, trend analysis on incidents and exceptions, and governance reviews that confirm metrics lead to corrective actions. You’ll also learn what evidence proves durability, including recurring reports, audit logs, follow-up validation after changes, and documented improvements based on lessons learned. By the end, you should be ready to answer exam scenarios by selecting the approach that demonstrates sustained control operation and accountability, rather than a one-time compliance effort at deployment. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/abaee9a1/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 81 — Evaluate AI threats and vulnerabilities that do not exist in normal IT (Domain 2F)</title>
      <itunes:episode>81</itunes:episode>
      <podcast:episode>81</podcast:episode>
      <itunes:title>Episode 81 — Evaluate AI threats and vulnerabilities that do not exist in normal IT (Domain 2F)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">aa978bb5-ccf0-486a-8f97-2b07e5ee62a9</guid>
      <link>https://share.transistor.fm/s/12138267</link>
      <description>
        <![CDATA[<p>This episode explains AI-specific threats and vulnerabilities that go beyond normal IT risk, which matters for Domain 2F because AAIA expects you to recognize failure modes unique to models, data pipelines, and inference behavior. You’ll learn how threats shift from “break the server” to “break the decision,” including manipulation of inputs, abuse of model behavior, leakage of sensitive outputs, and attacks that degrade performance without obvious outages. We’ll cover how AI risk is introduced through training data, feature engineering, model interfaces, and monitoring gaps, and how traditional vulnerability scans may miss these weaknesses entirely. You’ll also learn what evidence auditors should look for, such as threat models that include AI abuse cases, controls that protect model artifacts and data integrity, and monitoring that detects suspicious inference patterns. By the end, you should be able to choose exam answers that treat AI security as outcome-protection with testable controls, not just a rebrand of standard IT hardening. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode explains AI-specific threats and vulnerabilities that go beyond normal IT risk, which matters for Domain 2F because AAIA expects you to recognize failure modes unique to models, data pipelines, and inference behavior. You’ll learn how threats shift from “break the server” to “break the decision,” including manipulation of inputs, abuse of model behavior, leakage of sensitive outputs, and attacks that degrade performance without obvious outages. We’ll cover how AI risk is introduced through training data, feature engineering, model interfaces, and monitoring gaps, and how traditional vulnerability scans may miss these weaknesses entirely. You’ll also learn what evidence auditors should look for, such as threat models that include AI abuse cases, controls that protect model artifacts and data integrity, and monitoring that detects suspicious inference patterns. By the end, you should be able to choose exam answers that treat AI security as outcome-protection with testable controls, not just a rebrand of standard IT hardening. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:13:12 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/12138267/ea9e995c.mp3" length="35640243" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>890</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode explains AI-specific threats and vulnerabilities that go beyond normal IT risk, which matters for Domain 2F because AAIA expects you to recognize failure modes unique to models, data pipelines, and inference behavior. You’ll learn how threats shift from “break the server” to “break the decision,” including manipulation of inputs, abuse of model behavior, leakage of sensitive outputs, and attacks that degrade performance without obvious outages. We’ll cover how AI risk is introduced through training data, feature engineering, model interfaces, and monitoring gaps, and how traditional vulnerability scans may miss these weaknesses entirely. You’ll also learn what evidence auditors should look for, such as threat models that include AI abuse cases, controls that protect model artifacts and data integrity, and monitoring that detects suspicious inference patterns. By the end, you should be able to choose exam answers that treat AI security as outcome-protection with testable controls, not just a rebrand of standard IT hardening. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/12138267/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 82 — Understand data poisoning, evasion, and model theft in plain language (Domain 2F)</title>
      <itunes:episode>82</itunes:episode>
      <podcast:episode>82</podcast:episode>
      <itunes:title>Episode 82 — Understand data poisoning, evasion, and model theft in plain language (Domain 2F)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c6997393-2be8-49b3-8546-1d7e566026b8</guid>
      <link>https://share.transistor.fm/s/0a412024</link>
      <description>
        <![CDATA[<p>This episode breaks down three high-yield AI attack categories—data poisoning, evasion, and model theft—in plain language so you can recognize them in AAIA scenarios and select realistic controls. You’ll learn how poisoning alters training data or labels so the model learns the wrong patterns, how evasion manipulates inputs at inference time to trick outputs without changing the model, and how model theft targets the model artifact or recreates it through repeated queries. We’ll connect each attack type to audit implications: what controls reduce exposure, what monitoring detects abnormal behavior, and what evidence proves the organization can respond. You’ll also learn common exam traps, such as confusing poisoning with drift, or treating model theft as just “data loss” without addressing API abuse and query logging. By the end, you should be able to match the threat to the right prevention and detection controls, expressed in auditable evidence terms. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode breaks down three high-yield AI attack categories—data poisoning, evasion, and model theft—in plain language so you can recognize them in AAIA scenarios and select realistic controls. You’ll learn how poisoning alters training data or labels so the model learns the wrong patterns, how evasion manipulates inputs at inference time to trick outputs without changing the model, and how model theft targets the model artifact or recreates it through repeated queries. We’ll connect each attack type to audit implications: what controls reduce exposure, what monitoring detects abnormal behavior, and what evidence proves the organization can respond. You’ll also learn common exam traps, such as confusing poisoning with drift, or treating model theft as just “data loss” without addressing API abuse and query logging. By the end, you should be able to match the threat to the right prevention and detection controls, expressed in auditable evidence terms. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:13:42 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/0a412024/a2ed21a5.mp3" length="34595343" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>864</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode breaks down three high-yield AI attack categories—data poisoning, evasion, and model theft—in plain language so you can recognize them in AAIA scenarios and select realistic controls. You’ll learn how poisoning alters training data or labels so the model learns the wrong patterns, how evasion manipulates inputs at inference time to trick outputs without changing the model, and how model theft targets the model artifact or recreates it through repeated queries. We’ll connect each attack type to audit implications: what controls reduce exposure, what monitoring detects abnormal behavior, and what evidence proves the organization can respond. You’ll also learn common exam traps, such as confusing poisoning with drift, or treating model theft as just “data loss” without addressing API abuse and query logging. By the end, you should be able to match the threat to the right prevention and detection controls, expressed in auditable evidence terms. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/0a412024/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 83 — Evaluate AI threat and vulnerability management programs for real coverage (Task 19)</title>
      <itunes:episode>83</itunes:episode>
      <podcast:episode>83</podcast:episode>
      <itunes:title>Episode 83 — Evaluate AI threat and vulnerability management programs for real coverage (Task 19)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">48e4a27a-c3da-4502-8fca-6a6eb778ac3a</guid>
      <link>https://share.transistor.fm/s/01cbf17a</link>
      <description>
        <![CDATA[<p>This episode teaches you how to evaluate whether an AI threat and vulnerability management program has real coverage, because Task 19 scenarios often describe “we have a program” while leaving model and data risks unaddressed. You’ll learn how to assess scope first: whether the program includes training pipelines, data stores, model registries, inference endpoints, prompt interfaces where applicable, and third-party components that influence outcomes. We’ll cover what “coverage” looks like beyond scanning, including threat modeling for AI abuse cases, secure design reviews for model interfaces, integrity controls for datasets, and monitoring for suspicious inference patterns. You’ll also learn what evidence proves the program operates, such as tracked findings, prioritized remediation tied to decision impact, change records showing fixes deployed, and repeat testing that confirms risks were reduced. By the end, you should be able to answer exam questions by selecting the option that expands traditional vulnerability management into AI-relevant controls and auditable assurance, not just reusing existing IT processes unchanged. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches you how to evaluate whether an AI threat and vulnerability management program has real coverage, because Task 19 scenarios often describe “we have a program” while leaving model and data risks unaddressed. You’ll learn how to assess scope first: whether the program includes training pipelines, data stores, model registries, inference endpoints, prompt interfaces where applicable, and third-party components that influence outcomes. We’ll cover what “coverage” looks like beyond scanning, including threat modeling for AI abuse cases, secure design reviews for model interfaces, integrity controls for datasets, and monitoring for suspicious inference patterns. You’ll also learn what evidence proves the program operates, such as tracked findings, prioritized remediation tied to decision impact, change records showing fixes deployed, and repeat testing that confirms risks were reduced. By the end, you should be able to answer exam questions by selecting the option that expands traditional vulnerability management into AI-relevant controls and auditable assurance, not just reusing existing IT processes unchanged. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:14:21 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/01cbf17a/b0d94352.mp3" length="35248411" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>880</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches you how to evaluate whether an AI threat and vulnerability management program has real coverage, because Task 19 scenarios often describe “we have a program” while leaving model and data risks unaddressed. You’ll learn how to assess scope first: whether the program includes training pipelines, data stores, model registries, inference endpoints, prompt interfaces where applicable, and third-party components that influence outcomes. We’ll cover what “coverage” looks like beyond scanning, including threat modeling for AI abuse cases, secure design reviews for model interfaces, integrity controls for datasets, and monitoring for suspicious inference patterns. You’ll also learn what evidence proves the program operates, such as tracked findings, prioritized remediation tied to decision impact, change records showing fixes deployed, and repeat testing that confirms risks were reduced. By the end, you should be able to answer exam questions by selecting the option that expands traditional vulnerability management into AI-relevant controls and auditable assurance, not just reusing existing IT processes unchanged. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Episode 84 — Build threat monitoring that catches abuse of models and prompts early (Task 19)</title>
      <itunes:episode>84</itunes:episode>
      <podcast:episode>84</podcast:episode>
      <itunes:title>Episode 84 — Build threat monitoring that catches abuse of models and prompts early (Task 19)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4a76b6ab-b353-4cbb-9f5d-2b167c39916d</guid>
      <link>https://share.transistor.fm/s/9964795a</link>
      <description>
        <![CDATA[<p>This episode focuses on threat monitoring that detects abuse of models and prompt interfaces early, because Task 19 expects monitoring to catch misuse patterns before they become data loss, harmful outputs, or operational incidents. You’ll learn what “abuse” looks like in logs and metrics, including abnormal query rates, unusual input patterns, repeated probing for sensitive outputs, attempts to bypass safeguards, and spikes in errors or timeouts that suggest automated attacks. We’ll cover how to design monitoring with clear thresholds and escalation paths, so alerts convert into actions such as rate limiting, access revocation, increased human review, rollback of a risky configuration, or incident response activation. You’ll also learn what evidence auditors need to see: defined monitoring objectives, documented alert rules, ownership for response, and records showing alerts were investigated and resolved. By the end, you should be able to choose exam answers that treat monitoring as a measurable control tied to abuse detection and accountable response, not just “we collect logs.” Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on threat monitoring that detects abuse of models and prompt interfaces early, because Task 19 expects monitoring to catch misuse patterns before they become data loss, harmful outputs, or operational incidents. You’ll learn what “abuse” looks like in logs and metrics, including abnormal query rates, unusual input patterns, repeated probing for sensitive outputs, attempts to bypass safeguards, and spikes in errors or timeouts that suggest automated attacks. We’ll cover how to design monitoring with clear thresholds and escalation paths, so alerts convert into actions such as rate limiting, access revocation, increased human review, rollback of a risky configuration, or incident response activation. You’ll also learn what evidence auditors need to see: defined monitoring objectives, documented alert rules, ownership for response, and records showing alerts were investigated and resolved. By the end, you should be able to choose exam answers that treat monitoring as a measurable control tied to abuse detection and accountable response, not just “we collect logs.” Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:14:42 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/9964795a/bfd1492f.mp3" length="33068746" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>826</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on threat monitoring that detects abuse of models and prompt interfaces early, because Task 19 expects monitoring to catch misuse patterns before they become data loss, harmful outputs, or operational incidents. You’ll learn what “abuse” looks like in logs and metrics, including abnormal query rates, unusual input patterns, repeated probing for sensitive outputs, attempts to bypass safeguards, and spikes in errors or timeouts that suggest automated attacks. We’ll cover how to design monitoring with clear thresholds and escalation paths, so alerts convert into actions such as rate limiting, access revocation, increased human review, rollback of a risky configuration, or incident response activation. You’ll also learn what evidence auditors need to see: defined monitoring objectives, documented alert rules, ownership for response, and records showing alerts were investigated and resolved. By the end, you should be able to choose exam answers that treat monitoring as a measurable control tied to abuse detection and accountable response, not just “we collect logs.” Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/9964795a/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 85 — Evaluate identity and access management for AI models, data, and keys (Task 16)</title>
      <itunes:episode>85</itunes:episode>
      <podcast:episode>85</podcast:episode>
      <itunes:title>Episode 85 — Evaluate identity and access management for AI models, data, and keys (Task 16)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ae3414c5-b101-42fc-86a0-028ce1ed8ab7</guid>
      <link>https://share.transistor.fm/s/d6770c39</link>
      <description>
        <![CDATA[<p>This episode teaches you how to evaluate identity and access management for AI systems, because Task 16 scenarios often test whether you protect the most sensitive assets: models, training data, and the keys and tokens that enable inference and integrations. You’ll learn to map identities across humans, service accounts, automation, and vendor access, then verify that each role has only the permissions needed to perform approved tasks. We’ll cover why access is more complex in AI, including separate access paths for datasets, labeling tools, model registries, deployment pipelines, and inference endpoints, plus secrets management for API keys and signing keys. You’ll also learn what evidence auditors expect, such as role definitions, access reviews, approval records for privileged access, key rotation practices, and logs that show access and changes are monitored. By the end, you should be able to answer exam questions by choosing IAM controls that preserve integrity, confidentiality, and accountability across the AI lifecycle, not just at the application layer. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches you how to evaluate identity and access management for AI systems, because Task 16 scenarios often test whether you protect the most sensitive assets: models, training data, and the keys and tokens that enable inference and integrations. You’ll learn to map identities across humans, service accounts, automation, and vendor access, then verify that each role has only the permissions needed to perform approved tasks. We’ll cover why access is more complex in AI, including separate access paths for datasets, labeling tools, model registries, deployment pipelines, and inference endpoints, plus secrets management for API keys and signing keys. You’ll also learn what evidence auditors expect, such as role definitions, access reviews, approval records for privileged access, key rotation practices, and logs that show access and changes are monitored. By the end, you should be able to answer exam questions by choosing IAM controls that preserve integrity, confidentiality, and accountability across the AI lifecycle, not just at the application layer. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:15:15 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/d6770c39/00b0de4f.mp3" length="33700907" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>842</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches you how to evaluate identity and access management for AI systems, because Task 16 scenarios often test whether you protect the most sensitive assets: models, training data, and the keys and tokens that enable inference and integrations. You’ll learn to map identities across humans, service accounts, automation, and vendor access, then verify that each role has only the permissions needed to perform approved tasks. We’ll cover why access is more complex in AI, including separate access paths for datasets, labeling tools, model registries, deployment pipelines, and inference endpoints, plus secrets management for API keys and signing keys. You’ll also learn what evidence auditors expect, such as role definitions, access reviews, approval records for privileged access, key rotation practices, and logs that show access and changes are monitored. By the end, you should be able to answer exam questions by choosing IAM controls that preserve integrity, confidentiality, and accountability across the AI lifecycle, not just at the application layer. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/d6770c39/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 86 — Audit least privilege for pipelines, service accounts, and model endpoints (Task 16)</title>
      <itunes:episode>86</itunes:episode>
      <podcast:episode>86</podcast:episode>
      <itunes:title>Episode 86 — Audit least privilege for pipelines, service accounts, and model endpoints (Task 16)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">fa8a3b11-6bbc-432e-82d5-11ee6cd64595</guid>
      <link>https://share.transistor.fm/s/fb8af3b7</link>
      <description>
        <![CDATA[<p>This episode focuses on auditing least privilege in the places where AI systems most often break it: pipelines, service accounts, and model endpoints. You’ll learn how “too much access” creates unique AI risks, such as unauthorized dataset changes, silent model swaps, tampering with thresholds, or abuse of inference APIs to extract sensitive behavior and outputs. We’ll cover how to test least privilege by examining role design, permission scopes, separation between development and production, and whether service accounts are tightly constrained with short-lived credentials and strong logging. You’ll also learn practical audit steps, such as sampling recent pipeline runs and deployments to verify approvals, checking endpoint policies for rate limits and authentication strength, and validating that privileged actions generate alerts and are reviewed. By the end, you should be able to choose AAIA answers that enforce least privilege with measurable controls and evidence, rather than assuming “we use RBAC” automatically means access is safe. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on auditing least privilege in the places where AI systems most often break it: pipelines, service accounts, and model endpoints. You’ll learn how “too much access” creates unique AI risks, such as unauthorized dataset changes, silent model swaps, tampering with thresholds, or abuse of inference APIs to extract sensitive behavior and outputs. We’ll cover how to test least privilege by examining role design, permission scopes, separation between development and production, and whether service accounts are tightly constrained with short-lived credentials and strong logging. You’ll also learn practical audit steps, such as sampling recent pipeline runs and deployments to verify approvals, checking endpoint policies for rate limits and authentication strength, and validating that privileged actions generate alerts and are reviewed. By the end, you should be able to choose AAIA answers that enforce least privilege with measurable controls and evidence, rather than assuming “we use RBAC” automatically means access is safe. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:15:54 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/fb8af3b7/9f7f18f4.mp3" length="31688443" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>791</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on auditing least privilege in the places where AI systems most often break it: pipelines, service accounts, and model endpoints. You’ll learn how “too much access” creates unique AI risks, such as unauthorized dataset changes, silent model swaps, tampering with thresholds, or abuse of inference APIs to extract sensitive behavior and outputs. We’ll cover how to test least privilege by examining role design, permission scopes, separation between development and production, and whether service accounts are tightly constrained with short-lived credentials and strong logging. You’ll also learn practical audit steps, such as sampling recent pipeline runs and deployments to verify approvals, checking endpoint policies for rate limits and authentication strength, and validating that privileged actions generate alerts and are reviewed. By the end, you should be able to choose AAIA answers that enforce least privilege with measurable controls and evidence, rather than assuming “we use RBAC” automatically means access is safe. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/fb8af3b7/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 87 — Evaluate AI vendors and supply chain controls where your visibility ends (Task 10)</title>
      <itunes:episode>87</itunes:episode>
      <podcast:episode>87</podcast:episode>
      <itunes:title>Episode 87 — Evaluate AI vendors and supply chain controls where your visibility ends (Task 10)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4f918526-5b2f-417e-ab53-abe3f25af5bb</guid>
      <link>https://share.transistor.fm/s/1f600bad</link>
      <description>
        <![CDATA[<p>This episode explains how to evaluate AI vendors and supply chain controls when your visibility ends at the contract boundary, because Task 10 often tests whether you can demand accountability and evidence without assuming you can “audit the vendor’s code.” You’ll learn how to assess vendor risk by focusing on what the vendor provides—models, data, tooling, hosting, or APIs—and what that means for data handling, model behavior, monitoring responsibilities, and incident response. We’ll cover practical controls such as due diligence questionnaires tailored to AI, defined security and privacy obligations, audit rights where feasible, clear service-level commitments, and requirements for transparency on model updates that change outcomes. You’ll also learn how to evaluate integration risk, including how keys are managed, how logs are shared, and how the organization supervises outputs when the model is effectively a black box. By the end, you should be able to choose exam answers that reduce vendor risk through enforceable controls and evidence, not through trust or vague “vendor assurance.” Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode explains how to evaluate AI vendors and supply chain controls when your visibility ends at the contract boundary, because Task 10 often tests whether you can demand accountability and evidence without assuming you can “audit the vendor’s code.” You’ll learn how to assess vendor risk by focusing on what the vendor provides—models, data, tooling, hosting, or APIs—and what that means for data handling, model behavior, monitoring responsibilities, and incident response. We’ll cover practical controls such as due diligence questionnaires tailored to AI, defined security and privacy obligations, audit rights where feasible, clear service-level commitments, and requirements for transparency on model updates that change outcomes. You’ll also learn how to evaluate integration risk, including how keys are managed, how logs are shared, and how the organization supervises outputs when the model is effectively a black box. By the end, you should be able to choose exam answers that reduce vendor risk through enforceable controls and evidence, not through trust or vague “vendor assurance.” Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:16:19 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/1f600bad/f2edd45d.mp3" length="29695819" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>742</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode explains how to evaluate AI vendors and supply chain controls when your visibility ends at the contract boundary, because Task 10 often tests whether you can demand accountability and evidence without assuming you can “audit the vendor’s code.” You’ll learn how to assess vendor risk by focusing on what the vendor provides—models, data, tooling, hosting, or APIs—and what that means for data handling, model behavior, monitoring responsibilities, and incident response. We’ll cover practical controls such as due diligence questionnaires tailored to AI, defined security and privacy obligations, audit rights where feasible, clear service-level commitments, and requirements for transparency on model updates that change outcomes. You’ll also learn how to evaluate integration risk, including how keys are managed, how logs are shared, and how the organization supervises outputs when the model is effectively a black box. By the end, you should be able to choose exam answers that reduce vendor risk through enforceable controls and evidence, not through trust or vague “vendor assurance.” Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/1f600bad/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 88 — Audit AI vendor claims, contracts, and control evidence without getting sold (Task 10)</title>
      <itunes:episode>88</itunes:episode>
      <podcast:episode>88</podcast:episode>
      <itunes:title>Episode 88 — Audit AI vendor claims, contracts, and control evidence without getting sold (Task 10)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4758a957-36b6-4560-8d86-6883f303e2c2</guid>
      <link>https://share.transistor.fm/s/2f81630b</link>
      <description>
        <![CDATA[<p>This episode teaches you how to audit AI vendor claims, contracts, and control evidence without getting sold by polished marketing metrics and generic security statements. You’ll learn how to challenge claims like “fair,” “transparent,” “secure,” and “state-of-the-art” by asking for definitions, test methods, limitations, and what the vendor will do when outcomes cause harm or compliance exposure. We’ll cover contract terms that matter for AAIA scenarios, including data ownership and allowed use, retention and deletion, breach and incident notification, model update notice, availability commitments, audit rights, and responsibility splits for monitoring and human review. You’ll also learn how to evaluate vendor evidence, such as independent assessments, security documentation, validation reports, and operational runbooks, while recognizing what evidence is necessary versus merely impressive. By the end, you should be able to answer exam questions by choosing the option that converts vendor promises into enforceable obligations and auditable evidence, rather than accepting assurances at face value. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches you how to audit AI vendor claims, contracts, and control evidence without getting sold by polished marketing metrics and generic security statements. You’ll learn how to challenge claims like “fair,” “transparent,” “secure,” and “state-of-the-art” by asking for definitions, test methods, limitations, and what the vendor will do when outcomes cause harm or compliance exposure. We’ll cover contract terms that matter for AAIA scenarios, including data ownership and allowed use, retention and deletion, breach and incident notification, model update notice, availability commitments, audit rights, and responsibility splits for monitoring and human review. You’ll also learn how to evaluate vendor evidence, such as independent assessments, security documentation, validation reports, and operational runbooks, while recognizing what evidence is necessary versus merely impressive. By the end, you should be able to answer exam questions by choosing the option that converts vendor promises into enforceable obligations and auditable evidence, rather than accepting assurances at face value. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:17:25 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/2f81630b/2c3cbfe7.mp3" length="30352023" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>758</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches you how to audit AI vendor claims, contracts, and control evidence without getting sold by polished marketing metrics and generic security statements. You’ll learn how to challenge claims like “fair,” “transparent,” “secure,” and “state-of-the-art” by asking for definitions, test methods, limitations, and what the vendor will do when outcomes cause harm or compliance exposure. We’ll cover contract terms that matter for AAIA scenarios, including data ownership and allowed use, retention and deletion, breach and incident notification, model update notice, availability commitments, audit rights, and responsibility splits for monitoring and human review. You’ll also learn how to evaluate vendor evidence, such as independent assessments, security documentation, validation reports, and operational runbooks, while recognizing what evidence is necessary versus merely impressive. By the end, you should be able to answer exam questions by choosing the option that converts vendor promises into enforceable obligations and auditable evidence, rather than accepting assurances at face value. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/2f81630b/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 89 — Evaluate AI problem and incident management programs for fast containment (Task 20)</title>
      <itunes:episode>89</itunes:episode>
      <podcast:episode>89</podcast:episode>
      <itunes:title>Episode 89 — Evaluate AI problem and incident management programs for fast containment (Task 20)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c5aa95aa-be11-428c-ad88-b4eb00460be7</guid>
      <link>https://share.transistor.fm/s/6dc98e45</link>
      <description>
        <![CDATA[<p>This episode focuses on evaluating AI problem and incident management programs with an emphasis on fast containment, because Task 20 scenarios often involve harmful outputs, drift-driven failures, or abuse patterns that require immediate action. You’ll learn how AI incidents differ from typical IT incidents, including the need to stop harmful decisions quickly, preserve evidence about model versions and data states, and communicate clearly about decision impact and stakeholder harm. We’ll cover what strong programs include: defined incident types for AI; clear severity criteria tied to decision impact; containment options like rollback, disabling automation, tightening thresholds, and increasing human review; and a path from incident response into problem management so root causes are addressed. You’ll also learn the evidence auditors expect, such as incident runbooks, escalation records, post-incident reviews, and tracked corrective actions that prevent repeat failures. By the end, you should be able to choose exam answers that prioritize containment and accountability with traceable evidence, not slow investigations while harm continues. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on evaluating AI problem and incident management programs with an emphasis on fast containment, because Task 20 scenarios often involve harmful outputs, drift-driven failures, or abuse patterns that require immediate action. You’ll learn how AI incidents differ from typical IT incidents, including the need to stop harmful decisions quickly, preserve evidence about model versions and data states, and communicate clearly about decision impact and stakeholder harm. We’ll cover what strong programs include: defined incident types for AI; clear severity criteria tied to decision impact; containment options like rollback, disabling automation, tightening thresholds, and increasing human review; and a path from incident response into problem management so root causes are addressed. You’ll also learn the evidence auditors expect, such as incident runbooks, escalation records, post-incident reviews, and tracked corrective actions that prevent repeat failures. By the end, you should be able to choose exam answers that prioritize containment and accountability with traceable evidence, not slow investigations while harm continues. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:18:06 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/6dc98e45/c5e12b1c.mp3" length="31834727" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>795</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on evaluating AI problem and incident management programs with an emphasis on fast containment, because Task 20 scenarios often involve harmful outputs, drift-driven failures, or abuse patterns that require immediate action. You’ll learn how AI incidents differ from typical IT incidents, including the need to stop harmful decisions quickly, preserve evidence about model versions and data states, and communicate clearly about decision impact and stakeholder harm. We’ll cover what strong programs include: defined incident types for AI; clear severity criteria tied to decision impact; containment options like rollback, disabling automation, tightening thresholds, and increasing human review; and a path from incident response into problem management so root causes are addressed. You’ll also learn the evidence auditors expect, such as incident runbooks, escalation records, post-incident reviews, and tracked corrective actions that prevent repeat failures. By the end, you should be able to choose exam answers that prioritize containment and accountability with traceable evidence, not slow investigations while harm continues. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/6dc98e45/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 90 — Run AI incident response: detect, triage, contain, recover, and learn (Domain 2G)</title>
      <itunes:episode>90</itunes:episode>
      <podcast:episode>90</podcast:episode>
      <itunes:title>Episode 90 — Run AI incident response: detect, triage, contain, recover, and learn (Domain 2G)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4484c48b-c2f9-4693-8823-81a42abf5d43</guid>
      <link>https://share.transistor.fm/s/d1ed785c</link>
      <description>
        <![CDATA[<p>This episode walks through AI incident response as a complete lifecycle—detect, triage, contain, recover, and learn—because Domain 2G expects you to treat AI incidents as operational events with governance consequences and evidence requirements. You’ll learn how detection relies on monitoring, supervision, and stakeholder feedback, and how triage should quickly identify decision impact, affected populations, and whether the cause is drift, data issues, abuse, or a recent change. We’ll cover containment actions that reduce harm immediately, such as pausing automation, rolling back to a known-good model version, restricting access, and tightening review triggers, while preserving evidence like model versions, configuration states, and relevant logs. Recovery will include controlled remediation, re-validation, and careful reintroduction of automation, followed by learning activities like post-incident reviews, control improvements, and updates to runbooks and training. By the end, you should be able to answer exam scenarios by selecting the response that protects stakeholders, preserves accountability, and improves control resilience over time. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode walks through AI incident response as a complete lifecycle—detect, triage, contain, recover, and learn—because Domain 2G expects you to treat AI incidents as operational events with governance consequences and evidence requirements. You’ll learn how detection relies on monitoring, supervision, and stakeholder feedback, and how triage should quickly identify decision impact, affected populations, and whether the cause is drift, data issues, abuse, or a recent change. We’ll cover containment actions that reduce harm immediately, such as pausing automation, rolling back to a known-good model version, restricting access, and tightening review triggers, while preserving evidence like model versions, configuration states, and relevant logs. Recovery will include controlled remediation, re-validation, and careful reintroduction of automation, followed by learning activities like post-incident reviews, control improvements, and updates to runbooks and training. By the end, you should be able to answer exam scenarios by selecting the response that protects stakeholders, preserves accountability, and improves control resilience over time. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:18:36 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/d1ed785c/8c23f70e.mp3" length="33011278" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>824</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode walks through AI incident response as a complete lifecycle—detect, triage, contain, recover, and learn—because Domain 2G expects you to treat AI incidents as operational events with governance consequences and evidence requirements. You’ll learn how detection relies on monitoring, supervision, and stakeholder feedback, and how triage should quickly identify decision impact, affected populations, and whether the cause is drift, data issues, abuse, or a recent change. We’ll cover containment actions that reduce harm immediately, such as pausing automation, rolling back to a known-good model version, restricting access, and tightening review triggers, while preserving evidence like model versions, configuration states, and relevant logs. Recovery will include controlled remediation, re-validation, and careful reintroduction of automation, followed by learning activities like post-incident reviews, control improvements, and updates to runbooks and training. By the end, you should be able to answer exam scenarios by selecting the response that protects stakeholders, preserves accountability, and improves control resilience over time. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/d1ed785c/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 91 — Spaced Retrieval Review: Domain 2 operations and controls, simplified (Review: Domain 2)</title>
      <itunes:episode>91</itunes:episode>
      <podcast:episode>91</podcast:episode>
      <itunes:title>Episode 91 — Spaced Retrieval Review: Domain 2 operations and controls, simplified (Review: Domain 2)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d7d7fee8-4153-416e-b881-d3fd2168a7f7</guid>
      <link>https://share.transistor.fm/s/bb3ed0cf</link>
      <description>
        <![CDATA[<p>This review episode reinforces Domain 2 by pulling operations and controls into a compact, easy-to-recall mental model that matches how AAIA questions are written. You’ll revisit how data pipelines, development practices, deployment gates, monitoring, supervision, and security controls fit together, with quick reminders of what “good evidence” looks like for each area. We’ll refresh the control themes that show up repeatedly in scenarios, including versioning and reproducibility, access control for model artifacts and datasets, change management that treats updates as outcome-changing events, and supervision that detects harmful decisions before stakeholders do. You’ll also practice the exam pattern of turning a scenario into a risk statement, control intent, and evidence path, so you can eliminate distractors that sound technical but do not prove anything. By the end, Domain 2 should feel like an operational control story you can explain and defend, not a pile of disconnected terms. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This review episode reinforces Domain 2 by pulling operations and controls into a compact, easy-to-recall mental model that matches how AAIA questions are written. You’ll revisit how data pipelines, development practices, deployment gates, monitoring, supervision, and security controls fit together, with quick reminders of what “good evidence” looks like for each area. We’ll refresh the control themes that show up repeatedly in scenarios, including versioning and reproducibility, access control for model artifacts and datasets, change management that treats updates as outcome-changing events, and supervision that detects harmful decisions before stakeholders do. You’ll also practice the exam pattern of turning a scenario into a risk statement, control intent, and evidence path, so you can eliminate distractors that sound technical but do not prove anything. By the end, Domain 2 should feel like an operational control story you can explain and defend, not a pile of disconnected terms. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:19:02 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/bb3ed0cf/77f6101d.mp3" length="37319406" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>932</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This review episode reinforces Domain 2 by pulling operations and controls into a compact, easy-to-recall mental model that matches how AAIA questions are written. You’ll revisit how data pipelines, development practices, deployment gates, monitoring, supervision, and security controls fit together, with quick reminders of what “good evidence” looks like for each area. We’ll refresh the control themes that show up repeatedly in scenarios, including versioning and reproducibility, access control for model artifacts and datasets, change management that treats updates as outcome-changing events, and supervision that detects harmful decisions before stakeholders do. You’ll also practice the exam pattern of turning a scenario into a risk statement, control intent, and evidence path, so you can eliminate distractors that sound technical but do not prove anything. By the end, Domain 2 should feel like an operational control story you can explain and defend, not a pile of disconnected terms. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/bb3ed0cf/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 92 — Plan an AI audit: scope, criteria, stakeholders, and timing choices (Domain 3A)</title>
      <itunes:episode>92</itunes:episode>
      <podcast:episode>92</podcast:episode>
      <itunes:title>Episode 92 — Plan an AI audit: scope, criteria, stakeholders, and timing choices (Domain 3A)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4133d57a-b80b-4f6e-ac7c-b09ac8da593c</guid>
      <link>https://share.transistor.fm/s/48eeb31a</link>
      <description>
        <![CDATA[<p>This episode explains how to plan an AI audit in a way that produces a workable scope, clear criteria, the right stakeholders, and timing that fits the AI lifecycle. You’ll learn how to define scope by anchoring on the business decision the AI influences, the impacted systems and data flows, and the most meaningful risks, rather than scoping only to “the model.” We’ll cover criteria selection at a planning level, including how policies, regulations, standards, and internal risk appetite become audit criteria that can be tested with evidence. Stakeholder planning will focus on practical ownership: who owns the decision, who owns the model and data, who operates monitoring, and who has authority to accept risk or halt automation. Timing choices will include when to audit pre-deployment versus post-deployment, how to account for ongoing updates, and how to plan around retraining cycles and release windows so results are relevant. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode explains how to plan an AI audit in a way that produces a workable scope, clear criteria, the right stakeholders, and timing that fits the AI lifecycle. You’ll learn how to define scope by anchoring on the business decision the AI influences, the impacted systems and data flows, and the most meaningful risks, rather than scoping only to “the model.” We’ll cover criteria selection at a planning level, including how policies, regulations, standards, and internal risk appetite become audit criteria that can be tested with evidence. Stakeholder planning will focus on practical ownership: who owns the decision, who owns the model and data, who operates monitoring, and who has authority to accept risk or halt automation. Timing choices will include when to audit pre-deployment versus post-deployment, how to account for ongoing updates, and how to plan around retraining cycles and release windows so results are relevant. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:19:42 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/48eeb31a/aabce1e5.mp3" length="39639062" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>990</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode explains how to plan an AI audit in a way that produces a workable scope, clear criteria, the right stakeholders, and timing that fits the AI lifecycle. You’ll learn how to define scope by anchoring on the business decision the AI influences, the impacted systems and data flows, and the most meaningful risks, rather than scoping only to “the model.” We’ll cover criteria selection at a planning level, including how policies, regulations, standards, and internal risk appetite become audit criteria that can be tested with evidence. Stakeholder planning will focus on practical ownership: who owns the decision, who owns the model and data, who operates monitoring, and who has authority to accept risk or halt automation. Timing choices will include when to audit pre-deployment versus post-deployment, how to account for ongoing updates, and how to plan around retraining cycles and release windows so results are relevant. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/48eeb31a/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 93 — Build AI audit objectives that connect directly to business risk (Domain 3A)</title>
      <itunes:episode>93</itunes:episode>
      <podcast:episode>93</podcast:episode>
      <itunes:title>Episode 93 — Build AI audit objectives that connect directly to business risk (Domain 3A)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">090224e7-8682-4dd7-be96-be8294439f66</guid>
      <link>https://share.transistor.fm/s/b412c6d0</link>
      <description>
        <![CDATA[<p>This episode teaches you how to build audit objectives that connect directly to business risk, because AAIA scenarios often test whether you can write objectives that are meaningful and testable instead of generic. You’ll learn to express objectives in terms of what must be true for the AI use case to be acceptable, such as decisions being accurate enough for the purpose, fair within defined thresholds, compliant with privacy and policy constraints, and supervised with escalation paths that prevent ongoing harm. We’ll cover how to tie objectives to risk drivers like data quality, drift, third-party dependencies, and human oversight capacity, then translate each objective into the kinds of evidence you would expect to validate it. You’ll also learn how to avoid audit objectives that are too broad to test, or too technical to matter, by keeping the focus on outcomes and control intent. By the end, you should be able to read a scenario and choose the objective set that would produce a defensible audit conclusion aligned to business impact. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches you how to build audit objectives that connect directly to business risk, because AAIA scenarios often test whether you can write objectives that are meaningful and testable instead of generic. You’ll learn to express objectives in terms of what must be true for the AI use case to be acceptable, such as decisions being accurate enough for the purpose, fair within defined thresholds, compliant with privacy and policy constraints, and supervised with escalation paths that prevent ongoing harm. We’ll cover how to tie objectives to risk drivers like data quality, drift, third-party dependencies, and human oversight capacity, then translate each objective into the kinds of evidence you would expect to validate it. You’ll also learn how to avoid audit objectives that are too broad to test, or too technical to matter, by keeping the focus on outcomes and control intent. By the end, you should be able to read a scenario and choose the objective set that would produce a defensible audit conclusion aligned to business impact. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:20:13 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/b412c6d0/6a9cb4fd.mp3" length="39582631" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>989</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches you how to build audit objectives that connect directly to business risk, because AAIA scenarios often test whether you can write objectives that are meaningful and testable instead of generic. You’ll learn to express objectives in terms of what must be true for the AI use case to be acceptable, such as decisions being accurate enough for the purpose, fair within defined thresholds, compliant with privacy and policy constraints, and supervised with escalation paths that prevent ongoing harm. We’ll cover how to tie objectives to risk drivers like data quality, drift, third-party dependencies, and human oversight capacity, then translate each objective into the kinds of evidence you would expect to validate it. You’ll also learn how to avoid audit objectives that are too broad to test, or too technical to matter, by keeping the focus on outcomes and control intent. By the end, you should be able to read a scenario and choose the objective set that would produce a defensible audit conclusion aligned to business impact. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/b412c6d0/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 94 — Choose audit criteria for AI using policy, risk, and outcomes (Domain 3A)</title>
      <itunes:episode>94</itunes:episode>
      <podcast:episode>94</podcast:episode>
      <itunes:title>Episode 94 — Choose audit criteria for AI using policy, risk, and outcomes (Domain 3A)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">49390dd3-3b9a-48a7-ad9f-f132200564b0</guid>
      <link>https://share.transistor.fm/s/9d556f55</link>
      <description>
        <![CDATA[<p>This episode explains how to choose audit criteria for AI by using policy, risk, and outcomes, because AAIA expects you to build criteria that can be proven with evidence, not just referenced as “best practice.” You’ll learn how internal policies and procedures become criteria when they include roles, required steps, thresholds, approvals, and recordkeeping expectations. We’ll cover how risk appetite and decision impact shape criteria depth, such as stricter criteria for high-impact decisions that require stronger validation, monitoring, and human review triggers. Outcomes-based criteria will focus on what the organization must demonstrate in production, including stable performance, controlled drift response, fairness monitoring where applicable, and effective complaint and incident handling. You’ll also learn how to handle ambiguous criteria by looking for documented interpretations, approved standards mappings, and consistent enforcement across teams, rather than inventing requirements on the fly. By the end, you should be able to pick exam answers that define criteria in a way that is measurable, defensible, and aligned to the scenario’s real risk. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode explains how to choose audit criteria for AI by using policy, risk, and outcomes, because AAIA expects you to build criteria that can be proven with evidence, not just referenced as “best practice.” You’ll learn how internal policies and procedures become criteria when they include roles, required steps, thresholds, approvals, and recordkeeping expectations. We’ll cover how risk appetite and decision impact shape criteria depth, such as stricter criteria for high-impact decisions that require stronger validation, monitoring, and human review triggers. Outcomes-based criteria will focus on what the organization must demonstrate in production, including stable performance, controlled drift response, fairness monitoring where applicable, and effective complaint and incident handling. You’ll also learn how to handle ambiguous criteria by looking for documented interpretations, approved standards mappings, and consistent enforcement across teams, rather than inventing requirements on the fly. By the end, you should be able to pick exam answers that define criteria in a way that is measurable, defensible, and aligned to the scenario’s real risk. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:20:47 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/9d556f55/2ec7a6c9.mp3" length="34425009" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>860</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode explains how to choose audit criteria for AI by using policy, risk, and outcomes, because AAIA expects you to build criteria that can be proven with evidence, not just referenced as “best practice.” You’ll learn how internal policies and procedures become criteria when they include roles, required steps, thresholds, approvals, and recordkeeping expectations. We’ll cover how risk appetite and decision impact shape criteria depth, such as stricter criteria for high-impact decisions that require stronger validation, monitoring, and human review triggers. Outcomes-based criteria will focus on what the organization must demonstrate in production, including stable performance, controlled drift response, fairness monitoring where applicable, and effective complaint and incident handling. You’ll also learn how to handle ambiguous criteria by looking for documented interpretations, approved standards mappings, and consistent enforcement across teams, rather than inventing requirements on the fly. By the end, you should be able to pick exam answers that define criteria in a way that is measurable, defensible, and aligned to the scenario’s real risk. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/9d556f55/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 95 — Use audit techniques tailored to AI systems, not generic checklists (Domain 3B)</title>
      <itunes:episode>95</itunes:episode>
      <podcast:episode>95</podcast:episode>
      <itunes:title>Episode 95 — Use audit techniques tailored to AI systems, not generic checklists (Domain 3B)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">3f7fd5c3-c3e6-401b-88fe-8261ec6a8951</guid>
      <link>https://share.transistor.fm/s/b4a641a9</link>
      <description>
        <![CDATA[<p>This episode teaches audit techniques that are tailored to AI systems, because Domain 3B often tests whether you can select methods that match AI realities like data dependence, model updates, and outcome supervision. You’ll learn how to combine walkthroughs of data and decision flows with targeted control testing, including verifying approval gates, validating versioning and reproducibility, and checking that monitoring triggers actually lead to action. We’ll cover technique choices like inspecting lineage and change records, sampling outputs and reviewer decisions, testing exception handling and escalation paths, and evaluating whether governance decisions are recorded and followed through. You’ll also learn why generic checklist audits fail in AI contexts, especially when they ignore drift, proxy bias, vendor black boxes, or the difference between lab validation and production behavior. By the end, you should be able to choose exam answers that apply AI-aware audit techniques to produce evidence-backed conclusions rather than superficial compliance statements. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches audit techniques that are tailored to AI systems, because Domain 3B often tests whether you can select methods that match AI realities like data dependence, model updates, and outcome supervision. You’ll learn how to combine walkthroughs of data and decision flows with targeted control testing, including verifying approval gates, validating versioning and reproducibility, and checking that monitoring triggers actually lead to action. We’ll cover technique choices like inspecting lineage and change records, sampling outputs and reviewer decisions, testing exception handling and escalation paths, and evaluating whether governance decisions are recorded and followed through. You’ll also learn why generic checklist audits fail in AI contexts, especially when they ignore drift, proxy bias, vendor black boxes, or the difference between lab validation and production behavior. By the end, you should be able to choose exam answers that apply AI-aware audit techniques to produce evidence-backed conclusions rather than superficial compliance statements. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:21:15 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/b4a641a9/82f6d73e.mp3" length="34924482" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>872</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches audit techniques that are tailored to AI systems, because Domain 3B often tests whether you can select methods that match AI realities like data dependence, model updates, and outcome supervision. You’ll learn how to combine walkthroughs of data and decision flows with targeted control testing, including verifying approval gates, validating versioning and reproducibility, and checking that monitoring triggers actually lead to action. We’ll cover technique choices like inspecting lineage and change records, sampling outputs and reviewer decisions, testing exception handling and escalation paths, and evaluating whether governance decisions are recorded and followed through. You’ll also learn why generic checklist audits fail in AI contexts, especially when they ignore drift, proxy bias, vendor black boxes, or the difference between lab validation and production behavior. By the end, you should be able to choose exam answers that apply AI-aware audit techniques to produce evidence-backed conclusions rather than superficial compliance statements. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/b4a641a9/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 96 — Design sampling for AI decisions that reveals bias and failure modes (Domain 3B)</title>
      <itunes:episode>96</itunes:episode>
      <podcast:episode>96</podcast:episode>
      <itunes:title>Episode 96 — Design sampling for AI decisions that reveals bias and failure modes (Domain 3B)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f8b8d65b-d1d4-472b-ba14-f405757f6612</guid>
      <link>https://share.transistor.fm/s/b8b3d49e</link>
      <description>
        <![CDATA[<p>This episode focuses on designing sampling approaches that reveal bias and failure modes in AI decisions, because AAIA questions often ask what sampling plan best supports a defensible conclusion. You’ll learn how to sample across time, segments, and decision types so you can detect drift, representation gaps, and inconsistent outcomes that hide inside averages. We’ll cover how to choose samples that reflect decision impact, including oversampling edge cases, high-risk categories, and scenarios that historically produce complaints or manual overrides. You’ll also learn how to tie sampling to criteria, such as fairness thresholds, policy boundaries, and escalation requirements, so the sample proves whether controls operate as intended. Practical considerations will include ensuring your sample can be traced to logs, model versions, and data states, so results are reproducible and not disputed as “from a different model.” By the end, you should be able to choose exam answers that use sampling as a detection tool for real-world harm, not just as a box-checking method. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on designing sampling approaches that reveal bias and failure modes in AI decisions, because AAIA questions often ask what sampling plan best supports a defensible conclusion. You’ll learn how to sample across time, segments, and decision types so you can detect drift, representation gaps, and inconsistent outcomes that hide inside averages. We’ll cover how to choose samples that reflect decision impact, including oversampling edge cases, high-risk categories, and scenarios that historically produce complaints or manual overrides. You’ll also learn how to tie sampling to criteria, such as fairness thresholds, policy boundaries, and escalation requirements, so the sample proves whether controls operate as intended. Practical considerations will include ensuring your sample can be traced to logs, model versions, and data states, so results are reproducible and not disputed as “from a different model.” By the end, you should be able to choose exam answers that use sampling as a detection tool for real-world harm, not just as a box-checking method. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:21:38 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/b8b3d49e/d90eca9f.mp3" length="35975652" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>899</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on designing sampling approaches that reveal bias and failure modes in AI decisions, because AAIA questions often ask what sampling plan best supports a defensible conclusion. You’ll learn how to sample across time, segments, and decision types so you can detect drift, representation gaps, and inconsistent outcomes that hide inside averages. We’ll cover how to choose samples that reflect decision impact, including oversampling edge cases, high-risk categories, and scenarios that historically produce complaints or manual overrides. You’ll also learn how to tie sampling to criteria, such as fairness thresholds, policy boundaries, and escalation requirements, so the sample proves whether controls operate as intended. Practical considerations will include ensuring your sample can be traced to logs, model versions, and data states, so results are reproducible and not disputed as “from a different model.” By the end, you should be able to choose exam answers that use sampling as a detection tool for real-world harm, not just as a box-checking method. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/b8b3d49e/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 97 — Test AI controls with evidence, not opinions or vendor demos (Domain 3B)</title>
      <itunes:episode>97</itunes:episode>
      <podcast:episode>97</podcast:episode>
      <itunes:title>Episode 97 — Test AI controls with evidence, not opinions or vendor demos (Domain 3B)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">09426096-97dc-4d7a-9c3e-6b7344a977a6</guid>
      <link>https://share.transistor.fm/s/64cc42e7</link>
      <description>
        <![CDATA[<p>This episode teaches you how to test AI controls using evidence, because Domain 3B scenarios often tempt you to accept “trust me” statements, impressive demos, or subjective opinions as proof. You’ll learn how to define what evidence is required for common AI controls, such as approvals for model changes, validation reports tied to acceptance criteria, monitoring configurations with thresholds and escalation, access controls with logs, and supervision workflows with reviewer records. We’ll cover how to handle vendor-provided evidence by validating relevance, scope, timeliness, and responsibility splits, instead of assuming a generic report proves control effectiveness in your environment. You’ll also learn how to separate control design from operating effectiveness by looking for repeated performance over time, including trend reports, incident records, and follow-up actions that show governance responds to what monitoring reveals. By the end, you should be able to answer exam questions by selecting the option that produces verifiable evidence and traceable accountability, not the option that sounds most confident. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches you how to test AI controls using evidence, because Domain 3B scenarios often tempt you to accept “trust me” statements, impressive demos, or subjective opinions as proof. You’ll learn how to define what evidence is required for common AI controls, such as approvals for model changes, validation reports tied to acceptance criteria, monitoring configurations with thresholds and escalation, access controls with logs, and supervision workflows with reviewer records. We’ll cover how to handle vendor-provided evidence by validating relevance, scope, timeliness, and responsibility splits, instead of assuming a generic report proves control effectiveness in your environment. You’ll also learn how to separate control design from operating effectiveness by looking for repeated performance over time, including trend reports, incident records, and follow-up actions that show governance responds to what monitoring reveals. By the end, you should be able to answer exam questions by selecting the option that produces verifiable evidence and traceable accountability, not the option that sounds most confident. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:22:06 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/64cc42e7/18fdce33.mp3" length="33376974" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>834</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches you how to test AI controls using evidence, because Domain 3B scenarios often tempt you to accept “trust me” statements, impressive demos, or subjective opinions as proof. You’ll learn how to define what evidence is required for common AI controls, such as approvals for model changes, validation reports tied to acceptance criteria, monitoring configurations with thresholds and escalation, access controls with logs, and supervision workflows with reviewer records. We’ll cover how to handle vendor-provided evidence by validating relevance, scope, timeliness, and responsibility splits, instead of assuming a generic report proves control effectiveness in your environment. You’ll also learn how to separate control design from operating effectiveness by looking for repeated performance over time, including trend reports, incident records, and follow-up actions that show governance responds to what monitoring reveals. By the end, you should be able to answer exam questions by selecting the option that produces verifiable evidence and traceable accountability, not the option that sounds most confident. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/64cc42e7/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 98 — Collect AI audit evidence: logs, lineage, artifacts, and change records (Domain 3C)</title>
      <itunes:episode>98</itunes:episode>
      <podcast:episode>98</podcast:episode>
      <itunes:title>Episode 98 — Collect AI audit evidence: logs, lineage, artifacts, and change records (Domain 3C)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">392fc820-2941-431d-8217-278acf6571a2</guid>
      <link>https://share.transistor.fm/s/2d1384a8</link>
      <description>
        <![CDATA[<p>This episode explains how to collect AI audit evidence across logs, lineage, artifacts, and change records, because Domain 3C expects you to prove what happened, when it happened, and under which model and data conditions. You’ll learn how operational logs support questions about access, inference usage, exceptions, and incidents, while lineage artifacts support questions about where data came from, how it changed, and how it was used in training and validation. We’ll cover model and pipeline artifacts such as version histories, configuration baselines, validation results, and release packages that tie behavior to controlled approvals. Change records will be treated as the backbone of accountability, linking updates to risk assessments, test evidence, approvals, and post-change monitoring. You’ll also learn how to avoid evidence traps, such as collecting documentation that is not tied to the current release, or accepting screenshots and summaries without underlying records. By the end, you should be able to choose exam answers that prioritize evidence that is traceable, repeatable, and linked to specific AI behavior in production. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode explains how to collect AI audit evidence across logs, lineage, artifacts, and change records, because Domain 3C expects you to prove what happened, when it happened, and under which model and data conditions. You’ll learn how operational logs support questions about access, inference usage, exceptions, and incidents, while lineage artifacts support questions about where data came from, how it changed, and how it was used in training and validation. We’ll cover model and pipeline artifacts such as version histories, configuration baselines, validation results, and release packages that tie behavior to controlled approvals. Change records will be treated as the backbone of accountability, linking updates to risk assessments, test evidence, approvals, and post-change monitoring. You’ll also learn how to avoid evidence traps, such as collecting documentation that is not tied to the current release, or accepting screenshots and summaries without underlying records. By the end, you should be able to choose exam answers that prioritize evidence that is traceable, repeatable, and linked to specific AI behavior in production. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:22:30 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/2d1384a8/cca16ed4.mp3" length="34054090" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>850</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode explains how to collect AI audit evidence across logs, lineage, artifacts, and change records, because Domain 3C expects you to prove what happened, when it happened, and under which model and data conditions. You’ll learn how operational logs support questions about access, inference usage, exceptions, and incidents, while lineage artifacts support questions about where data came from, how it changed, and how it was used in training and validation. We’ll cover model and pipeline artifacts such as version histories, configuration baselines, validation results, and release packages that tie behavior to controlled approvals. Change records will be treated as the backbone of accountability, linking updates to risk assessments, test evidence, approvals, and post-change monitoring. You’ll also learn how to avoid evidence traps, such as collecting documentation that is not tied to the current release, or accepting screenshots and summaries without underlying records. By the end, you should be able to choose exam answers that prioritize evidence that is traceable, repeatable, and linked to specific AI behavior in production. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/2d1384a8/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 99 — Validate evidence integrity when models and data change over time (Domain 3C)</title>
      <itunes:episode>99</itunes:episode>
      <podcast:episode>99</podcast:episode>
      <itunes:title>Episode 99 — Validate evidence integrity when models and data change over time (Domain 3C)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">071183d2-5d08-4681-867d-deb958ef2e0a</guid>
      <link>https://share.transistor.fm/s/b3d84a8c</link>
      <description>
        <![CDATA[<p>This episode focuses on validating evidence integrity in environments where models and data change over time, because AI auditing fails quickly when you cannot prove which version produced which outcome. You’ll learn how to confirm that evidence is complete, consistent, and tied to specific model versions, configuration states, and data snapshots, so findings cannot be dismissed as “from before the update.” We’ll cover integrity risks like missing logs, overwritten configuration records, undocumented retraining, uncontrolled dataset changes, and vendor updates that alter behavior without clear notification. You’ll also learn practical integrity checks, such as reconciling timestamps across systems, verifying immutable logging where appropriate, sampling change events back to approvals, and validating that lineage artifacts match actual pipeline behavior. The goal is to help you answer AAIA scenarios by selecting the approach that preserves chain-of-custody thinking for AI evidence, enabling defensible conclusions even in fast-moving operational environments. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on validating evidence integrity in environments where models and data change over time, because AI auditing fails quickly when you cannot prove which version produced which outcome. You’ll learn how to confirm that evidence is complete, consistent, and tied to specific model versions, configuration states, and data snapshots, so findings cannot be dismissed as “from before the update.” We’ll cover integrity risks like missing logs, overwritten configuration records, undocumented retraining, uncontrolled dataset changes, and vendor updates that alter behavior without clear notification. You’ll also learn practical integrity checks, such as reconciling timestamps across systems, verifying immutable logging where appropriate, sampling change events back to approvals, and validating that lineage artifacts match actual pipeline behavior. The goal is to help you answer AAIA scenarios by selecting the approach that preserves chain-of-custody thinking for AI evidence, enabling defensible conclusions even in fast-moving operational environments. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:22:52 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/b3d84a8c/267842a4.mp3" length="36894111" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>921</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on validating evidence integrity in environments where models and data change over time, because AI auditing fails quickly when you cannot prove which version produced which outcome. You’ll learn how to confirm that evidence is complete, consistent, and tied to specific model versions, configuration states, and data snapshots, so findings cannot be dismissed as “from before the update.” We’ll cover integrity risks like missing logs, overwritten configuration records, undocumented retraining, uncontrolled dataset changes, and vendor updates that alter behavior without clear notification. You’ll also learn practical integrity checks, such as reconciling timestamps across systems, verifying immutable logging where appropriate, sampling change events back to approvals, and validating that lineage artifacts match actual pipeline behavior. The goal is to help you answer AAIA scenarios by selecting the approach that preserves chain-of-custody thinking for AI evidence, enabling defensible conclusions even in fast-moving operational environments. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/b3d84a8c/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 100 — Audit data quality before trusting any AI output or model score (Domain 3D)</title>
      <itunes:episode>100</itunes:episode>
      <podcast:episode>100</podcast:episode>
      <itunes:title>Episode 100 — Audit data quality before trusting any AI output or model score (Domain 3D)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ac059007-07cb-48da-948a-394815977b67</guid>
      <link>https://share.transistor.fm/s/2b8dd4e0</link>
      <description>
        <![CDATA[<p>This episode teaches you why auditing data quality must happen before you trust any AI output or model score, because Domain 3D scenarios often hinge on the fact that “good models” fail when inputs are wrong, incomplete, biased, or out of date. You’ll learn how to evaluate data quality dimensions that matter for audit conclusions—accuracy, completeness, consistency, timeliness, representativeness, and label reliability—and how each dimension maps to specific decision risks like unfair outcomes, unstable performance, and undetected drift. We’ll cover how to test data quality using pipeline validation logs, exception handling records, sampling of source data, and comparisons across segments that reveal representation gaps and uneven error patterns. You’ll also learn how quality controls should be evidenced over time, including monitoring thresholds, remediation workflows, and governance decisions when quality issues require limiting automation or revisiting requirements. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches you why auditing data quality must happen before you trust any AI output or model score, because Domain 3D scenarios often hinge on the fact that “good models” fail when inputs are wrong, incomplete, biased, or out of date. You’ll learn how to evaluate data quality dimensions that matter for audit conclusions—accuracy, completeness, consistency, timeliness, representativeness, and label reliability—and how each dimension maps to specific decision risks like unfair outcomes, unstable performance, and undetected drift. We’ll cover how to test data quality using pipeline validation logs, exception handling records, sampling of source data, and comparisons across segments that reveal representation gaps and uneven error patterns. You’ll also learn how quality controls should be evidenced over time, including monitoring thresholds, remediation workflows, and governance decisions when quality issues require limiting automation or revisiting requirements. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:23:31 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/2b8dd4e0/8c881b1b.mp3" length="36861718" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>921</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches you why auditing data quality must happen before you trust any AI output or model score, because Domain 3D scenarios often hinge on the fact that “good models” fail when inputs are wrong, incomplete, biased, or out of date. You’ll learn how to evaluate data quality dimensions that matter for audit conclusions—accuracy, completeness, consistency, timeliness, representativeness, and label reliability—and how each dimension maps to specific decision risks like unfair outcomes, unstable performance, and undetected drift. We’ll cover how to test data quality using pipeline validation logs, exception handling records, sampling of source data, and comparisons across segments that reveal representation gaps and uneven error patterns. You’ll also learn how quality controls should be evidenced over time, including monitoring thresholds, remediation workflows, and governance decisions when quality issues require limiting automation or revisiting requirements. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/2b8dd4e0/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 101 — Use analytics to detect drift, anomalies, and control breakdown trends (Domain 3D)</title>
      <itunes:episode>101</itunes:episode>
      <podcast:episode>101</podcast:episode>
      <itunes:title>Episode 101 — Use analytics to detect drift, anomalies, and control breakdown trends (Domain 3D)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b241d703-2632-425f-ab4b-967a638264d0</guid>
      <link>https://share.transistor.fm/s/650f38a2</link>
      <description>
        <![CDATA[<p>This episode focuses on using analytics as an audit technique to detect drift, anomalies, and control breakdown trends, because Domain 3D expects you to go beyond spot checks and prove what is happening over time. You’ll learn how to use trend analysis across model performance, outcome distributions, exception rates, manual overrides, and complaint signals to identify early warnings that controls are weakening or that the operating environment has changed. We’ll cover how analytics supports audit conclusions by helping you select higher-risk samples, validate whether monitoring thresholds are meaningful, and detect “silent failures” where metrics look fine in aggregate but break down across segments or specific decision types. You’ll also learn how to tie analytic results back to evidence sources like version histories, change tickets, lineage artifacts, and monitoring configurations so findings are defensible and reproducible. By the end, you should be able to answer AAIA scenarios by choosing analytic approaches that reveal control effectiveness and emerging risk, not just produce charts that no one can act on. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on using analytics as an audit technique to detect drift, anomalies, and control breakdown trends, because Domain 3D expects you to go beyond spot checks and prove what is happening over time. You’ll learn how to use trend analysis across model performance, outcome distributions, exception rates, manual overrides, and complaint signals to identify early warnings that controls are weakening or that the operating environment has changed. We’ll cover how analytics supports audit conclusions by helping you select higher-risk samples, validate whether monitoring thresholds are meaningful, and detect “silent failures” where metrics look fine in aggregate but break down across segments or specific decision types. You’ll also learn how to tie analytic results back to evidence sources like version histories, change tickets, lineage artifacts, and monitoring configurations so findings are defensible and reproducible. By the end, you should be able to answer AAIA scenarios by choosing analytic approaches that reveal control effectiveness and emerging risk, not just produce charts that no one can act on. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:24:07 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/650f38a2/24d682e1.mp3" length="40689193" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>1016</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on using analytics as an audit technique to detect drift, anomalies, and control breakdown trends, because Domain 3D expects you to go beyond spot checks and prove what is happening over time. You’ll learn how to use trend analysis across model performance, outcome distributions, exception rates, manual overrides, and complaint signals to identify early warnings that controls are weakening or that the operating environment has changed. We’ll cover how analytics supports audit conclusions by helping you select higher-risk samples, validate whether monitoring thresholds are meaningful, and detect “silent failures” where metrics look fine in aggregate but break down across segments or specific decision types. You’ll also learn how to tie analytic results back to evidence sources like version histories, change tickets, lineage artifacts, and monitoring configurations so findings are defensible and reproducible. By the end, you should be able to answer AAIA scenarios by choosing analytic approaches that reveal control effectiveness and emerging risk, not just produce charts that no one can act on. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/650f38a2/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 102 — Deliver AI audit reports executives understand and teams can act on (Domain 3E)</title>
      <itunes:episode>102</itunes:episode>
      <podcast:episode>102</podcast:episode>
      <itunes:title>Episode 102 — Deliver AI audit reports executives understand and teams can act on (Domain 3E)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d7cd0457-47b1-4e20-b26f-790eac11b5cc</guid>
      <link>https://share.transistor.fm/s/b4ec49db</link>
      <description>
        <![CDATA[<p>This episode teaches you how to deliver AI audit reports that executives understand and teams can act on, because Domain 3E often tests whether you can translate technical and governance issues into clear, risk-based communication. You’ll learn how to structure reporting around business impact and decision risk, not around model jargon, while still being precise about criteria, evidence, and control gaps. We’ll cover how to describe AI issues in plain governance language, such as unclear ownership, weak change control, inadequate monitoring triggers, or insufficient supervision of high-impact decisions, and how to connect those issues to potential harm and compliance exposure. You’ll also learn how to write recommendations that are actionable, scoped, and testable, including who should own the fix, what evidence should exist after remediation, and what timeline makes sense based on risk. By the end, you should be able to choose exam answers that emphasize clarity, defensibility, and actionability in audit reporting, rather than overly technical narratives that stall remediation. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches you how to deliver AI audit reports that executives understand and teams can act on, because Domain 3E often tests whether you can translate technical and governance issues into clear, risk-based communication. You’ll learn how to structure reporting around business impact and decision risk, not around model jargon, while still being precise about criteria, evidence, and control gaps. We’ll cover how to describe AI issues in plain governance language, such as unclear ownership, weak change control, inadequate monitoring triggers, or insufficient supervision of high-impact decisions, and how to connect those issues to potential harm and compliance exposure. You’ll also learn how to write recommendations that are actionable, scoped, and testable, including who should own the fix, what evidence should exist after remediation, and what timeline makes sense based on risk. By the end, you should be able to choose exam answers that emphasize clarity, defensibility, and actionability in audit reporting, rather than overly technical narratives that stall remediation. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:24:48 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/b4ec49db/1f2ae00e.mp3" length="33160698" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>828</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches you how to deliver AI audit reports that executives understand and teams can act on, because Domain 3E often tests whether you can translate technical and governance issues into clear, risk-based communication. You’ll learn how to structure reporting around business impact and decision risk, not around model jargon, while still being precise about criteria, evidence, and control gaps. We’ll cover how to describe AI issues in plain governance language, such as unclear ownership, weak change control, inadequate monitoring triggers, or insufficient supervision of high-impact decisions, and how to connect those issues to potential harm and compliance exposure. You’ll also learn how to write recommendations that are actionable, scoped, and testable, including who should own the fix, what evidence should exist after remediation, and what timeline makes sense based on risk. By the end, you should be able to choose exam answers that emphasize clarity, defensibility, and actionability in audit reporting, rather than overly technical narratives that stall remediation. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/b4ec49db/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 103 — Write AI findings that tie cause, risk, evidence, and remediation together (Domain 3E)</title>
      <itunes:episode>103</itunes:episode>
      <podcast:episode>103</podcast:episode>
      <itunes:title>Episode 103 — Write AI findings that tie cause, risk, evidence, and remediation together (Domain 3E)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">60f27c51-5c66-42b2-bc37-e91d01720c6a</guid>
      <link>https://share.transistor.fm/s/4a246700</link>
      <description>
        <![CDATA[<p>This episode focuses on writing AI audit findings that tie cause, risk, evidence, and remediation into one coherent story, because Domain 3E expects findings to be defensible and useful, not just critical. You’ll learn how to describe the condition clearly, reference the criteria it violates, and present evidence that is traceable to model versions, data states, and control operation records. We’ll cover how to identify root cause without guessing, using signals like missing approvals, incomplete lineage, weak monitoring triggers, unclear ownership, or inadequate reviewer capacity that leads to unchecked harmful outcomes. You’ll also learn how to express risk in outcome terms—who could be harmed, how quickly harm is detected, how reversible it is—and how to propose remediation that closes the control gap with measurable steps and ownership. By the end, you should be able to answer AAIA scenarios by selecting the finding approach that is complete, evidence-driven, and directly actionable, rather than writing vague observations that cannot be fixed or retested. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on writing AI audit findings that tie cause, risk, evidence, and remediation into one coherent story, because Domain 3E expects findings to be defensible and useful, not just critical. You’ll learn how to describe the condition clearly, reference the criteria it violates, and present evidence that is traceable to model versions, data states, and control operation records. We’ll cover how to identify root cause without guessing, using signals like missing approvals, incomplete lineage, weak monitoring triggers, unclear ownership, or inadequate reviewer capacity that leads to unchecked harmful outcomes. You’ll also learn how to express risk in outcome terms—who could be harmed, how quickly harm is detected, how reversible it is—and how to propose remediation that closes the control gap with measurable steps and ownership. By the end, you should be able to answer AAIA scenarios by selecting the finding approach that is complete, evidence-driven, and directly actionable, rather than writing vague observations that cannot be fixed or retested. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:25:14 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/4a246700/610a1f28.mp3" length="32867095" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>821</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on writing AI audit findings that tie cause, risk, evidence, and remediation into one coherent story, because Domain 3E expects findings to be defensible and useful, not just critical. You’ll learn how to describe the condition clearly, reference the criteria it violates, and present evidence that is traceable to model versions, data states, and control operation records. We’ll cover how to identify root cause without guessing, using signals like missing approvals, incomplete lineage, weak monitoring triggers, unclear ownership, or inadequate reviewer capacity that leads to unchecked harmful outcomes. You’ll also learn how to express risk in outcome terms—who could be harmed, how quickly harm is detected, how reversible it is—and how to propose remediation that closes the control gap with measurable steps and ownership. By the end, you should be able to answer AAIA scenarios by selecting the finding approach that is complete, evidence-driven, and directly actionable, rather than writing vague observations that cannot be fixed or retested. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/4a246700/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 104 — Follow up AI audits so fixes stick and risk stays reduced (Domain 3E)</title>
      <itunes:episode>104</itunes:episode>
      <podcast:episode>104</podcast:episode>
      <itunes:title>Episode 104 — Follow up AI audits so fixes stick and risk stays reduced (Domain 3E)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">751c2740-a41b-476f-a94c-44c6fc206666</guid>
      <link>https://share.transistor.fm/s/471223fb</link>
      <description>
        <![CDATA[<p>This episode explains how to follow up AI audits so remediation actually sticks and risk stays reduced, because Domain 3E recognizes that AI environments change quickly and “we fixed it” can evaporate after the next retrain or deployment. You’ll learn how to design follow-up work that verifies corrective actions are implemented, operating, and still aligned to the original criteria, including evidence checks like updated monitoring rules, documented approvals, improved lineage records, revised reviewer guidance, and confirmed access control changes. We’ll cover how to validate effectiveness using trend data, such as reduced exception volume, faster escalations, fewer repeat incidents, and more consistent documentation quality in change packages. You’ll also learn how to manage follow-up when remediation depends on vendors, shared platforms, or multiple teams, and how to document residual risk if timelines slip. By the end, you should be able to choose exam answers that treat follow-up as ongoing assurance with measurable verification, not a one-time status request or a closed ticket with no proof. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode explains how to follow up AI audits so remediation actually sticks and risk stays reduced, because Domain 3E recognizes that AI environments change quickly and “we fixed it” can evaporate after the next retrain or deployment. You’ll learn how to design follow-up work that verifies corrective actions are implemented, operating, and still aligned to the original criteria, including evidence checks like updated monitoring rules, documented approvals, improved lineage records, revised reviewer guidance, and confirmed access control changes. We’ll cover how to validate effectiveness using trend data, such as reduced exception volume, faster escalations, fewer repeat incidents, and more consistent documentation quality in change packages. You’ll also learn how to manage follow-up when remediation depends on vendors, shared platforms, or multiple teams, and how to document residual risk if timelines slip. By the end, you should be able to choose exam answers that treat follow-up as ongoing assurance with measurable verification, not a one-time status request or a closed ticket with no proof. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:25:43 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/471223fb/0c975135.mp3" length="32682114" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>816</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode explains how to follow up AI audits so remediation actually sticks and risk stays reduced, because Domain 3E recognizes that AI environments change quickly and “we fixed it” can evaporate after the next retrain or deployment. You’ll learn how to design follow-up work that verifies corrective actions are implemented, operating, and still aligned to the original criteria, including evidence checks like updated monitoring rules, documented approvals, improved lineage records, revised reviewer guidance, and confirmed access control changes. We’ll cover how to validate effectiveness using trend data, such as reduced exception volume, faster escalations, fewer repeat incidents, and more consistent documentation quality in change packages. You’ll also learn how to manage follow-up when remediation depends on vendors, shared platforms, or multiple teams, and how to document residual risk if timelines slip. By the end, you should be able to choose exam answers that treat follow-up as ongoing assurance with measurable verification, not a one-time status request or a closed ticket with no proof. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/471223fb/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 105 — Evaluate impacts and risk when integrating AI into the audit process (Task 22)</title>
      <itunes:episode>105</itunes:episode>
      <podcast:episode>105</podcast:episode>
      <itunes:title>Episode 105 — Evaluate impacts and risk when integrating AI into the audit process (Task 22)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">bfb64424-a860-4e9f-91be-a81a6b595cbe</guid>
      <link>https://share.transistor.fm/s/22e42b67</link>
      <description>
        <![CDATA[<p>This episode focuses on Task 22 by evaluating impacts and risk when AI is integrated into the audit process itself, because AAIA expects you to govern AI use in your own assurance work with the same discipline you apply when auditing it in others. You’ll learn how audit AI can introduce new risks, such as confidentiality exposure through data sharing, biased analysis that skews audit focus, and overconfidence in automated summaries that miss control failures. We’ll cover how to assess whether AI tools align with audit objectives, whether their limitations are understood, and what controls are needed around data handling, access, logging, and output validation. You’ll also learn how to evaluate governance decisions about when AI can assist versus when human judgment must lead, especially for scope decisions, risk ratings, and conclusions that require defensible reasoning. By the end, you should be able to answer exam scenarios by selecting the approach that integrates AI with clear boundaries, documented oversight, and evidence of validation, rather than treating AI as a shortcut that undermines audit quality. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on Task 22 by evaluating impacts and risk when AI is integrated into the audit process itself, because AAIA expects you to govern AI use in your own assurance work with the same discipline you apply when auditing it in others. You’ll learn how audit AI can introduce new risks, such as confidentiality exposure through data sharing, biased analysis that skews audit focus, and overconfidence in automated summaries that miss control failures. We’ll cover how to assess whether AI tools align with audit objectives, whether their limitations are understood, and what controls are needed around data handling, access, logging, and output validation. You’ll also learn how to evaluate governance decisions about when AI can assist versus when human judgment must lead, especially for scope decisions, risk ratings, and conclusions that require defensible reasoning. By the end, you should be able to answer exam scenarios by selecting the approach that integrates AI with clear boundaries, documented oversight, and evidence of validation, rather than treating AI as a shortcut that undermines audit quality. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:26:17 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/22e42b67/520c6938.mp3" length="32583912" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>814</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on Task 22 by evaluating impacts and risk when AI is integrated into the audit process itself, because AAIA expects you to govern AI use in your own assurance work with the same discipline you apply when auditing it in others. You’ll learn how audit AI can introduce new risks, such as confidentiality exposure through data sharing, biased analysis that skews audit focus, and overconfidence in automated summaries that miss control failures. We’ll cover how to assess whether AI tools align with audit objectives, whether their limitations are understood, and what controls are needed around data handling, access, logging, and output validation. You’ll also learn how to evaluate governance decisions about when AI can assist versus when human judgment must lead, especially for scope decisions, risk ratings, and conclusions that require defensible reasoning. By the end, you should be able to answer exam scenarios by selecting the approach that integrates AI with clear boundaries, documented oversight, and evidence of validation, rather than treating AI as a shortcut that undermines audit quality. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/22e42b67/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 106 — Prevent AI-in-audit blind spots: bias, leakage, and overreliance risks (Task 22)</title>
      <itunes:episode>106</itunes:episode>
      <podcast:episode>106</podcast:episode>
      <itunes:title>Episode 106 — Prevent AI-in-audit blind spots: bias, leakage, and overreliance risks (Task 22)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">abc0d8dc-474e-4ac2-a324-ae56f6145cf7</guid>
      <link>https://share.transistor.fm/s/758f2642</link>
      <description>
        <![CDATA[<p>This episode teaches you how to prevent AI-in-audit blind spots, with a focus on three risks that show up in Task 22 scenarios: bias, leakage, and overreliance. You’ll learn how audit AI can reflect biased training data or biased prompts, leading to uneven scrutiny across teams or systems, and how to counter that with review practices, diverse sampling, and validation against independent evidence. We’ll cover leakage risks where sensitive audit information is exposed through tool usage, storage, or vendor handling, and what controls reduce exposure, including data minimization, access restrictions, redaction, and clear tool configuration. Overreliance will be treated as a professional risk: trusting AI-generated conclusions, missing contradictions in evidence, or skipping interviews and testing because outputs “seem right.” By the end, you should be able to answer AAIA scenarios by choosing safeguards that keep auditors accountable, protect confidentiality, and ensure AI outputs are verified before they influence audit judgments. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches you how to prevent AI-in-audit blind spots, with a focus on three risks that show up in Task 22 scenarios: bias, leakage, and overreliance. You’ll learn how audit AI can reflect biased training data or biased prompts, leading to uneven scrutiny across teams or systems, and how to counter that with review practices, diverse sampling, and validation against independent evidence. We’ll cover leakage risks where sensitive audit information is exposed through tool usage, storage, or vendor handling, and what controls reduce exposure, including data minimization, access restrictions, redaction, and clear tool configuration. Overreliance will be treated as a professional risk: trusting AI-generated conclusions, missing contradictions in evidence, or skipping interviews and testing because outputs “seem right.” By the end, you should be able to answer AAIA scenarios by choosing safeguards that keep auditors accountable, protect confidentiality, and ensure AI outputs are verified before they influence audit judgments. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:26:38 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/758f2642/7ca0ca41.mp3" length="32741695" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>818</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches you how to prevent AI-in-audit blind spots, with a focus on three risks that show up in Task 22 scenarios: bias, leakage, and overreliance. You’ll learn how audit AI can reflect biased training data or biased prompts, leading to uneven scrutiny across teams or systems, and how to counter that with review practices, diverse sampling, and validation against independent evidence. We’ll cover leakage risks where sensitive audit information is exposed through tool usage, storage, or vendor handling, and what controls reduce exposure, including data minimization, access restrictions, redaction, and clear tool configuration. Overreliance will be treated as a professional risk: trusting AI-generated conclusions, missing contradictions in evidence, or skipping interviews and testing because outputs “seem right.” By the end, you should be able to answer AAIA scenarios by choosing safeguards that keep auditors accountable, protect confidentiality, and ensure AI outputs are verified before they influence audit judgments. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/758f2642/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 107 — Utilize AI to enhance audit planning without outsourcing judgment (Task 23)</title>
      <itunes:episode>107</itunes:episode>
      <podcast:episode>107</podcast:episode>
      <itunes:title>Episode 107 — Utilize AI to enhance audit planning without outsourcing judgment (Task 23)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">66b63081-d53e-44f7-a058-79cd2c24a74b</guid>
      <link>https://share.transistor.fm/s/0925546c</link>
      <description>
        <![CDATA[<p>This episode focuses on Task 23 by showing how to use AI to enhance audit planning without outsourcing professional judgment, because AAIA expects you to treat AI as an assistant to thinking, not a replacement for accountability. You’ll learn how AI can help organize background information, identify potential risk themes, draft preliminary scopes, and suggest interview questions, while you remain responsible for validating relevance and selecting criteria. We’ll cover guardrails for planning use, including limiting sensitive data exposure, documenting how AI outputs were used, and validating suggestions against policies, prior audit results, and real organizational context. You’ll also learn how to avoid planning failures like letting AI narrow scope too aggressively, missing emerging risks, or treating generic framework language as organization-specific criteria. By the end, you should be able to answer exam scenarios by selecting the approach that uses AI to accelerate planning tasks while preserving human control over scope, risk assessment, and audit objectives. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on Task 23 by showing how to use AI to enhance audit planning without outsourcing professional judgment, because AAIA expects you to treat AI as an assistant to thinking, not a replacement for accountability. You’ll learn how AI can help organize background information, identify potential risk themes, draft preliminary scopes, and suggest interview questions, while you remain responsible for validating relevance and selecting criteria. We’ll cover guardrails for planning use, including limiting sensitive data exposure, documenting how AI outputs were used, and validating suggestions against policies, prior audit results, and real organizational context. You’ll also learn how to avoid planning failures like letting AI narrow scope too aggressively, missing emerging risks, or treating generic framework language as organization-specific criteria. By the end, you should be able to answer exam scenarios by selecting the approach that uses AI to accelerate planning tasks while preserving human control over scope, risk assessment, and audit objectives. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:27:03 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/0925546c/e01df946.mp3" length="32741685" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>818</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on Task 23 by showing how to use AI to enhance audit planning without outsourcing professional judgment, because AAIA expects you to treat AI as an assistant to thinking, not a replacement for accountability. You’ll learn how AI can help organize background information, identify potential risk themes, draft preliminary scopes, and suggest interview questions, while you remain responsible for validating relevance and selecting criteria. We’ll cover guardrails for planning use, including limiting sensitive data exposure, documenting how AI outputs were used, and validating suggestions against policies, prior audit results, and real organizational context. You’ll also learn how to avoid planning failures like letting AI narrow scope too aggressively, missing emerging risks, or treating generic framework language as organization-specific criteria. By the end, you should be able to answer exam scenarios by selecting the approach that uses AI to accelerate planning tasks while preserving human control over scope, risk assessment, and audit objectives. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/0925546c/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 108 — Utilize AI to enhance audit execution while preserving evidence quality (Task 23)</title>
      <itunes:episode>108</itunes:episode>
      <podcast:episode>108</podcast:episode>
      <itunes:title>Episode 108 — Utilize AI to enhance audit execution while preserving evidence quality (Task 23)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2eee4ed6-af39-4b5b-bddc-93839adf5171</guid>
      <link>https://share.transistor.fm/s/384dcd4d</link>
      <description>
        <![CDATA[<p>This episode teaches you how to use AI to enhance audit execution while preserving evidence quality, because Task 23 scenarios often test whether efficiency improvements still produce defensible workpapers and conclusions. You’ll learn where AI can assist safely, such as summarizing large policy sets, clustering exceptions, proposing sample stratification ideas, and drafting test steps, while you maintain control over evidence collection, evaluation, and documentation. We’ll cover how to preserve evidence quality by grounding AI-assisted outputs in original records, retaining traceability to source artifacts, and documenting what was verified versus what was merely suggested. You’ll also learn how to avoid execution risks like accepting AI-generated interpretations of logs without validation, losing version context for models and data, or letting AI narratives replace actual control testing. By the end, you should be able to answer AAIA questions by choosing AI usage patterns that improve speed but keep audit evidence reliable, traceable, and reviewable. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode teaches you how to use AI to enhance audit execution while preserving evidence quality, because Task 23 scenarios often test whether efficiency improvements still produce defensible workpapers and conclusions. You’ll learn where AI can assist safely, such as summarizing large policy sets, clustering exceptions, proposing sample stratification ideas, and drafting test steps, while you maintain control over evidence collection, evaluation, and documentation. We’ll cover how to preserve evidence quality by grounding AI-assisted outputs in original records, retaining traceability to source artifacts, and documenting what was verified versus what was merely suggested. You’ll also learn how to avoid execution risks like accepting AI-generated interpretations of logs without validation, losing version context for models and data, or letting AI narratives replace actual control testing. By the end, you should be able to answer AAIA questions by choosing AI usage patterns that improve speed but keep audit evidence reliable, traceable, and reviewable. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:27:36 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/384dcd4d/ad3fedf1.mp3" length="29215167" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>730</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode teaches you how to use AI to enhance audit execution while preserving evidence quality, because Task 23 scenarios often test whether efficiency improvements still produce defensible workpapers and conclusions. You’ll learn where AI can assist safely, such as summarizing large policy sets, clustering exceptions, proposing sample stratification ideas, and drafting test steps, while you maintain control over evidence collection, evaluation, and documentation. We’ll cover how to preserve evidence quality by grounding AI-assisted outputs in original records, retaining traceability to source artifacts, and documenting what was verified versus what was merely suggested. You’ll also learn how to avoid execution risks like accepting AI-generated interpretations of logs without validation, losing version context for models and data, or letting AI narratives replace actual control testing. By the end, you should be able to answer AAIA questions by choosing AI usage patterns that improve speed but keep audit evidence reliable, traceable, and reviewable. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/384dcd4d/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 109 — Utilize AI to enhance audit reporting without hallucinated conclusions (Task 23)</title>
      <itunes:episode>109</itunes:episode>
      <podcast:episode>109</podcast:episode>
      <itunes:title>Episode 109 — Utilize AI to enhance audit reporting without hallucinated conclusions (Task 23)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">1aa6c186-68fa-4de7-afac-abd4e15c94f6</guid>
      <link>https://share.transistor.fm/s/3b3e2427</link>
      <description>
        <![CDATA[<p>This episode focuses on using AI to enhance audit reporting without hallucinated conclusions, because Task 23 expects you to recognize that confident language is not evidence and that AI can generate plausible but unsupported statements. You’ll learn how AI can help draft report structure, improve clarity, and standardize wording, while you enforce strict sourcing: every key claim must map back to criteria, evidence, and observed conditions. We’ll cover practical controls such as requiring citations to internal workpapers, limiting AI to language refinement rather than fact creation, and using review checkpoints to validate that summaries do not introduce new assertions. You’ll also learn how to handle nuanced risk statements so they remain accurate, such as describing drift risk, bias exposure, or monitoring weaknesses without overstating certainty or underplaying impact. By the end, you should be able to answer AAIA scenarios by selecting the approach that uses AI to improve communication while keeping conclusions grounded, defensible, and fully supported by evidence. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode focuses on using AI to enhance audit reporting without hallucinated conclusions, because Task 23 expects you to recognize that confident language is not evidence and that AI can generate plausible but unsupported statements. You’ll learn how AI can help draft report structure, improve clarity, and standardize wording, while you enforce strict sourcing: every key claim must map back to criteria, evidence, and observed conditions. We’ll cover practical controls such as requiring citations to internal workpapers, limiting AI to language refinement rather than fact creation, and using review checkpoints to validate that summaries do not introduce new assertions. You’ll also learn how to handle nuanced risk statements so they remain accurate, such as describing drift risk, bias exposure, or monitoring weaknesses without overstating certainty or underplaying impact. By the end, you should be able to answer AAIA scenarios by selecting the approach that uses AI to improve communication while keeping conclusions grounded, defensible, and fully supported by evidence. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:27:59 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/3b3e2427/7161c260.mp3" length="33556716" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>838</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode focuses on using AI to enhance audit reporting without hallucinated conclusions, because Task 23 expects you to recognize that confident language is not evidence and that AI can generate plausible but unsupported statements. You’ll learn how AI can help draft report structure, improve clarity, and standardize wording, while you enforce strict sourcing: every key claim must map back to criteria, evidence, and observed conditions. We’ll cover practical controls such as requiring citations to internal workpapers, limiting AI to language refinement rather than fact creation, and using review checkpoints to validate that summaries do not introduce new assertions. You’ll also learn how to handle nuanced risk statements so they remain accurate, such as describing drift risk, bias exposure, or monitoring weaknesses without overstating certainty or underplaying impact. By the end, you should be able to answer AAIA scenarios by selecting the approach that uses AI to improve communication while keeping conclusions grounded, defensible, and fully supported by evidence. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/3b3e2427/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 110 — Spaced Retrieval Review: Domain 3 audit tools and techniques, simplified (Review: Domain 3)</title>
      <itunes:episode>110</itunes:episode>
      <podcast:episode>110</podcast:episode>
      <itunes:title>Episode 110 — Spaced Retrieval Review: Domain 3 audit tools and techniques, simplified (Review: Domain 3)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f55480cd-a2a5-40bc-99e7-10c576ec7d07</guid>
      <link>https://share.transistor.fm/s/538b7d66</link>
      <description>
        <![CDATA[<p>This review episode reinforces Domain 3 by walking through the audit toolset you need—planning, criteria, testing methods, sampling, evidence integrity, analytics, and reporting—in a single connected flow that matches exam logic. You’ll revisit how to define scope around decision impact, convert policies and obligations into measurable criteria, select AI-aware audit techniques, and collect evidence that is traceable to model versions, data states, and change records. We’ll refresh sampling strategies that reveal bias and failure modes, and the integrity checks that prevent findings from being dismissed as “from a different version.” You’ll also reinforce how to communicate results with findings that connect cause, risk, evidence, and remediation, and how follow-up keeps improvements durable as models and data evolve. By the end, Domain 3 should feel like a repeatable audit playbook you can apply under time pressure with calm, defensible reasoning. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This review episode reinforces Domain 3 by walking through the audit toolset you need—planning, criteria, testing methods, sampling, evidence integrity, analytics, and reporting—in a single connected flow that matches exam logic. You’ll revisit how to define scope around decision impact, convert policies and obligations into measurable criteria, select AI-aware audit techniques, and collect evidence that is traceable to model versions, data states, and change records. We’ll refresh sampling strategies that reveal bias and failure modes, and the integrity checks that prevent findings from being dismissed as “from a different version.” You’ll also reinforce how to communicate results with findings that connect cause, risk, evidence, and remediation, and how follow-up keeps improvements durable as models and data evolve. By the end, Domain 3 should feel like a repeatable audit playbook you can apply under time pressure with calm, defensible reasoning. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:28:29 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/538b7d66/4ff9bd7f.mp3" length="30659236" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>766</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This review episode reinforces Domain 3 by walking through the audit toolset you need—planning, criteria, testing methods, sampling, evidence integrity, analytics, and reporting—in a single connected flow that matches exam logic. You’ll revisit how to define scope around decision impact, convert policies and obligations into measurable criteria, select AI-aware audit techniques, and collect evidence that is traceable to model versions, data states, and change records. We’ll refresh sampling strategies that reveal bias and failure modes, and the integrity checks that prevent findings from being dismissed as “from a different version.” You’ll also reinforce how to communicate results with findings that connect cause, risk, evidence, and remediation, and how follow-up keeps improvements durable as models and data evolve. By the end, Domain 3 should feel like a repeatable audit playbook you can apply under time pressure with calm, defensible reasoning. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/538b7d66/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 111 — Spaced Retrieval Mega-Review: All 23 tasks in one connected storyline (Review: Tasks 1–23)</title>
      <itunes:episode>111</itunes:episode>
      <podcast:episode>111</podcast:episode>
      <itunes:title>Episode 111 — Spaced Retrieval Mega-Review: All 23 tasks in one connected storyline (Review: Tasks 1–23)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a5664bea-8437-4c98-b35a-961cf912d6f6</guid>
      <link>https://share.transistor.fm/s/4ff75363</link>
      <description>
        <![CDATA[<p>This mega-review pulls all 23 AAIA tasks into one connected storyline so you can recall them as a single audit narrative instead of a scattered checklist. You’ll revisit how tasks start with evaluating AI opportunities and impacts, then move into defining requirements and architecture fit, mapping risks to controls, and validating privacy, ethics, and compliance constraints. From there, you’ll connect lifecycle controls—data governance, development discipline, deployment gates, monitoring, supervision, security, vendor risk, and incident handling—into the evidence chain an auditor must be able to test. Finally, you’ll reinforce the audit-execution tasks: planning scope and criteria, choosing AI-aware testing techniques, sampling decisions to reveal bias and failure modes, validating evidence integrity across versions, and reporting findings that tie cause, risk, evidence, and remediation into action. Throughout, you’ll practice the exam-ready move that wins questions: identify the decision impact, state the control intent, and select the evidence that proves the control operates over time. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This mega-review pulls all 23 AAIA tasks into one connected storyline so you can recall them as a single audit narrative instead of a scattered checklist. You’ll revisit how tasks start with evaluating AI opportunities and impacts, then move into defining requirements and architecture fit, mapping risks to controls, and validating privacy, ethics, and compliance constraints. From there, you’ll connect lifecycle controls—data governance, development discipline, deployment gates, monitoring, supervision, security, vendor risk, and incident handling—into the evidence chain an auditor must be able to test. Finally, you’ll reinforce the audit-execution tasks: planning scope and criteria, choosing AI-aware testing techniques, sampling decisions to reveal bias and failure modes, validating evidence integrity across versions, and reporting findings that tie cause, risk, evidence, and remediation into action. Throughout, you’ll practice the exam-ready move that wins questions: identify the decision impact, state the control intent, and select the evidence that proves the control operates over time. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:28:53 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/4ff75363/c33850e6.mp3" length="30846271" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>770</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This mega-review pulls all 23 AAIA tasks into one connected storyline so you can recall them as a single audit narrative instead of a scattered checklist. You’ll revisit how tasks start with evaluating AI opportunities and impacts, then move into defining requirements and architecture fit, mapping risks to controls, and validating privacy, ethics, and compliance constraints. From there, you’ll connect lifecycle controls—data governance, development discipline, deployment gates, monitoring, supervision, security, vendor risk, and incident handling—into the evidence chain an auditor must be able to test. Finally, you’ll reinforce the audit-execution tasks: planning scope and criteria, choosing AI-aware testing techniques, sampling decisions to reveal bias and failure modes, validating evidence integrity across versions, and reporting findings that tie cause, risk, evidence, and remediation into action. Throughout, you’ll practice the exam-ready move that wins questions: identify the decision impact, state the control intent, and select the evidence that proves the control operates over time. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/4ff75363/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Episode 112 — Exam-Day Tactics: Calm, fast, defensible answers for AAIA scenarios (Exam-Day Tactics)</title>
      <itunes:episode>112</itunes:episode>
      <podcast:episode>112</podcast:episode>
      <itunes:title>Episode 112 — Exam-Day Tactics: Calm, fast, defensible answers for AAIA scenarios (Exam-Day Tactics)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d9b6bcf6-1fca-4572-a069-faf0b7481856</guid>
      <link>https://share.transistor.fm/s/4f405fc5</link>
      <description>
        <![CDATA[<p>This final episode gives you exam-day tactics that keep you calm, fast, and defensible when AAIA scenarios feel ambiguous or overloaded with details. You’ll learn a reliable pacing approach that prevents early-question time traps, plus a reading strategy that spots what the question is really testing: governance decision rights, risk treatment logic, lifecycle control points, evidence selection, or audit reporting quality. We’ll cover a practical elimination method that removes distractors by checking each option against control intent and accountability, especially when multiple answers seem “reasonable” on a technical level. You’ll also rehearse how to handle common stem patterns like “most appropriate next step,” “best evidence,” “primary risk,” and “most effective control,” without overthinking or drifting into vendor-specific assumptions. When you finish, you should have a simple operating mindset for the whole exam: anchor on decision impact, answer with evidence, and choose the option you can defend in an audit report. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This final episode gives you exam-day tactics that keep you calm, fast, and defensible when AAIA scenarios feel ambiguous or overloaded with details. You’ll learn a reliable pacing approach that prevents early-question time traps, plus a reading strategy that spots what the question is really testing: governance decision rights, risk treatment logic, lifecycle control points, evidence selection, or audit reporting quality. We’ll cover a practical elimination method that removes distractors by checking each option against control intent and accountability, especially when multiple answers seem “reasonable” on a technical level. You’ll also rehearse how to handle common stem patterns like “most appropriate next step,” “best evidence,” “primary risk,” and “most effective control,” without overthinking or drifting into vendor-specific assumptions. When you finish, you should have a simple operating mindset for the whole exam: anchor on decision impact, answer with evidence, and choose the option you can defend in an audit report. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:29:14 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/4f405fc5/5829d3c6.mp3" length="35129299" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>877</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This final episode gives you exam-day tactics that keep you calm, fast, and defensible when AAIA scenarios feel ambiguous or overloaded with details. You’ll learn a reliable pacing approach that prevents early-question time traps, plus a reading strategy that spots what the question is really testing: governance decision rights, risk treatment logic, lifecycle control points, evidence selection, or audit reporting quality. We’ll cover a practical elimination method that removes distractors by checking each option against control intent and accountability, especially when multiple answers seem “reasonable” on a technical level. You’ll also rehearse how to handle common stem patterns like “most appropriate next step,” “best evidence,” “primary risk,” and “most effective control,” without overthinking or drifting into vendor-specific assumptions. When you finish, you should have a simple operating mindset for the whole exam: anchor on decision impact, answer with evidence, and choose the option you can defend in an audit report. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/4f405fc5/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
    <item>
      <title>Welcome to the ISACA AAIA Audio Course</title>
      <itunes:episode>113</itunes:episode>
      <podcast:episode>113</podcast:episode>
      <itunes:title>Welcome to the ISACA AAIA Audio Course</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">adc0e8fb-5e83-45e4-8aa2-e6f465ed44df</guid>
      <link>https://share.transistor.fm/s/00fca752</link>
      <description>
        <![CDATA[<p>Certified: The ISACA AAIA Audio Course is an audio-first program built for working professionals who need a practical path into AI auditing. If you’re an internal auditor, risk manager, security leader, compliance professional, or governance practitioner who suddenly has “AI” on the agenda, this course is for you. You do not need to be a data scientist to follow along, but you should be ready to think like an assessor: what’s in scope, what evidence matters, and what “good” looks like when a system is partly automated and partly human. The focus stays on real-world audit work—planning, interviewing, testing, documenting, and reporting—so you can speak clearly with technical teams and still satisfy business and oversight expectations.</p><p>In Certified: The ISACA AAIA Audio Course, you’ll learn how to break AI systems into auditable components and evaluate them with a structured, repeatable approach. We cover governance and accountability, model risk and controls, data quality and lineage, third-party dependencies, security and privacy touchpoints, and the operational realities of monitoring and change management. The teaching style is built for audio: short explanations, plain-language definitions, and walk-throughs that sound the way auditors actually think in the field. You’ll hear how to translate abstract requirements into testable criteria, what artifacts to request, how to spot gaps without guessing, and how to write findings that are specific, fair, and actionable.</p><p>What makes Certified: The ISACA AAIA Audio Course different is that it treats the certification as a professional skillset, not a trivia contest. Instead of drowning you in theory, we anchor each lesson in the decisions you’ll make on an engagement: how to scope an AI use case, what to test first, how to judge evidence, and how to explain risk in terms executives accept. Success looks like this: you can walk into an AI audit kickoff and sound prepared, you can build a defensible work program, and you can connect governance, controls, and outcomes in a way that holds up under review. By the end, you should feel ready to study with purpose and apply the same mindset on day one of your next audit.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Certified: The ISACA AAIA Audio Course is an audio-first program built for working professionals who need a practical path into AI auditing. If you’re an internal auditor, risk manager, security leader, compliance professional, or governance practitioner who suddenly has “AI” on the agenda, this course is for you. You do not need to be a data scientist to follow along, but you should be ready to think like an assessor: what’s in scope, what evidence matters, and what “good” looks like when a system is partly automated and partly human. The focus stays on real-world audit work—planning, interviewing, testing, documenting, and reporting—so you can speak clearly with technical teams and still satisfy business and oversight expectations.</p><p>In Certified: The ISACA AAIA Audio Course, you’ll learn how to break AI systems into auditable components and evaluate them with a structured, repeatable approach. We cover governance and accountability, model risk and controls, data quality and lineage, third-party dependencies, security and privacy touchpoints, and the operational realities of monitoring and change management. The teaching style is built for audio: short explanations, plain-language definitions, and walk-throughs that sound the way auditors actually think in the field. You’ll hear how to translate abstract requirements into testable criteria, what artifacts to request, how to spot gaps without guessing, and how to write findings that are specific, fair, and actionable.</p><p>What makes Certified: The ISACA AAIA Audio Course different is that it treats the certification as a professional skillset, not a trivia contest. Instead of drowning you in theory, we anchor each lesson in the decisions you’ll make on an engagement: how to scope an AI use case, what to test first, how to judge evidence, and how to explain risk in terms executives accept. Success looks like this: you can walk into an AI audit kickoff and sound prepared, you can build a defensible work program, and you can connect governance, controls, and outcomes in a way that holds up under review. By the end, you should feel ready to study with purpose and apply the same mindset on day one of your next audit.</p>]]>
      </content:encoded>
      <pubDate>Sat, 14 Feb 2026 22:30:22 -0600</pubDate>
      <author>Jason Edwards</author>
      <enclosure url="https://media.transistor.fm/00fca752/12cc8503.mp3" length="526469" type="audio/mpeg"/>
      <itunes:author>Jason Edwards</itunes:author>
      <itunes:duration>60</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Certified: The ISACA AAIA Audio Course is an audio-first program built for working professionals who need a practical path into AI auditing. If you’re an internal auditor, risk manager, security leader, compliance professional, or governance practitioner who suddenly has “AI” on the agenda, this course is for you. You do not need to be a data scientist to follow along, but you should be ready to think like an assessor: what’s in scope, what evidence matters, and what “good” looks like when a system is partly automated and partly human. The focus stays on real-world audit work—planning, interviewing, testing, documenting, and reporting—so you can speak clearly with technical teams and still satisfy business and oversight expectations.</p><p>In Certified: The ISACA AAIA Audio Course, you’ll learn how to break AI systems into auditable components and evaluate them with a structured, repeatable approach. We cover governance and accountability, model risk and controls, data quality and lineage, third-party dependencies, security and privacy touchpoints, and the operational realities of monitoring and change management. The teaching style is built for audio: short explanations, plain-language definitions, and walk-throughs that sound the way auditors actually think in the field. You’ll hear how to translate abstract requirements into testable criteria, what artifacts to request, how to spot gaps without guessing, and how to write findings that are specific, fair, and actionable.</p><p>What makes Certified: The ISACA AAIA Audio Course different is that it treats the certification as a professional skillset, not a trivia contest. Instead of drowning you in theory, we anchor each lesson in the decisions you’ll make on an engagement: how to scope an AI use case, what to test first, how to judge evidence, and how to explain risk in terms executives accept. Success looks like this: you can walk into an AI audit kickoff and sound prepared, you can build a defensible work program, and you can connect governance, controls, and outcomes in a way that holds up under review. By the end, you should feel ready to study with purpose and apply the same mindset on day one of your next audit.</p>]]>
      </itunes:summary>
      <itunes:keywords>ISACA AAIA, AI audit, AI assurance, model risk management, AI governance, algorithm accountability, data lineage, data quality controls, audit evidence, control testing, risk assessment, third-party AI vendors, model monitoring, bias and fairness assessment, explainability, transparency reporting, privacy impact, security controls, access management, change management, incident response, compliance and regulation, audit workpapers, executive reporting, internal audit training, certification prep</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/00fca752/transcript.srt" type="application/x-subrip" rel="captions"/>
    </item>
  </channel>
</rss>
