<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="/stylesheet.xsl" type="text/xsl"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <atom:link rel="self" type="application/rss+xml" href="https://feeds.transistor.fm/pondering-ai" title="MP3 Audio"/>
    <atom:link rel="hub" href="https://pubsubhubbub.appspot.com/"/>
    <podcast:podping usesPodping="true"/>
    <title>Pondering AI</title>
    <generator>Transistor (https://transistor.fm)</generator>
    <itunes:new-feed-url>https://feeds.transistor.fm/pondering-ai</itunes:new-feed-url>
    <description>How is the use of artificial intelligence (AI) shaping our human experience?

Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.

All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.</description>
    <copyright>© 2026 SAS Institute Inc. All Rights Reserved.</copyright>
    <podcast:guid>e3771d00-12b0-597b-b821-b142d27f80c6</podcast:guid>
    <podcast:locked owner="podcast-admin@sas.com">no</podcast:locked>
    <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
    <language>en</language>
    <pubDate>Wed, 13 May 2026 05:00:25 -0400</pubDate>
    <lastBuildDate>Wed, 13 May 2026 05:01:27 -0400</lastBuildDate>
    <link>https://pondering-ai.transistor.fm/</link>
    <image>
      <url>https://img.transistorcdn.com/0cHNuqASM-WLwyA7VPa9z8qTnmBoyX9PKB1TPCFbg2c/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9zaG93/LzE5ODQ3LzE3MDE4/NzM1NTgtYXJ0d29y/ay5qcGc.jpg</url>
      <title>Pondering AI</title>
      <link>https://pondering-ai.transistor.fm/</link>
    </image>
    <itunes:category text="Technology"/>
    <itunes:category text="Business"/>
    <itunes:type>episodic</itunes:type>
    <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
    <itunes:image href="https://img.transistorcdn.com/0cHNuqASM-WLwyA7VPa9z8qTnmBoyX9PKB1TPCFbg2c/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9zaG93/LzE5ODQ3LzE3MDE4/NzM1NTgtYXJ0d29y/ay5qcGc.jpg"/>
    <itunes:summary>How is the use of artificial intelligence (AI) shaping our human experience?

Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.

All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.</itunes:summary>
    <itunes:subtitle>How is the use of artificial intelligence (AI) shaping our human experience?</itunes:subtitle>
    <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI</itunes:keywords>
    <itunes:owner>
      <itunes:name>SAS Podcast Admins</itunes:name>
    </itunes:owner>
    <itunes:complete>No</itunes:complete>
    <itunes:explicit>No</itunes:explicit>
    <item>
      <title>AI Literacy Is Not All We Need with Mel Sellick</title>
      <itunes:episode>95</itunes:episode>
      <podcast:episode>95</podcast:episode>
      <itunes:title>AI Literacy Is Not All We Need with Mel Sellick</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">6578e33d-2509-4fdc-afbf-85235a7079ac</guid>
      <link>https://share.transistor.fm/s/fbdaf7ad</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/mel-sellick/">Mel Sellick</a> readies for AI by going beyond literacy to address the psychological, cognitive, and relational capacities required to ensure AI works for humans.</p><p>Mel and Kimberly discuss AI literacy vs. human readiness; the contours of human vulnerability; AI as a social actor; collective understanding and emotional regulation; instrumental AI dependency; the non-reciprocal nature of AI; the spectrum of relationality; human flourishing; attention, agency and alternate futures; positive friction in human systems; supportive social structures; cognitive offloading and debt; self-reflection and calibrating human needs.</p><p><a href="https://www.linkedin.com/in/mel-sellick/">Mel Sellick</a> is an applied psychologist specializing in Human-AI interaction. She is the Founder of the <a href="https://www.futurehumanlab.com/">Future Human Lab</a>, and her Human Readiness Framework has shaped conversations at IEEE, UNESCO, Oxford, MIT, Harvard and beyond.</p><p>Additional Resources:</p><ul><li>Future Human Lab: <a href="https://www.futurehumanlab.com/">https://www.futurehumanlab.com/</a></li><li>IEEE Organizational Readiness for Human-AI Interaction (Chair, SA-P7023): <a href="https://standards.ieee.org/ieee/7023/12394/">https://standards.ieee.org/ieee/7023/12394/</a></li><li>Oxford AI in Education Hub (AIEOU): <a href="https://aieou.web.ox.ac.uk/">https://aieou.web.ox.ac.uk/</a></li><li>Harvard AI for Human Flourishing Council: <a href="https://hfh.fas.harvard.edu/ai-human-flourishing">https://hfh.fas.harvard.edu/ai-human-flourishing</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep95/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/mel-sellick/">Mel Sellick</a> readies for AI by going beyond literacy to address the psychological, cognitive, and relational capacities required to ensure AI works for humans.</p><p>Mel and Kimberly discuss AI literacy vs. human readiness; the contours of human vulnerability; AI as a social actor; collective understanding and emotional regulation; instrumental AI dependency; the non-reciprocal nature of AI; the spectrum of relationality; human flourishing; attention, agency and alternate futures; positive friction in human systems; supportive social structures; cognitive offloading and debt; self-reflection and calibrating human needs.</p><p><a href="https://www.linkedin.com/in/mel-sellick/">Mel Sellick</a> is an applied psychologist specializing in Human-AI interaction. She is the Founder of the <a href="https://www.futurehumanlab.com/">Future Human Lab</a>, and her Human Readiness Framework has shaped conversations at IEEE, UNESCO, Oxford, MIT, Harvard and beyond.</p><p>Additional Resources:</p><ul><li>Future Human Lab: <a href="https://www.futurehumanlab.com/">https://www.futurehumanlab.com/</a></li><li>IEEE Organizational Readiness for Human-AI Interaction (Chair, SA-P7023): <a href="https://standards.ieee.org/ieee/7023/12394/">https://standards.ieee.org/ieee/7023/12394/</a></li><li>Oxford AI in Education Hub (AIEOU): <a href="https://aieou.web.ox.ac.uk/">https://aieou.web.ox.ac.uk/</a></li><li>Harvard AI for Human Flourishing Council: <a href="https://hfh.fas.harvard.edu/ai-human-flourishing">https://hfh.fas.harvard.edu/ai-human-flourishing</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep95/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 13 May 2026 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/fbdaf7ad/4317ee22.mp3" length="46964631" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/QzijaH2FMMRt6X5bTffKMYHCTOIloIABgTW_Uucm-Jk/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zMTE3/OWRiMmJhYWVmOWIx/NTI1MWE2YzAwYjgx/MzllMS5qcGc.jpg"/>
      <itunes:duration>2933</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/mel-sellick/">Mel Sellick</a> readies for AI by going beyond literacy to address the psychological, cognitive, and relational capacities required to ensure AI works for humans.</p><p>Mel and Kimberly discuss AI literacy vs. human readiness; the contours of human vulnerability; AI as a social actor; collective understanding and emotional regulation; instrumental AI dependency; the non-reciprocal nature of AI; the spectrum of relationality; human flourishing; attention, agency and alternate futures; positive friction in human systems; supportive social structures; cognitive offloading and debt; self-reflection and calibrating human needs.</p><p><a href="https://www.linkedin.com/in/mel-sellick/">Mel Sellick</a> is an applied psychologist specializing in Human-AI interaction. She is the Founder of the <a href="https://www.futurehumanlab.com/">Future Human Lab</a>, and her Human Readiness Framework has shaped conversations at IEEE, UNESCO, Oxford, MIT, Harvard and beyond.</p><p>Additional Resources:</p><ul><li>Future Human Lab: <a href="https://www.futurehumanlab.com/">https://www.futurehumanlab.com/</a></li><li>IEEE Organizational Readiness for Human-AI Interaction (Chair, SA-P7023): <a href="https://standards.ieee.org/ieee/7023/12394/">https://standards.ieee.org/ieee/7023/12394/</a></li><li>Oxford AI in Education Hub (AIEOU): <a href="https://aieou.web.ox.ac.uk/">https://aieou.web.ox.ac.uk/</a></li><li>Harvard AI for Human Flourishing Council: <a href="https://hfh.fas.harvard.edu/ai-human-flourishing">https://hfh.fas.harvard.edu/ai-human-flourishing</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep95/transcript">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/mel-sellick" img="https://img.transistorcdn.com/YfRVnGGuD1Y2iIAAnIrKBdwwJ_tMgh5DxpWq_f-SPu8/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8wYjgx/Y2VlYTk0ZjUzY2Vj/ZmY2Y2NmYzRiZGFj/NjVmZC5qcGc.jpg">Mel Sellick</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/fbdaf7ad/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>The Human Premium with Drew Burdick</title>
      <itunes:episode>94</itunes:episode>
      <podcast:episode>94</podcast:episode>
      <itunes:title>The Human Premium with Drew Burdick</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">981fdb18-f48b-4439-918f-b6aaf43cdd3e</guid>
      <link>https://share.transistor.fm/s/f3e4d04b</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/drewhburdick/">Drew Burdick</a> designs AI systems to multiply human capacity, prioritizes great experiences, and values the serendipitous magic of human connection and collaboration.</p><p>Kimberly and Drew discuss building with badass teams; curiosity and innovation; building momentum with AI; human relationships and rapport; proprietary knowledge and expertise; long-term thinking; AI agents as teammates; pricing in human experiences; designing for humans vs. bots; regulation and accountability; societal guardrails; the mid-market squeeze; actions companies should take now; investing in people; and keeping community front and center.</p><p><a href="https://www.linkedin.com/in/drewhburdick/">Drew Burdick</a> is the founder of <a href="https://stealthx.co/">StealthX</a> and the CLT Startup House. Drew parlays his deep background in design and solution development to help companies deliver exceptional experiences with AI.</p><p>Additional Resources:</p><ul><li>Building Great Experiences (podcast): <a href="https://stealthx.co/resources/podcast">https://stealthx.co/resources/podcast</a></li><li>CLT Startup House: <a href="https://cltstartuphouse.com/">https://cltstartuphouse.com/</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep94/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/drewhburdick/">Drew Burdick</a> designs AI systems to multiply human capacity, prioritizes great experiences, and values the serendipitous magic of human connection and collaboration.</p><p>Kimberly and Drew discuss building with badass teams; curiosity and innovation; building momentum with AI; human relationships and rapport; proprietary knowledge and expertise; long-term thinking; AI agents as teammates; pricing in human experiences; designing for humans vs. bots; regulation and accountability; societal guardrails; the mid-market squeeze; actions companies should take now; investing in people; and keeping community front and center.</p><p><a href="https://www.linkedin.com/in/drewhburdick/">Drew Burdick</a> is the founder of <a href="https://stealthx.co/">StealthX</a> and the CLT Startup House. Drew parlays his deep background in design and solution development to help companies deliver exceptional experiences with AI.</p><p>Additional Resources:</p><ul><li>Building Great Experiences (podcast): <a href="https://stealthx.co/resources/podcast">https://stealthx.co/resources/podcast</a></li><li>CLT Startup House: <a href="https://cltstartuphouse.com/">https://cltstartuphouse.com/</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep94/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 15 Apr 2026 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/f3e4d04b/30b8c943.mp3" length="46297484" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/kfmOfkJRkNwbTJQFNu2jkA499dd7HlIm91cmohm89Zc/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8wODMy/OGFkMzU5NmE5ZTFl/YTZmNzc2ZDM2Yjlk/MjE3MC5qcGc.jpg"/>
      <itunes:duration>2891</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/drewhburdick/">Drew Burdick</a> designs AI systems to multiply human capacity, prioritizes great experiences, and values the serendipitous magic of human connection and collaboration.</p><p>Kimberly and Drew discuss building with badass teams; curiosity and innovation; building momentum with AI; human relationships and rapport; proprietary knowledge and expertise; long-term thinking; AI agents as teammates; pricing in human experiences; designing for humans vs. bots; regulation and accountability; societal guardrails; the mid-market squeeze; actions companies should take now; investing in people; and keeping community front and center.</p><p><a href="https://www.linkedin.com/in/drewhburdick/">Drew Burdick</a> is the founder of <a href="https://stealthx.co/">StealthX</a> and the CLT Startup House. Drew parlays his deep background in design and solution development to help companies deliver exceptional experiences with AI.</p><p>Additional Resources:</p><ul><li>Building Great Experiences (podcast): <a href="https://stealthx.co/resources/podcast">https://stealthx.co/resources/podcast</a></li><li>CLT Startup House: <a href="https://cltstartuphouse.com/">https://cltstartuphouse.com/</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep94/transcript">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/drew-burdick" img="https://img.transistorcdn.com/eoKRBOcS5uHceOIk554XybvSE9CN00AWZX0JGUn1HHs/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iZDQ5/OTkyMTJjMzI4ZGZk/Nzk1ZWZmMjBjNzFj/YWFmOC5qcGc.jpg">Drew Burdick</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/f3e4d04b/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Minding Our Minds with Helen and Dave Edwards</title>
      <itunes:episode>93</itunes:episode>
      <podcast:episode>93</podcast:episode>
      <itunes:title>Minding Our Minds with Helen and Dave Edwards</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">07a5ba8c-b9b4-4625-a9d1-2277c10c1c88</guid>
      <link>https://share.transistor.fm/s/1fc5d6bf</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/helenedwardskiwi/">Helen Edwards</a> and <a href="https://www.linkedin.com/in/daveedwards2/">Dave Edwards</a> are in awe of AI and passionate about course correcting to preserve human authorship and ensure AI systems are for, not from, people.</p><p>Kimberly, Dave and Helen discuss human authorship; AI as a cultural technology; generative AI as a cognitive tool; biological imperatives and culture as a countervailing force; cognitive pairing; metacognition and why perception matters most; diverse intelligences; precautionary design principles; AI as a co-evolutionary factor; human finitude; changing AI’s course; creating minds for our minds; designing for the unknown; kindness and hopeful rebellion.</p><p><a href="https://www.linkedin.com/in/helenedwardskiwi/">Helen Edwards</a> and <a href="https://www.linkedin.com/in/daveedwards2/">Dave Edwards</a> are the co-founders of the <a href="https://www.artificialityinstitute.org/">Artificiality Institute</a>, a nonprofit organization shaping the future human experience of AI. Their longitudinal research program The Chronicle tracks how people actually experience AI in their work and lives.</p><p>Additional Resources:</p><ul><li>Staying Human: Authoring Your Mind in the Age of AI (digital book): <a href="https://journal.artificialityinstitute.org/tag/book-two/">https://journal.artificialityinstitute.org/tag/book-two/</a></li><li>The SaaS Apocalypse Is A Category Error (article): <a href="https://journal.artificialityinstitute.org/the-saas-apocalypse-is-a-category-error/">https://journal.artificialityinstitute.org/the-saas-apocalypse-is-a-category-error/</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep93/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/helenedwardskiwi/">Helen Edwards</a> and <a href="https://www.linkedin.com/in/daveedwards2/">Dave Edwards</a> are in awe of AI and passionate about course correcting to preserve human authorship and ensure AI systems are for, not from, people.</p><p>Kimberly, Dave and Helen discuss human authorship; AI as a cultural technology; generative AI as a cognitive tool; biological imperatives and culture as a countervailing force; cognitive pairing; metacognition and why perception matters most; diverse intelligences; precautionary design principles; AI as a co-evolutionary factor; human finitude; changing AI’s course; creating minds for our minds; designing for the unknown; kindness and hopeful rebellion.</p><p><a href="https://www.linkedin.com/in/helenedwardskiwi/">Helen Edwards</a> and <a href="https://www.linkedin.com/in/daveedwards2/">Dave Edwards</a> are the co-founders of the <a href="https://www.artificialityinstitute.org/">Artificiality Institute</a>, a nonprofit organization shaping the future human experience of AI. Their longitudinal research program The Chronicle tracks how people actually experience AI in their work and lives.</p><p>Additional Resources:</p><ul><li>Staying Human: Authoring Your Mind in the Age of AI (digital book): <a href="https://journal.artificialityinstitute.org/tag/book-two/">https://journal.artificialityinstitute.org/tag/book-two/</a></li><li>The SaaS Apocalypse Is A Category Error (article): <a href="https://journal.artificialityinstitute.org/the-saas-apocalypse-is-a-category-error/">https://journal.artificialityinstitute.org/the-saas-apocalypse-is-a-category-error/</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep93/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 01 Apr 2026 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/1fc5d6bf/a28fbd3c.mp3" length="56859989" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/SNj5uYzEjoNcI47mrHMwh_dD2S7feNPuNMFM8eTg7rU/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82MjVl/MTBmMGRiZTI4ZDVk/NGI5YjY1OTdjYzZk/Yzg4MS5qcGc.jpg"/>
      <itunes:duration>3551</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/helenedwardskiwi/">Helen Edwards</a> and <a href="https://www.linkedin.com/in/daveedwards2/">Dave Edwards</a> are in awe of AI and passionate about course correcting to preserve human authorship and ensure AI systems are for, not from, people.</p><p>Kimberly, Dave and Helen discuss human authorship; AI as a cultural technology; generative AI as a cognitive tool; biological imperatives and culture as a countervailing force; cognitive pairing; metacognition and why perception matters most; diverse intelligences; precautionary design principles; AI as a co-evolutionary factor; human finitude; changing AI’s course; creating minds for our minds; designing for the unknown; kindness and hopeful rebellion.</p><p><a href="https://www.linkedin.com/in/helenedwardskiwi/">Helen Edwards</a> and <a href="https://www.linkedin.com/in/daveedwards2/">Dave Edwards</a> are the co-founders of the <a href="https://www.artificialityinstitute.org/">Artificiality Institute</a>, a nonprofit organization shaping the future human experience of AI. Their longitudinal research program The Chronicle tracks how people actually experience AI in their work and lives.</p><p>Additional Resources:</p><ul><li>Staying Human: Authoring Your Mind in the Age of AI (digital book): <a href="https://journal.artificialityinstitute.org/tag/book-two/">https://journal.artificialityinstitute.org/tag/book-two/</a></li><li>The SaaS Apocalypse Is A Category Error (article): <a href="https://journal.artificialityinstitute.org/the-saas-apocalypse-is-a-category-error/">https://journal.artificialityinstitute.org/the-saas-apocalypse-is-a-category-error/</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep93/transcript">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/dave-edwards" img="https://img.transistorcdn.com/I47KTw-rvHRClvgnQ3Jy48U8HeI8u2UygWesWwgWTOQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9jMjg3/YTY1ZTFjYTUzYWEw/MWIxOGE5ZThmNjZh/YTRlNS5qcGVn.jpg">Dave Edwards</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/helen-edwards" img="https://img.transistorcdn.com/dSUnpeP_nLNA7lMhyLqRfaa9vvjQX5_b3xoEr0Khc4k/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yZTVl/ZWVmYzE4M2E3ZWYy/NTg3NDhlYTZkYTkz/ODUzOC5qcGVn.jpg">Helen Edwards</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/1fc5d6bf/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Prioritizing Public Interest with Afua Bruce</title>
      <itunes:episode>92</itunes:episode>
      <podcast:episode>92</podcast:episode>
      <itunes:title>Prioritizing Public Interest with Afua Bruce</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">18cf27a4-aeaa-4b61-a86f-a37b60295f3d</guid>
      <link>https://share.transistor.fm/s/c7934bfd</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/afua-bruce/">Afua Bruce</a> explains that public interest tech is about solving complicated problems with real impact for real people, not just fuzzy feelings and philanthropy.</p><p>Afua and Kimberly discuss misconceptions about Public Interest Tech (PIT); PIT beyond philanthropy; why tech for good isn’t always; purposeful productivity; “solving” non-profits; tech funding traps; PIT design principles; cross-sector career paths; participatory (vacation) design; the messy middle; focusing on impact; responsible investment; and knowing we still have time.</p><p><a href="https://www.linkedin.com/in/afua-bruce/">Afua Bruce</a> is the founder and CEO of <a href="http://www.anbadvisory.com/">ANB Advisory Group</a>. An author and leading public interest technologist, Afua works with philanthropic institutions, tech companies, and nonprofits to develop and use responsible technology.</p><p>Additional Resources:</p><ul><li>The Tech That Comes Next (book): <a href="https://thetechthatcomesnext.com/">https://thetechthatcomesnext.com/</a></li><li>Dr. Catherine Nakalembe TED Talk: <a href="https://www.ted.com/talks/catherine_nakalembe_why_can_t_we_better_prepare_for_extreme_weather">https://www.ted.com/talks/catherine_nakalembe_why_can_t_we_better_prepare_for_extreme_weather</a></li><li>Humane Intelligence (non-profit): <a href="https://humane-intelligence.org/">https://humane-intelligence.org/</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep92/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/afua-bruce/">Afua Bruce</a> explains that public interest tech is about solving complicated problems with real impact for real people, not just fuzzy feelings and philanthropy.</p><p>Afua and Kimberly discuss misconceptions about Public Interest Tech (PIT); PIT beyond philanthropy; why tech for good isn’t always; purposeful productivity; “solving” non-profits; tech funding traps; PIT design principles; cross-sector career paths; participatory (vacation) design; the messy middle; focusing on impact; responsible investment; and knowing we still have time.</p><p><a href="https://www.linkedin.com/in/afua-bruce/">Afua Bruce</a> is the founder and CEO of <a href="http://www.anbadvisory.com/">ANB Advisory Group</a>. An author and leading public interest technologist, Afua works with philanthropic institutions, tech companies, and nonprofits to develop and use responsible technology.</p><p>Additional Resources:</p><ul><li>The Tech That Comes Next (book): <a href="https://thetechthatcomesnext.com/">https://thetechthatcomesnext.com/</a></li><li>Dr. Catherine Nakalembe TED Talk: <a href="https://www.ted.com/talks/catherine_nakalembe_why_can_t_we_better_prepare_for_extreme_weather">https://www.ted.com/talks/catherine_nakalembe_why_can_t_we_better_prepare_for_extreme_weather</a></li><li>Humane Intelligence (non-profit): <a href="https://humane-intelligence.org/">https://humane-intelligence.org/</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep92/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 18 Mar 2026 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/c7934bfd/9d5d707b.mp3" length="45516113" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/T_4zB80x67h5uBw2HDGEQYAtHJnLGKZnP_fagvjq7zQ/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zNDE5/MDhjMWU4ODlkMTg4/MmM2ZGY4MzkyN2Fk/N2FlYS5qcGc.jpg"/>
      <itunes:duration>2842</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/afua-bruce/">Afua Bruce</a> explains that public interest tech is about solving complicated problems with real impact for real people, not just fuzzy feelings and philanthropy.</p><p>Afua and Kimberly discuss misconceptions about Public Interest Tech (PIT); PIT beyond philanthropy; why tech for good isn’t always; purposeful productivity; “solving” non-profits; tech funding traps; PIT design principles; cross-sector career paths; participatory (vacation) design; the messy middle; focusing on impact; responsible investment; and knowing we still have time.</p><p><a href="https://www.linkedin.com/in/afua-bruce/">Afua Bruce</a> is the founder and CEO of <a href="http://www.anbadvisory.com/">ANB Advisory Group</a>. An author and leading public interest technologist, Afua works with philanthropic institutions, tech companies, and nonprofits to develop and use responsible technology.</p><p>Additional Resources:</p><ul><li>The Tech That Comes Next (book): <a href="https://thetechthatcomesnext.com/">https://thetechthatcomesnext.com/</a></li><li>Dr. Catherine Nakalembe TED Talk: <a href="https://www.ted.com/talks/catherine_nakalembe_why_can_t_we_better_prepare_for_extreme_weather">https://www.ted.com/talks/catherine_nakalembe_why_can_t_we_better_prepare_for_extreme_weather</a></li><li>Humane Intelligence (non-profit): <a href="https://humane-intelligence.org/">https://humane-intelligence.org/</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep92/transcript">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/afua-bruce" img="https://img.transistorcdn.com/Neh5bj7QpBxhMd4mAnA_fl6usBqJw4weuf_sCnXoBAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84NmNj/ZjI2M2FjMGMxYzc1/Yjc1NjkwMDc4OGRl/M2YwMi5qcGc.jpg">Afua Bruce</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/c7934bfd/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>A Student’s Perspective with Seth Rabinowitz</title>
      <itunes:episode>91</itunes:episode>
      <podcast:episode>91</podcast:episode>
      <itunes:title>A Student’s Perspective with Seth Rabinowitz</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7b9f5516-083c-4b61-b7b3-af73ef6c6508</guid>
      <link>https://share.transistor.fm/s/11e2cbab</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/seth-rabinowitz-6982ab193/">Seth Rabinowitz</a> uses AI with intent by studiously prioritizing learning, actively resisting dependency, promoting ethical practices, and seeing people in the data.   </p><p> </p><p>Seth and Kimberly discuss his shift from fearing AI to fearing (some) people using AI; expertise and critical thinking; how different cohorts use AI; resisting dependency and intentional use; the role of educators; developing soft skills; not confusing AI’s learning with your own; stewarding AI; business ethics and data privacy; prioritizing AI fundamentals and putting people first.</p><p><a href="https://www.linkedin.com/in/seth-rabinowitz-6982ab193/">Seth Rabinowitz</a> is pursuing a Master’s degree in <a href="https://dsba.charlotte.edu/">Data Science and Business Analytics</a> at UNC Charlotte.</p><p> </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep91/transcript">here</a>.   </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/seth-rabinowitz-6982ab193/">Seth Rabinowitz</a> uses AI with intent by studiously prioritizing learning, actively resisting dependency, promoting ethical practices, and seeing people in the data.   </p><p> </p><p>Seth and Kimberly discuss his shift from fearing AI to fearing (some) people using AI; expertise and critical thinking; how different cohorts use AI; resisting dependency and intentional use; the role of educators; developing soft skills; not confusing AI’s learning with your own; stewarding AI; business ethics and data privacy; prioritizing AI fundamentals and putting people first.</p><p><a href="https://www.linkedin.com/in/seth-rabinowitz-6982ab193/">Seth Rabinowitz</a> is pursuing a Master’s degree in <a href="https://dsba.charlotte.edu/">Data Science and Business Analytics</a> at UNC Charlotte.</p><p> </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep91/transcript">here</a>.   </p>]]>
      </content:encoded>
      <pubDate>Wed, 04 Mar 2026 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/11e2cbab/205d61a1.mp3" length="37747413" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/Pk3j-4YcQbeehufivUsIZIpDjKBRnnEZoaRkffJ9mJo/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xYjBh/ODM2OTg1YTIwZGI0/YjliYzViZDM4ZTk1/MzBhZC5qcGc.jpg"/>
      <itunes:duration>2357</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/seth-rabinowitz-6982ab193/">Seth Rabinowitz</a> uses AI with intent by studiously prioritizing learning, actively resisting dependency, promoting ethical practices, and seeing people in the data.   </p><p> </p><p>Seth and Kimberly discuss his shift from fearing AI to fearing (some) people using AI; expertise and critical thinking; how different cohorts use AI; resisting dependency and intentional use; the role of educators; developing soft skills; not confusing AI’s learning with your own; stewarding AI; business ethics and data privacy; prioritizing AI fundamentals and putting people first.</p><p><a href="https://www.linkedin.com/in/seth-rabinowitz-6982ab193/">Seth Rabinowitz</a> is pursuing a Master’s degree in <a href="https://dsba.charlotte.edu/">Data Science and Business Analytics</a> at UNC Charlotte.</p><p> </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep91/transcript">here</a>.   </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/seth-rabinowitz" img="https://img.transistorcdn.com/hYyQ-VeIX3KE-l34VEYeAIT_vpTg4eS3VWE2edKe17g/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82NDgx/ZmYyNDRiZGJlMjI1/YmYyNzU2MDgxMDA4/YzE2Yy5qcGc.jpg">Seth Rabinowitz</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/11e2cbab/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Orchestrating Public Sector AI with Taka Ariga</title>
      <itunes:episode>90</itunes:episode>
      <podcast:episode>90</podcast:episode>
      <itunes:title>Orchestrating Public Sector AI with Taka Ariga</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">25059fbb-12ae-4c93-bf63-0e847603f01e</guid>
      <link>https://share.transistor.fm/s/140b5f80</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/takaariga/">Taka Ariga</a> hits all the right notes for AI at scale: clarity of purpose, strong foundations, sustainable innovation, engaged ownership, and a confident workforce.   </p><p> </p><p>Taka and Kimberly discuss going beyond novel AI prototypes; the limits of automation; context building; data sovereignty and integrity; the unstructured data deluge; the unique sensitivities and needs of public agencies; valuing ownership and viable ways to scale; plagiarizing for good; foundations for AI success; wanting innovation without change; rethinking governance; enabling confident AI use; making space for reinvention; and being a skeptical AI advocate.</p><p><a href="https://www.linkedin.com/in/takaariga/">Taka Ariga</a> is a heretical technologist and the founder of <a href="https://sol-imagination.ai/">Sol Imagination</a>. He focuses on AI strategy design, implementation, and value capture. Taka served the US Office of Personnel Management (OPM) as CDO and CAIO and the US Government Accountability Office (GAO) as Chief Data Scientist and Director of the Innovation Lab.</p><p> </p><p>Related Resources:</p><ul><li>Sol Imagination (company)                  <a href="https://sol-imagination.ai/">https://sol-imagination.ai/</a>  </li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep90/transcript">here</a>.   </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/takaariga/">Taka Ariga</a> hits all the right notes for AI at scale: clarity of purpose, strong foundations, sustainable innovation, engaged ownership, and a confident workforce.   </p><p> </p><p>Taka and Kimberly discuss going beyond novel AI prototypes; the limits of automation; context building; data sovereignty and integrity; the unstructured data deluge; the unique sensitivities and needs of public agencies; valuing ownership and viable ways to scale; plagiarizing for good; foundations for AI success; wanting innovation without change; rethinking governance; enabling confident AI use; making space for reinvention; and being a skeptical AI advocate.</p><p><a href="https://www.linkedin.com/in/takaariga/">Taka Ariga</a> is a heretical technologist and the founder of <a href="https://sol-imagination.ai/">Sol Imagination</a>. He focuses on AI strategy design, implementation, and value capture. Taka served the US Office of Personnel Management (OPM) as CDO and CAIO and the US Government Accountability Office (GAO) as Chief Data Scientist and Director of the Innovation Lab.</p><p> </p><p>Related Resources:</p><ul><li>Sol Imagination (company)                  <a href="https://sol-imagination.ai/">https://sol-imagination.ai/</a>  </li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep90/transcript">here</a>.   </p>]]>
      </content:encoded>
      <pubDate>Wed, 18 Feb 2026 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/140b5f80/08c351c3.mp3" length="54641660" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/Zt4M6xyxq4mGwIs8ZsVPV-rTypKNCvqRsdsv-mlE9Fc/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZjc5/NGIwZThhYzBkMmYw/ODdlM2E2YTEwNzRi/MDM5NS5qcGc.jpg"/>
      <itunes:duration>3413</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/takaariga/">Taka Ariga</a> hits all the right notes for AI at scale: clarity of purpose, strong foundations, sustainable innovation, engaged ownership, and a confident workforce.   </p><p> </p><p>Taka and Kimberly discuss going beyond novel AI prototypes; the limits of automation; context building; data sovereignty and integrity; the unstructured data deluge; the unique sensitivities and needs of public agencies; valuing ownership and viable ways to scale; plagiarizing for good; foundations for AI success; wanting innovation without change; rethinking governance; enabling confident AI use; making space for reinvention; and being a skeptical AI advocate.</p><p><a href="https://www.linkedin.com/in/takaariga/">Taka Ariga</a> is a heretical technologist and the founder of <a href="https://sol-imagination.ai/">Sol Imagination</a>. He focuses on AI strategy design, implementation, and value capture. Taka served the US Office of Personnel Management (OPM) as CDO and CAIO and the US Government Accountability Office (GAO) as Chief Data Scientist and Director of the Innovation Lab.</p><p> </p><p>Related Resources:</p><ul><li>Sol Imagination (company)                  <a href="https://sol-imagination.ai/">https://sol-imagination.ai/</a>  </li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep90/transcript">here</a>.   </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/taka-ariga" img="https://img.transistorcdn.com/Q0p99-xEsgeKT_dIlHLOQEbT93G8u3HzbiFWS6cZWtY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82NjM0/MGEzYzMzMGQwZDkz/MzRmOGU0MWEwNzg5/ZWU5OS5qcGc.jpg">Taka Ariga</podcast:person>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/140b5f80/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Navigating AI in Banking with Theodora Lau</title>
      <itunes:episode>89</itunes:episode>
      <podcast:episode>89</podcast:episode>
      <itunes:title>Navigating AI in Banking with Theodora Lau</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">bbdc1f49-fe06-48d3-a6fe-3b4d972d1a31</guid>
      <link>https://share.transistor.fm/s/a72cef47</link>
      <description>
<![CDATA[<p><a href="https://www.linkedin.com/in/theodoralau/">Theodora Lau</a> banks on AI becoming our financial GPS and OS but flags required waypoints to protect consumer data rights, maintain trust and close the digital divide.</p><p>Theo and Kimberly discuss the progression toward a financial GPS powered by AI; consumer data rights and trust; the billion dollar question for 2026; analog identity verification; reducing risk and improving the customer experience; valuing people above transactions; the widening digital divide; upskilling and reskilling; cultivating curiosity and reclaiming time; financial security as the foundation for health; agentic commerce and AI as the financial OS; and always being human.</p><p><a href="https://www.linkedin.com/in/theodoralau/">Theodora Lau</a> is the Founder of Unconventional Ventures. A prolific speaker, author and advisor, Theo is one of American Banker’s Top 20 Influential Women in FinTech. Recognizing that health and financial security are innately entwined, Theo works to spark innovation in the public and private sectors to meet the needs of underrepresented consumers.</p><p> </p><p>Related Resources:</p><ul><li><a href="https://www.bankingonaibook.com/">Banking on (Artificial) Intelligence</a> (book)</li><li><a href="https://feeds.acast.com/public/shows/one-vision">One Vision Podcast</a> (RSS feed)</li><li><a href="https://www.unconventionalventures.com/">Unconventional Ventures</a> (company)</li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep89/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
<![CDATA[<p><a href="https://www.linkedin.com/in/theodoralau/">Theodora Lau</a> banks on AI becoming our financial GPS and OS but flags required waypoints to protect consumer data rights, maintain trust and close the digital divide.</p><p>Theo and Kimberly discuss the progression toward a financial GPS powered by AI; consumer data rights and trust; the billion dollar question for 2026; analog identity verification; reducing risk and improving the customer experience; valuing people above transactions; the widening digital divide; upskilling and reskilling; cultivating curiosity and reclaiming time; financial security as the foundation for health; agentic commerce and AI as the financial OS; and always being human.</p><p><a href="https://www.linkedin.com/in/theodoralau/">Theodora Lau</a> is the Founder of Unconventional Ventures. A prolific speaker, author and advisor, Theo is one of American Banker’s Top 20 Influential Women in FinTech. Recognizing that health and financial security are innately entwined, Theo works to spark innovation in the public and private sectors to meet the needs of underrepresented consumers.</p><p> </p><p>Related Resources:</p><ul><li><a href="https://www.bankingonaibook.com/">Banking on (Artificial) Intelligence</a> (book)</li><li><a href="https://feeds.acast.com/public/shows/one-vision">One Vision Podcast</a> (RSS feed)</li><li><a href="https://www.unconventionalventures.com/">Unconventional Ventures</a> (company)</li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep89/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 04 Feb 2026 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/a72cef47/5d6aa449.mp3" length="50271765" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/rf0TDcfbrXtEZutHrUF1DbpvKALbIOy4aFjD7LFXMdA/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9mMjk3/ZTI4Mzg5Y2YxMWVm/ZGY0NTc3NGIzMWQy/ZGZjMi5qcGc.jpg"/>
      <itunes:duration>3140</itunes:duration>
      <itunes:summary>
<![CDATA[<p><a href="https://www.linkedin.com/in/theodoralau/">Theodora Lau</a> banks on AI becoming our financial GPS and OS but flags required waypoints to protect consumer data rights, maintain trust and close the digital divide.</p><p>Theo and Kimberly discuss the progression toward a financial GPS powered by AI; consumer data rights and trust; the billion dollar question for 2026; analog identity verification; reducing risk and improving the customer experience; valuing people above transactions; the widening digital divide; upskilling and reskilling; cultivating curiosity and reclaiming time; financial security as the foundation for health; agentic commerce and AI as the financial OS; and always being human.</p><p><a href="https://www.linkedin.com/in/theodoralau/">Theodora Lau</a> is the Founder of Unconventional Ventures. A prolific speaker, author and advisor, Theo is one of American Banker’s Top 20 Influential Women in FinTech. Recognizing that health and financial security are innately entwined, Theo works to spark innovation in the public and private sectors to meet the needs of underrepresented consumers.</p><p> </p><p>Related Resources:</p><ul><li><a href="https://www.bankingonaibook.com/">Banking on (Artificial) Intelligence</a> (book)</li><li><a href="https://feeds.acast.com/public/shows/one-vision">One Vision Podcast</a> (RSS feed)</li><li><a href="https://www.unconventionalventures.com/">Unconventional Ventures</a> (company)</li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep89/transcript">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/theodora-lau" img="https://img.transistorcdn.com/yu2vK3fgD2Mj0kB4iIcaKkh7SywUxfqrWl37EYw92P8/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81MjVh/ZjljODcyYTNmMmI5/MjQ3MDYzNDBhNzk2/OWRiNS5qcGc.jpg">Theodora Lau</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/a72cef47/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>AI Is As Data Does with Gretchen Stewart</title>
      <itunes:episode>88</itunes:episode>
      <podcast:episode>88</podcast:episode>
      <itunes:title>AI Is As Data Does with Gretchen Stewart</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">da25ef32-948f-4ad0-bec6-a4ba32c7ff4a</guid>
      <link>https://share.transistor.fm/s/a549d841</link>
      <description>
<![CDATA[<p><a href="https://www.linkedin.com/in/gretchen-stewart-5a33b31/">Gretchen Stewart</a> knows she doesn’t know it all, always asks why, challenges oversimplified AI stories, champions multi-disciplinary teams and doubles down on data.  </p><p> </p><p>Gretchen and Kimberly discuss conflating GenAI with AI, data as the underpinning for all things AI, workflow engineering, AI as a team sport, organizational and data silos, programming as a valued skill, agentic AI and workforce reductions, the complexity inherent in an interconnected world, data volume vs. quality, backsliding on governance, not knowing it all and diversity as a force multiplier.</p><p><br><a href="https://www.linkedin.com/in/gretchen-stewart-5a33b31/">Gretchen Stewart</a> is a Principal Engineer at <a href="https://www.intel.com/content/www/us/en/artificial-intelligence/overview.html">Intel</a>. She serves as the Chief Data Scientist for the public sector and is a member of the enterprise HPC and AI architecture team. A self-professed human-to-geek translator, Gretchen was recently nominated as a Top 100 Data and AI Leader by OnConferences.</p><p> </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep88/transcript">here</a>.   </p>]]>
      </description>
      <content:encoded>
<![CDATA[<p><a href="https://www.linkedin.com/in/gretchen-stewart-5a33b31/">Gretchen Stewart</a> knows she doesn’t know it all, always asks why, challenges oversimplified AI stories, champions multi-disciplinary teams and doubles down on data.  </p><p> </p><p>Gretchen and Kimberly discuss conflating GenAI with AI, data as the underpinning for all things AI, workflow engineering, AI as a team sport, organizational and data silos, programming as a valued skill, agentic AI and workforce reductions, the complexity inherent in an interconnected world, data volume vs. quality, backsliding on governance, not knowing it all and diversity as a force multiplier.</p><p><br><a href="https://www.linkedin.com/in/gretchen-stewart-5a33b31/">Gretchen Stewart</a> is a Principal Engineer at <a href="https://www.intel.com/content/www/us/en/artificial-intelligence/overview.html">Intel</a>. She serves as the Chief Data Scientist for the public sector and is a member of the enterprise HPC and AI architecture team. A self-professed human-to-geek translator, Gretchen was recently nominated as a Top 100 Data and AI Leader by OnConferences.</p><p> </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep88/transcript">here</a>.   </p>]]>
      </content:encoded>
      <pubDate>Wed, 21 Jan 2026 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/a549d841/8045ad36.mp3" length="45190148" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/3txET-gXviUIheK-jMtxmXbfM3uaKapgCZGJRvzX39k/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9mNDY4/M2I3ZTRiNWZmZDY4/MTg4ZWM0MzU3ZDUy/Y2Q0ZS5qcGc.jpg"/>
      <itunes:duration>2823</itunes:duration>
      <itunes:summary>
<![CDATA[<p><a href="https://www.linkedin.com/in/gretchen-stewart-5a33b31/">Gretchen Stewart</a> knows she doesn’t know it all, always asks why, challenges oversimplified AI stories, champions multi-disciplinary teams and doubles down on data.  </p><p> </p><p>Gretchen and Kimberly discuss conflating GenAI with AI, data as the underpinning for all things AI, workflow engineering, AI as a team sport, organizational and data silos, programming as a valued skill, agentic AI and workforce reductions, the complexity inherent in an interconnected world, data volume vs. quality, backsliding on governance, not knowing it all and diversity as a force multiplier.</p><p><br><a href="https://www.linkedin.com/in/gretchen-stewart-5a33b31/">Gretchen Stewart</a> is a Principal Engineer at <a href="https://www.intel.com/content/www/us/en/artificial-intelligence/overview.html">Intel</a>. She serves as the Chief Data Scientist for the public sector and is a member of the enterprise HPC and AI architecture team. A self-professed human-to-geek translator, Gretchen was recently nominated as a Top 100 Data and AI Leader by OnConferences.</p><p> </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep88/transcript">here</a>.   </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/gretchen-stewart" img="https://img.transistorcdn.com/iyBSBrfC0M_a4Iw4xHqSOqh6cpwGy9uaVE1n0h29FdY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yZDIw/N2ZlZTZhZTE4YTI1/OWNjNTFlMzVmYjc4/OWU1Zi5qcGc.jpg">Gretchen Stewart</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/a549d841/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>An AI Assessment with Chris Marshall</title>
      <itunes:episode>87</itunes:episode>
      <podcast:episode>87</podcast:episode>
      <itunes:title>An AI Assessment with Chris Marshall</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a647b58d-717d-40ea-8ab6-82275b795cf4</guid>
      <link>https://share.transistor.fm/s/db2a2c95</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/clmarshall1/">Dr. Chris Marshall</a> analyzes AI from all angles including market dynamics, geopolitical concerns, workforce impacts, and what staying the course with agentic AI requires.</p><p>Chris and Kimberly discuss his journey from theoretical physics to analytic philosophy, AI as an economic and geopolitical concern, the rise of sovereign AI, scale economies, market bubbles and expectation gaps, the AI value horizon, why agentic AI is harder than GenAI, calibrating risk and justifying trust, expertise and the workforce, not overlooking Rodney Dangerfield, foundational elements for success, betting on AIOps, and acting in teams.     </p><p><br><a href="https://www.linkedin.com/in/clmarshall1/">Dr. Chris L Marshall</a> is a Vice President at <a href="https://www.idc.com/ap/home/">IDC Asia/Pacific</a> with responsibility for industry insights, data, analytics and AI. A former partner and executive at companies such as IBM, KPMG, Oracle, FIS, and UBS, Chris’s mission is to translate innovative technologies into industry insights and business value for the digital economy.</p><p>Related Resources</p><ul><li><a href="https://www.sas.com/en_us/news/analyst-viewpoints/idc-data-ai-impact-report.html">Data and AI Impact Report: The Trust Imperative</a> (IDC Research)</li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep87/transcript"><strong>here</strong></a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/clmarshall1/">Dr. Chris Marshall</a> analyzes AI from all angles including market dynamics, geopolitical concerns, workforce impacts, and what staying the course with agentic AI requires.</p><p>Chris and Kimberly discuss his journey from theoretical physics to analytic philosophy, AI as an economic and geopolitical concern, the rise of sovereign AI, scale economies, market bubbles and expectation gaps, the AI value horizon, why agentic AI is harder than GenAI, calibrating risk and justifying trust, expertise and the workforce, not overlooking Rodney Dangerfield, foundational elements for success, betting on AIOps, and acting in teams.     </p><p><br><a href="https://www.linkedin.com/in/clmarshall1/">Dr. Chris L Marshall</a> is a Vice President at <a href="https://www.idc.com/ap/home/">IDC Asia/Pacific</a> with responsibility for industry insights, data, analytics and AI. A former partner and executive at companies such as IBM, KPMG, Oracle, FIS, and UBS, Chris’s mission is to translate innovative technologies into industry insights and business value for the digital economy.</p><p>Related Resources</p><ul><li><a href="https://www.sas.com/en_us/news/analyst-viewpoints/idc-data-ai-impact-report.html">Data and AI Impact Report: The Trust Imperative</a> (IDC Research)</li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep87/transcript"><strong>here</strong></a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 07 Jan 2026 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/db2a2c95/ad6f5e0f.mp3" length="54761757" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/Atar0Vkr7d-5j7fKchhB9CxvVegIZIRd4I4R9dgcYKQ/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yMDYz/YWQ1MjEwZGEyMDdl/OWIyM2MxZWQxZGYw/ZTgzMS5qcGc.jpg"/>
      <itunes:duration>3420</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/clmarshall1/">Dr. Chris Marshall</a> analyzes AI from all angles including market dynamics, geopolitical concerns, workforce impacts, and what staying the course with agentic AI requires.</p><p>Chris and Kimberly discuss his journey from theoretical physics to analytic philosophy, AI as an economic and geopolitical concern, the rise of sovereign AI, scale economies, market bubbles and expectation gaps, the AI value horizon, why agentic AI is harder than GenAI, calibrating risk and justifying trust, expertise and the workforce, not overlooking Rodney Dangerfield, foundational elements for success, betting on AIOps, and acting in teams.     </p><p><br><a href="https://www.linkedin.com/in/clmarshall1/">Dr. Chris L Marshall</a> is a Vice President at <a href="https://www.idc.com/ap/home/">IDC Asia/Pacific</a> with responsibility for industry insights, data, analytics and AI. A former partner and executive at companies such as IBM, KPMG, Oracle, FIS, and UBS, Chris’s mission is to translate innovative technologies into industry insights and business value for the digital economy.</p><p>Related Resources</p><ul><li><a href="https://www.sas.com/en_us/news/analyst-viewpoints/idc-data-ai-impact-report.html">Data and AI Impact Report: The Trust Imperative</a> (IDC Research)</li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep87/transcript"><strong>here</strong></a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/dr-chris-l-marshall" img="https://img.transistorcdn.com/xv2f1Y7JetTm9XWtTL0-Db8RrrL5fMNioucdS9HE1OM/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS83Yzcz/MzZkYzY3ZWYwMWQ4/ZjBmM2QxNGYyYWVi/M2E5NS5qcGVn.jpg">Dr. Chris L. Marshall</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/db2a2c95/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Perspectives and Predictions: Looking Back at 2025 and Forward to 2026</title>
      <itunes:episode>86</itunes:episode>
      <podcast:episode>86</podcast:episode>
      <itunes:title>Perspectives and Predictions: Looking Back at 2025 and Forward to 2026</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">9f1991f8-114b-489a-8828-9499c2614dcc</guid>
      <link>https://share.transistor.fm/s/f0821130</link>
      <description>
        <![CDATA[<p>A retrospective sampling of ideas and questions our illustrious guests gifted us in 2025, alongside some glad and not-so-glad tidings (ok, predictions) for AI in 2026.</p><p>In this episode we revisit insights from our guests and, perhaps, introduce those you may have missed along the way. Select guests provide sparky takes on what may happen in 2026.</p><p>Host Note: I desperately wanted to use the word prognostication in reference to the latter segment. But although the word sounds cool, it implies a level of mysticism entirely out of keeping with the informed opinions these guests have proffered. So, predictions it is.</p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep86/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>A retrospective sampling of ideas and questions our illustrious guests gifted us in 2025, alongside some glad and not-so-glad tidings (ok, predictions) for AI in 2026.</p><p>In this episode we revisit insights from our guests and, perhaps, introduce those you may have missed along the way. Select guests provide sparky takes on what may happen in 2026.</p><p>Host Note: I desperately wanted to use the word prognostication in reference to the latter segment. But although the word sounds cool, it implies a level of mysticism entirely out of keeping with the informed opinions these guests have proffered. So, predictions it is.</p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep86/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 24 Dec 2025 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/f0821130/963925a8.mp3" length="22655072" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/O7A27MJDXqoL7-zwhJFrc1QUUZmuhc0IByioudDvuCA/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82MDA2/MTBlNDFmZTg3YWJj/ZDYxYjg3ZTIyMWFk/ZDM2Ni5qcGc.jpg"/>
      <itunes:duration>1414</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>A retrospective sampling of ideas and questions our illustrious guests gifted us in 2025, alongside some glad and not-so-glad tidings (ok, predictions) for AI in 2026.</p><p>In this episode we revisit insights from our guests and, perhaps, introduce those you may have missed along the way. Select guests provide sparky takes on what may happen in 2026.</p><p>Host Note: I desperately wanted to use the word prognostication in reference to the latter segment. But although the word sounds cool, it implies a level of mysticism entirely out of keeping with the informed opinions these guests have proffered. So, predictions it is.</p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep86/transcript">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/f0821130/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>An Environmental Grounding with Masheika Allgood</title>
      <itunes:episode>85</itunes:episode>
      <podcast:episode>85</podcast:episode>
      <itunes:title>An Environmental Grounding with Masheika Allgood</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b8fdf485-10e9-4add-b7e5-9ec29169ad97</guid>
      <link>https://share.transistor.fm/s/eb89656f</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/masheika-allgood/">Masheika Allgood</a> delineates good AI from GenAI, outlines the environmental imprint of hyperscale data centers, and emphasizes that AI success depends on the why and the data.</p><p>Masheika and Kimberly discuss her path from law to AI; AI as an embodied infrastructure; forms of beneficial AI; whether the GenAI math maths; narratives underpinning AI; the physical imprint of hyperscale data centers; the fallacy of closed-loop cooling; who pays for electrical capacity; enabling community dialogue; starting with why in AI product design; AI as a data infrastructure play; staying positive and finding the thing you can do.</p><p><a href="https://www.linkedin.com/in/masheika-allgood/">Masheika Allgood</a> is an AI Ethicist and Founder of <a href="https://www.allai-us.com/">AllAI Consulting</a>. She is a well-known advocate for sustainable AI development and a contributor to the IEEE <a href="https://standards.ieee.org/ieee/7100/11671/">P7100 Standard</a> for Measurement of Environmental Impacts of Artificial Intelligence Systems.</p><p>Related Resources</p><ul><li><a href="https://www.tapsrundry.com/">Taps Run Dry Initiative</a> (Website)</li><li><a href="https://www.tapsrundry.com/civic-participation">Data Center Advocacy Toolkit</a> (Website)</li><li><a href="https://eatyourfrog.substack.com/">Eat Your Frog</a> (Substack)</li><li><a href="https://www.linkedin.com/learning/ai-data-governance-compliance-and-auditing-for-developers/your-developer-guide-to-ai-data-governance">AI Data Governance, Compliance, and Auditing for Developers</a> (LinkedIn Learning)</li><li><a href="https://www.amazon.com/Mind-Play-Shannon-Invented-Information/dp/1476766681">A Mind at Play: How Claude Shannon Invented the Information Age</a> (Referenced Book)</li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep85/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/masheika-allgood/">Masheika Allgood</a> delineates good AI from GenAI, outlines the environmental imprint of hyperscale data centers, and emphasizes that AI success depends on the why and the data.</p><p>Masheika and Kimberly discuss her path from law to AI; AI as an embodied infrastructure; forms of beneficial AI; whether the GenAI math maths; narratives underpinning AI; the physical imprint of hyperscale data centers; the fallacy of closed-loop cooling; who pays for electrical capacity; enabling community dialogue; starting with why in AI product design; AI as a data infrastructure play; staying positive and finding the thing you can do.</p><p><a href="https://www.linkedin.com/in/masheika-allgood/">Masheika Allgood</a> is an AI Ethicist and Founder of <a href="https://www.allai-us.com/">AllAI Consulting</a>. She is a well-known advocate for sustainable AI development and a contributor to the IEEE <a href="https://standards.ieee.org/ieee/7100/11671/">P7100 Standard</a> for Measurement of Environmental Impacts of Artificial Intelligence Systems.</p><p>Related Resources</p><ul><li><a href="https://www.tapsrundry.com/">Taps Run Dry Initiative</a> (Website)</li><li><a href="https://www.tapsrundry.com/civic-participation">Data Center Advocacy Toolkit</a> (Website)</li><li><a href="https://eatyourfrog.substack.com/">Eat Your Frog</a> (Substack)</li><li><a href="https://www.linkedin.com/learning/ai-data-governance-compliance-and-auditing-for-developers/your-developer-guide-to-ai-data-governance">AI Data Governance, Compliance, and Auditing for Developers</a> (LinkedIn Learning)</li><li><a href="https://www.amazon.com/Mind-Play-Shannon-Invented-Information/dp/1476766681">A Mind at Play: How Claude Shannon Invented the Information Age</a> (Referenced Book)</li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep85/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 10 Dec 2025 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/eb89656f/800f5252.mp3" length="54381870" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/iGAdSrezy37iZdKWsSO7sA0tamdGMnHTaysHQvPFB6E/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xN2Ux/YzgyZWVhZTFmYzQ4/YWVlODdmODA3OGIw/MGI3YS5qcGc.jpg"/>
      <itunes:duration>3397</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/masheika-allgood/">Masheika Allgood</a> delineates good AI from GenAI, outlines the environmental imprint of hyperscale data centers, and emphasizes that AI success depends on the why and the data.</p><p>Masheika and Kimberly discuss her path from law to AI; AI as an embodied infrastructure; forms of beneficial AI; whether the GenAI math maths; narratives underpinning AI; the physical imprint of hyperscale data centers; the fallacy of closed-loop cooling; who pays for electrical capacity; enabling community dialogue; starting with why in AI product design; AI as a data infrastructure play; staying positive and finding the thing you can do.</p><p><a href="https://www.linkedin.com/in/masheika-allgood/">Masheika Allgood</a> is an AI Ethicist and Founder of <a href="https://www.allai-us.com/">AllAI Consulting</a>. She is a well-known advocate for sustainable AI development and a contributor to the IEEE <a href="https://standards.ieee.org/ieee/7100/11671/">P7100 Standard</a> for Measurement of Environmental Impacts of Artificial Intelligence Systems.</p><p>Related Resources</p><ul><li><a href="https://www.tapsrundry.com/">Taps Run Dry Initiative</a> (Website)</li><li><a href="https://www.tapsrundry.com/civic-participation">Data Center Advocacy Toolkit</a> (Website)</li><li><a href="https://eatyourfrog.substack.com/">Eat Your Frog</a> (Substack)</li><li><a href="https://www.linkedin.com/learning/ai-data-governance-compliance-and-auditing-for-developers/your-developer-guide-to-ai-data-governance">AI Data Governance, Compliance, and Auditing for Developers</a> (LinkedIn Learning)</li><li><a href="https://www.amazon.com/Mind-Play-Shannon-Invented-Information/dp/1476766681">A Mind at Play: How Claude Shannon Invented the Information Age</a> (Referenced Book)</li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep85/transcript">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/eb89656f/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Your Digital Twin Is Not You with Kati Walcott</title>
      <itunes:episode>84</itunes:episode>
      <podcast:episode>84</podcast:episode>
      <itunes:title>Your Digital Twin Is Not You with Kati Walcott</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">1db7d882-339e-40d6-9e51-9cfd30ea5be5</guid>
      <link>https://share.transistor.fm/s/1b20e1e9</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/katalinbartfaiwalcott/">Kati Walcott</a> differentiates simulated will from genuine intent, data sharing from data surrender, and agents from agency in a quest to ensure digital sovereignty for all.</p><p>Kati and Kimberly discuss her journey from molecular genetics to AI engineering; the evolution of an intention economy built on simulated will; the provider ecosystem and monetization as a motive; capturing genuine intent; non-benign aspects of personalization; how a single bad data point can be a health hazard; the 3 styles of digital data; data sharing vs. data surrender; whether digital society represents reality; restoring authorship over our digital selves; pivoting from convenience to governance; why AI is only accountable when your will is enforced; and the urgent need to disrupt feudal economics in AI.<br> </p><p><a href="https://www.linkedin.com/in/katalinbartfaiwalcott/">Kati Walcott</a> is the Founder and Chief Technology Officer at <a href="https://synovient.com/">Synovient</a>. With over 120 international patents, Kati is a visionary tech inventor, author and leader focused on digital representation, rights and citizenship in the Digital Data Economy.</p><p>Related Resources</p><ul><li><a href="https://www.linkedin.com/pulse/false-intention-economy-how-ai-systems-replacing-b%C3%A1rtfai-walcott-4vxuc/">The False Intention Economy: How AI Systems are Replacing Human Will with Modeled Behavior</a> (LinkedIn Article)</li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep84/transcript">here</a>.   </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/katalinbartfaiwalcott/">Kati Walcott</a> differentiates simulated will from genuine intent, data sharing from data surrender, and agents from agency in a quest to ensure digital sovereignty for all.</p><p>Kati and Kimberly discuss her journey from molecular genetics to AI engineering; the evolution of an intention economy built on simulated will; the provider ecosystem and monetization as a motive; capturing genuine intent; non-benign aspects of personalization; how a single bad data point can be a health hazard; the 3 styles of digital data; data sharing vs. data surrender; whether digital society represents reality; restoring authorship over our digital selves; pivoting from convenience to governance; why AI is only accountable when your will is enforced; and the urgent need to disrupt feudal economics in AI.<br> </p><p><a href="https://www.linkedin.com/in/katalinbartfaiwalcott/">Kati Walcott</a> is the Founder and Chief Technology Officer at <a href="https://synovient.com/">Synovient</a>. With over 120 international patents, Kati is a visionary tech inventor, author and leader focused on digital representation, rights and citizenship in the Digital Data Economy.</p><p>Related Resources</p><ul><li><a href="https://www.linkedin.com/pulse/false-intention-economy-how-ai-systems-replacing-b%C3%A1rtfai-walcott-4vxuc/">The False Intention Economy: How AI Systems are Replacing Human Will with Modeled Behavior</a> (LinkedIn Article)</li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep84/transcript">here</a>.   </p>]]>
      </content:encoded>
      <pubDate>Wed, 26 Nov 2025 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/1b20e1e9/e684c787.mp3" length="50554399" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/6KFt9miWNPhEoZFB_78zv_ehuBWwZkHDZMV4pVm3G-Q/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8wYTNl/MjNlNzcwMTk4ZTY2/YWJjN2ExNWRlOGY1/MmRhNi5qcGc.jpg"/>
      <itunes:duration>3157</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/katalinbartfaiwalcott/">Kati Walcott</a> differentiates simulated will from genuine intent, data sharing from data surrender, and agents from agency in a quest to ensure digital sovereignty for all.</p><p>Kati and Kimberly discuss her journey from molecular genetics to AI engineering; the evolution of an intention economy built on simulated will; the provider ecosystem and monetization as a motive; capturing genuine intent; non-benign aspects of personalization; how a single bad data point can be a health hazard; the 3 styles of digital data; data sharing vs. data surrender; whether digital society represents reality; restoring authorship over our digital selves; pivoting from convenience to governance; why AI is only accountable when your will is enforced; and the urgent need to disrupt feudal economics in AI.<br> </p><p><a href="https://www.linkedin.com/in/katalinbartfaiwalcott/">Kati Walcott</a> is the Founder and Chief Technology Officer at <a href="https://synovient.com/">Synovient</a>. With over 120 international patents, Kati is a visionary tech inventor, author and leader focused on digital representation, rights and citizenship in the Digital Data Economy.</p><p>Related Resources</p><ul><li><a href="https://www.linkedin.com/pulse/false-intention-economy-how-ai-systems-replacing-b%C3%A1rtfai-walcott-4vxuc/">The False Intention Economy: How AI Systems are Replacing Human Will with Modeled Behavior</a> (LinkedIn Article)</li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep84/transcript">here</a>.   </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/kati-walcott" img="https://img.transistorcdn.com/hi0NyfPxmabXcvSBsXzWSSuJloVjvaZ5y2NY-Iv7Kyg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xNjBk/YmQ0MDUwYWIzYjFi/NDkxNTkxZmI5MjFk/MDU4My53ZWJw.jpg">Kati Walcott</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/1b20e1e9/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>No Community Left Behind with Paula Helm</title>
      <itunes:episode>83</itunes:episode>
      <podcast:episode>83</podcast:episode>
      <itunes:title>No Community Left Behind with Paula Helm</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">9ebce5fc-9c17-4b67-97b9-3d58b094cc98</guid>
      <link>https://share.transistor.fm/s/13b68484</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/paula-helm-775383270/">Paula Helm</a> articulates an AI vision that goes beyond base performance to include epistemic justice and cultural diversity by focusing on speakers, not language alone.</p><p>Paula and Kimberly discuss ethics as a science; language as a core element of culture; going beyond superficial diversity; epistemic justice and valuing others’ knowledge; the translation fallacy; indigenous languages as oral goods; centering speakers and communities; linguistic autonomy and economic participation; the Māori view on data ownership; the role of data subjects; enabling cultural understanding, self-determination and expression; the limits of synthetic data; ethical issues as power asymmetries; and reflecting on what AI mirrors back to us.</p><p><a href="https://www.linkedin.com/in/paula-helm-775383270/">Paula Helm</a> is an Assistant Professor of Empirical Ethics and Data Science at the University of Amsterdam. Her work sits at the intersection of STS, Media Studies and Ethics. In 2022 Paula was recognized as one of the 100 Most Brilliant Women in AI-Ethics.</p><p>Related Resources</p><ul><li><a href="https://journals.sagepub.com/doi/full/10.1177/20539517241249447">Generating Reality and Silencing Debate: Synthetic Data as Discursive Device</a> (Paper)</li><li><a href="https://link.springer.com/article/10.1007/s10676-023-09742-6">Diversity and Language Technology</a> (Paper)</li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep83/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/paula-helm-775383270/">Paula Helm</a> articulates an AI vision that goes beyond base performance to include epistemic justice and cultural diversity by focusing on speakers, not language alone.</p><p>Paula and Kimberly discuss ethics as a science; language as a core element of culture; going beyond superficial diversity; epistemic justice and valuing others’ knowledge; the translation fallacy; indigenous languages as oral goods; centering speakers and communities; linguistic autonomy and economic participation; the Māori view on data ownership; the role of data subjects; enabling cultural understanding, self-determination and expression; the limits of synthetic data; ethical issues as power asymmetries; and reflecting on what AI mirrors back to us.</p><p><a href="https://www.linkedin.com/in/paula-helm-775383270/">Paula Helm</a> is an Assistant Professor of Empirical Ethics and Data Science at the University of Amsterdam. Her work sits at the intersection of STS, Media Studies and Ethics. In 2022 Paula was recognized as one of the 100 Most Brilliant Women in AI-Ethics.</p><p>Related Resources</p><ul><li><a href="https://journals.sagepub.com/doi/full/10.1177/20539517241249447">Generating Reality and Silencing Debate: Synthetic Data as Discursive Device</a> (Paper)</li><li><a href="https://link.springer.com/article/10.1007/s10676-023-09742-6">Diversity and Language Technology</a> (Paper)</li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep83/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 12 Nov 2025 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/13b68484/0e246039.mp3" length="49422331" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/y_6ERiOuDQnsuMf79TmltZwZn7h58YMuCQUejVzs1LE/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9jYzhh/M2Y3YWQ2NDBjN2Yy/OTMxZWY4OTYzZTNk/YjZkMS5qcGc.jpg"/>
      <itunes:duration>3086</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/paula-helm-775383270/">Paula Helm</a> articulates an AI vision that goes beyond base performance to include epistemic justice and cultural diversity by focusing on speakers and not language alone.</p><p> </p><p>Paula and Kimberly discuss ethics as a science; language as a core element of culture; going beyond superficial diversity; epistemic justice and valuing others’ knowledge; the translation fallacy; indigenous languages as oral goods; centering speakers and communities; linguistic autonomy and economic participation; the Māori view on data ownership; the role of data subjects; enabling cultural understanding, self-determination and expression; the limits of synthetic data; ethical issues as power asymmetries; and reflecting on what AI mirrors back to us.</p><p> </p><p><a href="https://www.linkedin.com/in/paula-helm-775383270/">Paula Helm</a> is an Assistant Professor of Empirical Ethics and Data Science at the University of Amsterdam. Her work sits at the intersection of STS, Media Studies and Ethics. In 2022, Paula was recognized as one of the 100 Most Brilliant Women in AI Ethics.</p><p>Related Resources</p><ul><li>Generating Reality and Silencing Debate: Synthetic Data as Discursive Device (paper): <a href="https://journals.sagepub.com/doi/full/10.1177/20539517241249447">https://journals.sagepub.com/doi/full/10.1177/20539517241249447</a></li><li>Diversity and Language Technology (paper): <a href="https://link.springer.com/article/10.1007/s10676-023-09742-6">https://link.springer.com/article/10.1007/s10676-023-09742-6</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep83/transcript">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/paula-helm" img="https://img.transistorcdn.com/6ClZknuI-KqJe2f0Xi11KEnDV0Mz4lKQX0N56jWZq-M/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iOWVm/OTI5MzU5MmFjNjY0/MjYzNzhmNzMwNDk1/MTQ0MS5qcGc.jpg">Paula Helm</podcast:person>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/13b68484/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>What AI Values with Jordan Loewen-Colón</title>
      <itunes:episode>82</itunes:episode>
      <podcast:episode>82</podcast:episode>
      <itunes:title>What AI Values with Jordan Loewen-Colón</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2697db92-900d-4c48-89f7-9da004517ede</guid>
      <link>https://share.transistor.fm/s/9c6d010a</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/jordanloewencolon/">Jordan Loewen-Colón</a> values clarity regarding the practical impacts, philosophical implications and work required for AI to serve the public good, not just private gain.</p><p>Jordan and Kimberly discuss value alignment as an engineering or social problem; understanding ourselves as data personas; the limits of personalization; the perception of agency; how AI shapes our language and desires; flattening of culture and personality; localized models and vernacularization; what LLMs value (so to speak); how tools from calculators to LLMs embody values; whether AI accountability is on anyone’s radar; failures of policy and regulation; positive signals; and getting educated and fostering the best AI has to offer.</p><p><a href="https://www.linkedin.com/in/jordanloewencolon/">Jordan Loewen-Colón</a> is an Adjunct Associate Professor of AI Ethics and Policy at Smith School of Business | Queen's University. He is also the Co-Founder of the <a href="https://aialtlab.org/">AI Alt Lab</a>, which is dedicated to ensuring AI serves the public good and not just private gain.</p><p>Related Resources</p><ul><li>HBR Research: Do LLMs Have Values? (paper): <a href="https://hbr.org/2025/05/research-do-llms-have-values">https://hbr.org/2025/05/research-do-llms-have-values</a></li><li>AI4HF Beyond Surface Collaboration: How AI Enables High-Performing Teams (paper): <a href="https://www.aiforhumanflourishing.com/the-framework-papers/relationshipsandcommunication">https://www.aiforhumanflourishing.com/the-framework-papers/relationshipsandcommunication</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep82/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/jordanloewencolon/">Jordan Loewen-Colón</a> values clarity regarding the practical impacts, philosophical implications and work required for AI to serve the public good, not just private gain.</p><p>Jordan and Kimberly discuss value alignment as an engineering or social problem; understanding ourselves as data personas; the limits of personalization; the perception of agency; how AI shapes our language and desires; flattening of culture and personality; localized models and vernacularization; what LLMs value (so to speak); how tools from calculators to LLMs embody values; whether AI accountability is on anyone’s radar; failures of policy and regulation; positive signals; and getting educated and fostering the best AI has to offer.</p><p><a href="https://www.linkedin.com/in/jordanloewencolon/">Jordan Loewen-Colón</a> is an Adjunct Associate Professor of AI Ethics and Policy at Smith School of Business | Queen's University. He is also the Co-Founder of the <a href="https://aialtlab.org/">AI Alt Lab</a>, which is dedicated to ensuring AI serves the public good and not just private gain.</p><p>Related Resources</p><ul><li>HBR Research: Do LLMs Have Values? (paper): <a href="https://hbr.org/2025/05/research-do-llms-have-values">https://hbr.org/2025/05/research-do-llms-have-values</a></li><li>AI4HF Beyond Surface Collaboration: How AI Enables High-Performing Teams (paper): <a href="https://www.aiforhumanflourishing.com/the-framework-papers/relationshipsandcommunication">https://www.aiforhumanflourishing.com/the-framework-papers/relationshipsandcommunication</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep82/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 29 Oct 2025 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/9c6d010a/6b861b6e.mp3" length="49006532" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/hZMxkLnSYCU0mZtNT997c_ZJTbqcn1fXVfkn870YOP8/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9mMGZj/NGRiZmFjYTYxMjIw/NmZjN2E2MmIxNTIw/NTlhZS5qcGc.jpg"/>
      <itunes:duration>3061</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/jordanloewencolon/">Jordan Loewen-Colón</a> values clarity regarding the practical impacts, philosophical implications and work required for AI to serve the public good, not just private gain.</p><p>Jordan and Kimberly discuss value alignment as an engineering or social problem; understanding ourselves as data personas; the limits of personalization; the perception of agency; how AI shapes our language and desires; flattening of culture and personality; localized models and vernacularization; what LLMs value (so to speak); how tools from calculators to LLMs embody values; whether AI accountability is on anyone’s radar; failures of policy and regulation; positive signals; and getting educated and fostering the best AI has to offer.</p><p><a href="https://www.linkedin.com/in/jordanloewencolon/">Jordan Loewen-Colón</a> is an Adjunct Associate Professor of AI Ethics and Policy at Smith School of Business | Queen's University. He is also the Co-Founder of the <a href="https://aialtlab.org/">AI Alt Lab</a>, which is dedicated to ensuring AI serves the public good and not just private gain.</p><p>Related Resources</p><ul><li>HBR Research: Do LLMs Have Values? (paper): <a href="https://hbr.org/2025/05/research-do-llms-have-values">https://hbr.org/2025/05/research-do-llms-have-values</a></li><li>AI4HF Beyond Surface Collaboration: How AI Enables High-Performing Teams (paper): <a href="https://www.aiforhumanflourishing.com/the-framework-papers/relationshipsandcommunication">https://www.aiforhumanflourishing.com/the-framework-papers/relationshipsandcommunication</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep82/transcript">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/jordan-loewen-colon" img="https://img.transistorcdn.com/sMS7L3p8frPNJ-eSjkk-wQN6Glkst00Q-EBm0dofke8/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xZTE3/YjAyZTRjNTU1MjIw/MTQxMDI0YmE2MjEx/YzRhZC5qcGc.jpg">Jordan Loewen-Colón</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/9c6d010a/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Agentic Insecurities with Keren Katz</title>
      <itunes:episode>81</itunes:episode>
      <podcast:episode>81</podcast:episode>
      <itunes:title>Agentic Insecurities with Keren Katz</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">abfca592-6cf2-47e5-a0b8-802d8ab3c6a2</guid>
      <link>https://share.transistor.fm/s/cb0e1552</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/keren-katz-ba3041189/">Keren Katz</a> exposes novel risks posed by GenAI and agentic AI while reflecting on unintended malfeasance, surprisingly common insider threats and weak security postures.</p><p><br>Keren and Kimberly discuss threats amplified by agentic AI; self-inflicted exposures observed in Fortune 500 companies; normalizing risky behavior; unintentional threats; non-determinism as a risk; users as an attack vector; the OWASP State of Agentic AI Security and Governance report; ransomware in 2025; mapping use cases and user intent; preemptive security postures; agentic behavior analysis; and proactive AI/agentic security policies and incident response plans.</p><p> </p><p><a href="https://www.linkedin.com/in/keren-katz-ba3041189/">Keren Katz</a> is Senior Group Manager of Threat Research, Product Management and AI at Tenable and a contributor to both the Open Worldwide Application Security Project (<a href="https://owasp.org/">OWASP</a>) and Forbes. Keren is a global leader in AI and cybersecurity, specializing in Generative AI threat detection.</p><p>Related Resources</p><ul><li>Article: <a href="https://www.forbes.com/councils/forbestechcouncil/2025/07/09/the-silent-breach-why-agentic-ai-demands-new-oversight/">The Silent Breach: Why Agentic AI Demands New Oversight</a></li><li>State of Agentic AI Security and Governance (whitepaper): <a href="https://genai.owasp.org/resource/state-of-agentic-ai-security-and-governance-1-0/">https://genai.owasp.org/resource/state-of-agentic-ai-security-and-governance-1-0/</a></li><li>The LLM Top 10: <a href="https://genai.owasp.org/llm-top-10/">https://genai.owasp.org/llm-top-10/</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep81/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/keren-katz-ba3041189/">Keren Katz</a> exposes novel risks posed by GenAI and agentic AI while reflecting on unintended malfeasance, surprisingly common insider threats and weak security postures.</p><p><br>Keren and Kimberly discuss threats amplified by agentic AI; self-inflicted exposures observed in Fortune 500 companies; normalizing risky behavior; unintentional threats; non-determinism as a risk; users as an attack vector; the OWASP State of Agentic AI Security and Governance report; ransomware in 2025; mapping use cases and user intent; preemptive security postures; agentic behavior analysis; and proactive AI/agentic security policies and incident response plans.</p><p> </p><p><a href="https://www.linkedin.com/in/keren-katz-ba3041189/">Keren Katz</a> is Senior Group Manager of Threat Research, Product Management and AI at Tenable and a contributor to both the Open Worldwide Application Security Project (<a href="https://owasp.org/">OWASP</a>) and Forbes. Keren is a global leader in AI and cybersecurity, specializing in Generative AI threat detection.</p><p>Related Resources</p><ul><li>Article: <a href="https://www.forbes.com/councils/forbestechcouncil/2025/07/09/the-silent-breach-why-agentic-ai-demands-new-oversight/">The Silent Breach: Why Agentic AI Demands New Oversight</a></li><li>State of Agentic AI Security and Governance (whitepaper): <a href="https://genai.owasp.org/resource/state-of-agentic-ai-security-and-governance-1-0/">https://genai.owasp.org/resource/state-of-agentic-ai-security-and-governance-1-0/</a></li><li>The LLM Top 10: <a href="https://genai.owasp.org/llm-top-10/">https://genai.owasp.org/llm-top-10/</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep81/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 15 Oct 2025 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/cb0e1552/8d3d70ea.mp3" length="46736374" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/bA2Qnft3aBBsJNKutA4bPUJdQijvEikGfjj2kH50O9U/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zMjlk/YmQ3MjBjMjNmN2U0/ZDExOTg5YTFlNGIw/OGM5OS5qcGc.jpg"/>
      <itunes:duration>2919</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/keren-katz-ba3041189/">Keren Katz</a> exposes novel risks posed by GenAI and agentic AI while reflecting on unintended malfeasance, surprisingly common insider threats and weak security postures.</p><p><br>Keren and Kimberly discuss threats amplified by agentic AI; self-inflicted exposures observed in Fortune 500 companies; normalizing risky behavior; unintentional threats; non-determinism as a risk; users as an attack vector; the OWASP State of Agentic AI Security and Governance report; ransomware in 2025; mapping use cases and user intent; preemptive security postures; agentic behavior analysis; and proactive AI/agentic security policies and incident response plans.</p><p> </p><p><a href="https://www.linkedin.com/in/keren-katz-ba3041189/">Keren Katz</a> is Senior Group Manager of Threat Research, Product Management and AI at Tenable and a contributor to both the Open Worldwide Application Security Project (<a href="https://owasp.org/">OWASP</a>) and Forbes. Keren is a global leader in AI and cybersecurity, specializing in Generative AI threat detection.</p><p>Related Resources</p><ul><li>Article: <a href="https://www.forbes.com/councils/forbestechcouncil/2025/07/09/the-silent-breach-why-agentic-ai-demands-new-oversight/">The Silent Breach: Why Agentic AI Demands New Oversight</a></li><li>State of Agentic AI Security and Governance (whitepaper): <a href="https://genai.owasp.org/resource/state-of-agentic-ai-security-and-governance-1-0/">https://genai.owasp.org/resource/state-of-agentic-ai-security-and-governance-1-0/</a></li><li>The LLM Top 10: <a href="https://genai.owasp.org/llm-top-10/">https://genai.owasp.org/llm-top-10/</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep81/transcript">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/keren-katz" img="https://img.transistorcdn.com/znW4DTALNH6o925UjiZJN266cV1ZOw-M-LwqpC07ZbA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNjcy/NTAwNGM0MDA0ZmJk/Y2IxOGRiMGY1NzU4/YTg0My5qcGc.jpg">Keren Katz</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/cb0e1552/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>To Be or Not to Be Agentic with Maximilian Vogel</title>
      <itunes:episode>80</itunes:episode>
      <podcast:episode>80</podcast:episode>
      <itunes:title>To Be or Not to Be Agentic with Maximilian Vogel</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">1bbda2ef-037f-400c-890f-1d404ae7a486</guid>
      <link>https://share.transistor.fm/s/67d78f31</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/maximilian-vogel-0539427/">Maximilian Vogel</a> dismisses tales of agentic unicorns, relying instead on human expertise, rational objectives, and rigorous design to deploy enterprise agentic systems.   </p><p><br>Maximilian and Kimberly discuss what an agentic system is (emphasis on system); why agency in agentic AI resides with humans; engineering agentic workflows; agentic AI as a mule not a unicorn; establishing confidence and accuracy; codesigning with business/domain experts; why 100% of anything is not the goal; focusing on KPIs not features; tricks to keep models from getting tricked; modeling agentic workflows on human work; live data and human-in-the-loop validation; AI agents as a support team and implications for human work. </p><p> </p><p><a href="https://www.linkedin.com/in/maximilian-vogel-0539427/">Maximilian Vogel</a> is the Co-Founder of <a href="http://big-picture.com">BIG PICTURE</a>, a digital transformation boutique specializing in the use of AI for business innovation. 
Maximilian enables the strategic deployment of safe, secure, and reliable agentic AI systems.</p><p><br>Related Resources</p><ul><li>Medium: <a href="https://medium.com/@maximilian.vogel">https://medium.com/@maximilian.vogel</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep80/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/maximilian-vogel-0539427/">Maximilian Vogel</a> dismisses tales of agentic unicorns, relying instead on human expertise, rational objectives, and rigorous design to deploy enterprise agentic systems.   </p><p><br>Maximilian and Kimberly discuss what an agentic system is (emphasis on system); why agency in agentic AI resides with humans; engineering agentic workflows; agentic AI as a mule not a unicorn; establishing confidence and accuracy; codesigning with business/domain experts; why 100% of anything is not the goal; focusing on KPIs not features; tricks to keep models from getting tricked; modeling agentic workflows on human work; live data and human-in-the-loop validation; AI agents as a support team and implications for human work. </p><p> </p><p><a href="https://www.linkedin.com/in/maximilian-vogel-0539427/">Maximilian Vogel</a> is the Co-Founder of <a href="http://big-picture.com">BIG PICTURE</a>, a digital transformation boutique specializing in the use of AI for business innovation. 
Maximilian enables the strategic deployment of safe, secure, and reliable agentic AI systems.</p><p><br>Related Resources</p><ul><li>Medium: <a href="https://medium.com/@maximilian.vogel">https://medium.com/@maximilian.vogel</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep80/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 01 Oct 2025 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/67d78f31/f0c8afb1.mp3" length="48653368" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/2kuygeLAkd3kpPcL2Shkw9cBSe9y3ge_jgHlYTizvAE/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS80YzAx/ZmQ2MGI2OTMyOWJm/ODhhZThmNzA0OGY4/NTRmOC5qcGc.jpg"/>
      <itunes:duration>3039</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/maximilian-vogel-0539427/">Maximilian Vogel</a> dismisses tales of agentic unicorns, relying instead on human expertise, rational objectives, and rigorous design to deploy enterprise agentic systems.   </p><p><br>Maximilian and Kimberly discuss what an agentic system is (emphasis on system); why agency in agentic AI resides with humans; engineering agentic workflows; agentic AI as a mule not a unicorn; establishing confidence and accuracy; codesigning with business/domain experts; why 100% of anything is not the goal; focusing on KPIs not features; tricks to keep models from getting tricked; modeling agentic workflows on human work; live data and human-in-the-loop validation; AI agents as a support team and implications for human work. </p><p> </p><p><a href="https://www.linkedin.com/in/maximilian-vogel-0539427/">Maximilian Vogel</a> is the Co-Founder of <a href="http://big-picture.com">BIG PICTURE</a>, a digital transformation boutique specializing in the use of AI for business innovation. 
Maximilian enables the strategic deployment of safe, secure, and reliable agentic AI systems.</p><p><br>Related Resources</p><ul><li>Medium: <a href="https://medium.com/@maximilian.vogel">https://medium.com/@maximilian.vogel</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep80/transcript">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/maximilian-vogel" img="https://img.transistorcdn.com/cTaVQ9PjfOsKjIXeEcPtUgWRPHfxdP3sE54BAo4B9NY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTM2/OGY3YWFhZTc4ZTdh/N2JkZjY0NjIzMGIw/NGEwNC5wbmc.jpg">Maximilian Vogel</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/67d78f31/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>The Problem of Democracy with Henrik Skaug Sætra</title>
      <itunes:episode>79</itunes:episode>
      <podcast:episode>79</podcast:episode>
      <itunes:title>The Problem of Democracy with Henrik Skaug Sætra</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">fcb663d3-dbdf-4dfb-a4fd-5f054144182e</guid>
      <link>https://share.transistor.fm/s/98d209df</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/henriksaetra/">Henrik Skaug Sætra</a> considers the basis of democracy, the nature of politics, the tilt toward digital sovereignty and what role AI plays in our collective human society.</p><p><br>Henrik and Kimberly discuss AI’s impact on human comprehension and communication; core democratic competencies at risk; politics as a joint human endeavor; conflating citizens with customers; productively messy processes; the problem of democracy; how AI could change what democracy means; whether democracy is computable; Google’s experiments in democratic AI; AI and digital sovereignty; and a multidisciplinary path forward.</p><p> </p><p><a href="https://www.linkedin.com/in/henriksaetra/">Henrik Skaug Sætra</a> is an Associate Professor of Sustainable Digitalisation and Head of the <a href="https://www.mn.uio.no/ifi/english/research/groups/rt/">Technology and Sustainable Futures</a> research group at the University of Oslo.
He is also the CEO of <a href="https://www.pathwais.eu/">Pathwais.eu</a>, connecting strategy, uncertainty, and action through scenario-based risk management.</p><p><br>Related Resources</p><ul><li>Google Scholar Profile: <a href="https://scholar.google.com/citations?user=pvgdIpUAAAAJ&amp;hl=en">https://scholar.google.com/citations?user=pvgdIpUAAAAJ&amp;hl=en</a></li><li>How to Save Democracy from AI (Book – Norwegian): <a href="https://www.norli.no/9788202853686">https://www.norli.no/9788202853686</a></li><li>AI for the Sustainable Development Goals (Book): <a href="https://www.amazon.com/AI-Sustainable-Development-Goals-Everything/dp/1032044063">https://www.amazon.com/AI-Sustainable-Development-Goals-Everything/dp/1032044063</a></li><li>Technology and Sustainable Development: The Promise and Pitfalls of Techno-Solutionism (Book): <a href="https://www.amazon.com/Technology-Sustainable-Development-Pitfalls-Techno-Solutionism-ebook/dp/B0C17RBTVL">https://www.amazon.com/Technology-Sustainable-Development-Pitfalls-Techno-Solutionism-ebook/dp/B0C17RBTVL</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep79/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/henriksaetra/">Henrik Skaug Sætra</a> considers the basis of democracy, the nature of politics, the tilt toward digital sovereignty and what role AI plays in our collective human society.</p><p><br>Henrik and Kimberly discuss AI’s impact on human comprehension and communication; core democratic competencies at risk; politics as a joint human endeavor; conflating citizens with customers; productively messy processes; the problem of democracy; how AI could change what democracy means; whether democracy is computable; Google’s experiments in democratic AI; AI and digital sovereignty; and a multidisciplinary path forward.</p><p> </p><p><a href="https://www.linkedin.com/in/henriksaetra/">Henrik Skaug Sætra</a> is an Associate Professor of Sustainable Digitalisation and Head of the <a href="https://www.mn.uio.no/ifi/english/research/groups/rt/">Technology and Sustainable Futures</a> research group at the University of Oslo.
He is also the CEO of <a href="https://www.pathwais.eu/">Pathwais.eu</a>, connecting strategy, uncertainty, and action through scenario-based risk management.</p><p><br>Related Resources</p><ul><li>Google Scholar Profile: <a href="https://scholar.google.com/citations?user=pvgdIpUAAAAJ&amp;hl=en">https://scholar.google.com/citations?user=pvgdIpUAAAAJ&amp;hl=en</a></li><li>How to Save Democracy from AI (Book – Norwegian): <a href="https://www.norli.no/9788202853686">https://www.norli.no/9788202853686</a></li><li>AI for the Sustainable Development Goals (Book): <a href="https://www.amazon.com/AI-Sustainable-Development-Goals-Everything/dp/1032044063">https://www.amazon.com/AI-Sustainable-Development-Goals-Everything/dp/1032044063</a></li><li>Technology and Sustainable Development: The Promise and Pitfalls of Techno-Solutionism (Book): <a href="https://www.amazon.com/Technology-Sustainable-Development-Pitfalls-Techno-Solutionism-ebook/dp/B0C17RBTVL">https://www.amazon.com/Technology-Sustainable-Development-Pitfalls-Techno-Solutionism-ebook/dp/B0C17RBTVL</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep79/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 17 Sep 2025 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/98d209df/250378c7.mp3" length="51934814" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/W9lolny4zeSsuGUyTgCx5i1NvWpEm6Vr6YpeYai4Jt0/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8wZDlh/NTU3YTc1NzZjOTRl/M2NiZDA1NjZkYWMx/NTkzMi5qcGc.jpg"/>
      <itunes:duration>3244</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/henriksaetra/">Henrik Skaug Sætra</a> considers the basis of democracy, the nature of politics, the tilt toward digital sovereignty and what role AI plays in our collective human society. </p><p><br>Henrik and Kimberly discuss AI’s impact on human comprehension and communication; core democratic competencies at risk; politics as a joint human endeavor; conflating citizens with customers; productively messy processes; the problem of democracy; how AI could change what democracy means; whether democracy is computable; Google’s experiments in democratic AI; AI and digital sovereignty; and a multidisciplinary path forward.  </p><p> </p><p><a href="https://www.linkedin.com/in/henriksaetra/">Henrik Skaug Sætra</a> is an Associate Professor of Sustainable Digitalisation and Head of the <a href="https://www.mn.uio.no/ifi/english/research/groups/rt/">Technology and Sustainable Futures</a> research group at the University of Oslo. 
He is also the CEO of <a href="https://www.pathwais.eu/">Pathwais.eu</a>, connecting strategy, uncertainty, and action through scenario-based risk management.</p><p><br>Related Resources</p><ul><li>Google Scholar Profile: <a href="https://scholar.google.com/citations?user=pvgdIpUAAAAJ&amp;hl=en">https://scholar.google.com/citations?user=pvgdIpUAAAAJ&amp;hl=en</a></li><li>How to Save Democracy from AI (Book – Norwegian): <a href="https://www.norli.no/9788202853686">https://www.norli.no/9788202853686</a></li><li>AI for the Sustainable Development Goals (Book): <a href="https://www.amazon.com/AI-Sustainable-Development-Goals-Everything/dp/1032044063">https://www.amazon.com/AI-Sustainable-Development-Goals-Everything/dp/1032044063</a></li><li>Technology and Sustainable Development: The Promise and Pitfalls of Techno-Solutionism (Book): <a href="https://www.amazon.com/Technology-Sustainable-Development-Pitfalls-Techno-Solutionism-ebook/dp/B0C17RBTVL">https://www.amazon.com/Technology-Sustainable-Development-Pitfalls-Techno-Solutionism-ebook/dp/B0C17RBTVL</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep79/transcript">here</a>.   </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/henrik-skaug-saetra-f889897b-40a7-4829-be06-4e4269ca7cf4" img="https://img.transistorcdn.com/hX0KmwgA12yFs1-tvJ_OC1mdQMfLTQT0EVc5GnePgP4/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYThl/YjA3OWVkZWZlYzRl/YmJkZjE4ZTAzODc3/OTdiNC5qcGc.jpg">Henrik Skaug Sætra</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/98d209df/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Generating Safety Not Abuse with Dr. Rebecca Portnoff</title>
      <itunes:episode>78</itunes:episode>
      <podcast:episode>78</podcast:episode>
      <itunes:title>Generating Safety Not Abuse with Dr. Rebecca Portnoff</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">19cb4690-9645-42d6-b8d8-45970a67e15c</guid>
      <link>https://share.transistor.fm/s/367fa72f</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/dr-rsportnoff/">Dr. Rebecca Portnoff</a> generates awareness of the threat landscape, enablers, challenges and solutions to the complex but addressable issue of online child sexual abuse.  </p><p>Rebecca and Kimberly discuss trends in online child sexual abuse; pillars of impact and harm; how GenAI expands the threat landscape; personalized targeting and bespoke abuse; Thorn’s Safety by Design Initiative; scalable prevention strategies; technical and legal barriers; standards, consensus and commitment; building better from the beginning; accountability as an innovative goal; and not confusing complex with unsolvable.  </p><p><a href="https://www.linkedin.com/in/dr-rsportnoff/">Dr. Rebecca Portnoff</a> is the Vice President of Data Science at <a href="https://thorn.org/">Thorn</a>, a non-profit dedicated to protecting children from sexual abuse. Read Thorn’s seminal <a href="https://info.thorn.org/hubfs/thorn-safety-by-design-for-generative-AI.pdf">Safety by Design paper</a>, bookmark the <a href="https://www.thorn.org/research/">Research Center</a> to stay updated, and support Thorn’s critical work by donating <a href="https://thorn.org/support/donate">here</a>. 
</p><p>Related Resources </p><ul><li>Thorn’s Safety by Design Initiative (News): <a href="https://www.thorn.org/blog/generative-ai-principles/">https://www.thorn.org/blog/generative-ai-principles/</a>  </li><li>Safety by Design Progress Reports: <a href="https://www.thorn.org/blog/thorns-safety-by-design-for-generative-ai-progress-reports/">https://www.thorn.org/blog/thorns-safety-by-design-for-generative-ai-progress-reports/</a>  </li><li>Thorn + SIO AIG-CSAM Research (Report): <a href="https://cyber.fsi.stanford.edu/io/news/ml-csam-report">https://cyber.fsi.stanford.edu/io/news/ml-csam-report</a>  </li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep78/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/dr-rsportnoff/">Dr. Rebecca Portnoff</a> generates awareness of the threat landscape, enablers, challenges and solutions to the complex but addressable issue of online child sexual abuse.  </p><p>Rebecca and Kimberly discuss trends in online child sexual abuse; pillars of impact and harm; how GenAI expands the threat landscape; personalized targeting and bespoke abuse; Thorn’s Safety by Design Initiative; scalable prevention strategies; technical and legal barriers; standards, consensus and commitment; building better from the beginning; accountability as an innovative goal; and not confusing complex with unsolvable.  </p><p><a href="https://www.linkedin.com/in/dr-rsportnoff/">Dr. Rebecca Portnoff</a> is the Vice President of Data Science at <a href="https://thorn.org/">Thorn</a>, a non-profit dedicated to protecting children from sexual abuse. Read Thorn’s seminal <a href="https://info.thorn.org/hubfs/thorn-safety-by-design-for-generative-AI.pdf">Safety by Design paper</a>, bookmark the <a href="https://www.thorn.org/research/">Research Center</a> to stay updated, and support Thorn’s critical work by donating <a href="https://thorn.org/support/donate">here</a>. 
</p><p>Related Resources </p><ul><li>Thorn’s Safety by Design Initiative (News): <a href="https://www.thorn.org/blog/generative-ai-principles/">https://www.thorn.org/blog/generative-ai-principles/</a>  </li><li>Safety by Design Progress Reports: <a href="https://www.thorn.org/blog/thorns-safety-by-design-for-generative-ai-progress-reports/">https://www.thorn.org/blog/thorns-safety-by-design-for-generative-ai-progress-reports/</a>  </li><li>Thorn + SIO AIG-CSAM Research (Report): <a href="https://cyber.fsi.stanford.edu/io/news/ml-csam-report">https://cyber.fsi.stanford.edu/io/news/ml-csam-report</a>  </li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep78/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 20 Aug 2025 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/367fa72f/4a58f0c0.mp3" length="44755283" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/LHniPDbq1iwWoxHJwmtAA-M8OhS3ucq9Bi7KHBVEvGY/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS80ZDJj/MTU0N2Q5NGUzYjFh/OWY5YzdjYzY5NDQ0/MTY0My5qcGc.jpg"/>
      <itunes:duration>2795</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/dr-rsportnoff/">Dr. Rebecca Portnoff</a> generates awareness of the threat landscape, enablers, challenges and solutions to the complex but addressable issue of online child sexual abuse.  </p><p>Rebecca and Kimberly discuss trends in online child sexual abuse; pillars of impact and harm; how GenAI expands the threat landscape; personalized targeting and bespoke abuse; Thorn’s Safety by Design Initiative; scalable prevention strategies; technical and legal barriers; standards, consensus and commitment; building better from the beginning; accountability as an innovative goal; and not confusing complex with unsolvable.  </p><p><a href="https://www.linkedin.com/in/dr-rsportnoff/">Dr. Rebecca Portnoff</a> is the Vice President of Data Science at <a href="https://thorn.org/">Thorn</a>, a non-profit dedicated to protecting children from sexual abuse. Read Thorn’s seminal <a href="https://info.thorn.org/hubfs/thorn-safety-by-design-for-generative-AI.pdf">Safety by Design paper</a>, bookmark the <a href="https://www.thorn.org/research/">Research Center</a> to stay updated, and support Thorn’s critical work by donating <a href="https://thorn.org/support/donate">here</a>. 
</p><p>Related Resources </p><ul><li>Thorn’s Safety by Design Initiative (News): <a href="https://www.thorn.org/blog/generative-ai-principles/">https://www.thorn.org/blog/generative-ai-principles/</a>  </li><li>Safety by Design Progress Reports: <a href="https://www.thorn.org/blog/thorns-safety-by-design-for-generative-ai-progress-reports/">https://www.thorn.org/blog/thorns-safety-by-design-for-generative-ai-progress-reports/</a>  </li><li>Thorn + SIO AIG-CSAM Research (Report): <a href="https://cyber.fsi.stanford.edu/io/news/ml-csam-report">https://cyber.fsi.stanford.edu/io/news/ml-csam-report</a>  </li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep78/transcript">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/dr-rebecca-portnoff" img="https://img.transistorcdn.com/X1kOhegbLfToVIWgwcmb6M7nOb3wHxrBktFM__SKwn8/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS85N2Zi/Nzg3MTY4Y2EwZmVh/NWFhYzcwNzVhZDEx/YWM5NC5qcGc.jpg">Dr. Rebecca Portnoff</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/367fa72f/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Inclusive Innovation with Hiwot Tesfaye</title>
      <itunes:episode>77</itunes:episode>
      <podcast:episode>77</podcast:episode>
      <itunes:title>Inclusive Innovation with Hiwot Tesfaye</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4744c9c9-c976-49a9-9527-5b4f63abfe2f</guid>
      <link>https://share.transistor.fm/s/ff7f07ad</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/hiwottesfaye/">Hiwot Tesfaye</a> disputes the notion of AI givers and takers, challenges innovation as an import, highlights untapped global potential, and charts a more inclusive course. </p><p><br></p><p>Hiwot and Kimberly discuss the two camps myth of inclusivity; finding innovation everywhere; meaningful AI adoption and diffusion; limitations of imported AI; digital colonialism; low-resource languages and illiterate LLMs; an Icelandic success story; situating AI in time and place; employment over automation; capacity and skill building; skeptical delight and making the case for multi-lingual, multi-cultural AI. </p><p><a href="https://www.linkedin.com/in/hiwottesfaye/">Hiwot Tesfaye</a> is a Technical Advisor in Microsoft’s Office of Responsible AI and a Loomis Council Member at the Stimson Center where she helped launch the <a href="https://www.stimson.org/project/responsible-ai-fellowship/">Global Perspectives: Responsible AI Fellowship</a>. </p><p> </p><p>Related Resources</p><ul><li><a href="https://www.youtube.com/watch?v=b4m5U88zF60">#35 Navigating AI: Ethical Challenges and Opportunities a conversation with Hiwot Tesfaye</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep77/transcript">here</a>.   </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/hiwottesfaye/">Hiwot Tesfaye</a> disputes the notion of AI givers and takers, challenges innovation as an import, highlights untapped global potential, and charts a more inclusive course. </p><p><br></p><p>Hiwot and Kimberly discuss the two camps myth of inclusivity; finding innovation everywhere; meaningful AI adoption and diffusion; limitations of imported AI; digital colonialism; low-resource languages and illiterate LLMs; an Icelandic success story; situating AI in time and place; employment over automation; capacity and skill building; skeptical delight and making the case for multi-lingual, multi-cultural AI. </p><p><a href="https://www.linkedin.com/in/hiwottesfaye/">Hiwot Tesfaye</a> is a Technical Advisor in Microsoft’s Office of Responsible AI and a Loomis Council Member at the Stimson Center where she helped launch the <a href="https://www.stimson.org/project/responsible-ai-fellowship/">Global Perspectives: Responsible AI Fellowship</a>. </p><p> </p><p>Related Resources</p><ul><li><a href="https://www.youtube.com/watch?v=b4m5U88zF60">#35 Navigating AI: Ethical Challenges and Opportunities a conversation with Hiwot Tesfaye</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep77/transcript">here</a>.   </p>]]>
      </content:encoded>
      <pubDate>Wed, 06 Aug 2025 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/ff7f07ad/37b848d4.mp3" length="48792120" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/4BcEk_j2Lskp-OMJvdKmKYCE4JWi9DrTtHeNIW0Wx_4/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81YTc1/YTViZDJlNjBmOThh/OTc0OTMzZmQwZmRh/ZjI5Mi5qcGc.jpg"/>
      <itunes:duration>3048</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/hiwottesfaye/">Hiwot Tesfaye</a> disputes the notion of AI givers and takers, challenges innovation as an import, highlights untapped global potential, and charts a more inclusive course. </p><p><br></p><p>Hiwot and Kimberly discuss the two camps myth of inclusivity; finding innovation everywhere; meaningful AI adoption and diffusion; limitations of imported AI; digital colonialism; low-resource languages and illiterate LLMs; an Icelandic success story; situating AI in time and place; employment over automation; capacity and skill building; skeptical delight and making the case for multi-lingual, multi-cultural AI. </p><p><a href="https://www.linkedin.com/in/hiwottesfaye/">Hiwot Tesfaye</a> is a Technical Advisor in Microsoft’s Office of Responsible AI and a Loomis Council Member at the Stimson Center where she helped launch the <a href="https://www.stimson.org/project/responsible-ai-fellowship/">Global Perspectives: Responsible AI Fellowship</a>. </p><p> </p><p>Related Resources</p><ul><li><a href="https://www.youtube.com/watch?v=b4m5U88zF60">#35 Navigating AI: Ethical Challenges and Opportunities a conversation with Hiwot Tesfaye</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep77/transcript">here</a>.   </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/hiwot-tesfaye" img="https://img.transistorcdn.com/UaMJQLFn4PGPmlLnoeGmV_C4mt_R2Ag2D82q9VbtCUQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81M2Fh/NWFkMzgwNzdmNGFj/OTRjMDU0NDVlMTJh/OTRiZC5qcGc.jpg">Hiwot Tesfaye</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/ff7f07ad/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>The Shape of Synthetic Data with Dietmar Offenhuber</title>
      <itunes:episode>76</itunes:episode>
      <podcast:episode>76</podcast:episode>
      <itunes:title>The Shape of Synthetic Data with Dietmar Offenhuber</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">5d32cc92-8482-4f28-a096-5882493b5709</guid>
      <link>https://share.transistor.fm/s/7adba68f</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/dietmar-offenhuber-aa3369/">Dietmar Offenhuber</a> reflects on synthetic data’s break from reality, relates meaning to material use, and embraces data as a speculative and often non-digital artifact.  </p><p>Dietmar and Kimberly discuss data as a representation of reality; divorcing content from meaning; data settings vs. data sets; synthetic data quality and ground truth; data as a speculative artifact; the value in noise; data materiality and accountability; rethinking data literacy; Instagram data realities; non-digital computing and going beyond statistical analysis.  </p><p><a href="https://www.linkedin.com/in/dietmar-offenhuber-aa3369/">Dietmar Offenhuber</a> is a Professor and Department Chair of Art+Design at Northeastern University. Dietmar researches the material, sensory and social implications of environmental information and evidence construction.  </p><p>Related Resources </p><ul><li>Shapes and Frictions of Synthetic Data (paper): <a href="https://journals.sagepub.com/doi/10.1177/20539517241249390">https://journals.sagepub.com/doi/10.1177/20539517241249390</a>  </li><li>Autographic Design: The Matter of Data in a Self-Inscribing World (book): <a href="https://autographic.design/">https://autographic.design/</a>  </li><li>Reservoirs of Venice (project): <a href="https://res-venice.github.io/">https://res-venice.github.io/</a> </li><li>Website: <a href="https://offenhuber.net/">https://offenhuber.net/</a> </li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep76/transcript">here</a>.    </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/dietmar-offenhuber-aa3369/">Dietmar Offenhuber</a> reflects on synthetic data’s break from reality, relates meaning to material use, and embraces data as a speculative and often non-digital artifact.  </p><p>Dietmar and Kimberly discuss data as a representation of reality; divorcing content from meaning; data settings vs. data sets; synthetic data quality and ground truth; data as a speculative artifact; the value in noise; data materiality and accountability; rethinking data literacy; Instagram data realities; non-digital computing and going beyond statistical analysis.  </p><p><a href="https://www.linkedin.com/in/dietmar-offenhuber-aa3369/">Dietmar Offenhuber</a> is a Professor and Department Chair of Art+Design at Northeastern University. Dietmar researches the material, sensory and social implications of environmental information and evidence construction.  </p><p>Related Resources </p><ul><li>Shapes and Frictions of Synthetic Data (paper): <a href="https://journals.sagepub.com/doi/10.1177/20539517241249390">https://journals.sagepub.com/doi/10.1177/20539517241249390</a>  </li><li>Autographic Design: The Matter of Data in a Self-Inscribing World (book): <a href="https://autographic.design/">https://autographic.design/</a>  </li><li>Reservoirs of Venice (project): <a href="https://res-venice.github.io/">https://res-venice.github.io/</a> </li><li>Website: <a href="https://offenhuber.net/">https://offenhuber.net/</a> </li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep76/transcript">here</a>.    </p>]]>
      </content:encoded>
      <pubDate>Wed, 23 Jul 2025 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/7adba68f/bac5e03f.mp3" length="50089265" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/qDMxCzbOyglSGNdwAclu7O3Iy7NZXv2nDRigB_XwhRQ/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8wZjY4/OTg3NzY5ZGY0NjFh/NmFhYjAwMjc0OTdm/Nzc0Zi5qcGc.jpg"/>
      <itunes:duration>3127</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/dietmar-offenhuber-aa3369/">Dietmar Offenhuber</a> reflects on synthetic data’s break from reality, relates meaning to material use, and embraces data as a speculative and often non-digital artifact.  </p><p>Dietmar and Kimberly discuss data as a representation of reality; divorcing content from meaning; data settings vs. data sets; synthetic data quality and ground truth; data as a speculative artifact; the value in noise; data materiality and accountability; rethinking data literacy; Instagram data realities; non-digital computing and going beyond statistical analysis.  </p><p><a href="https://www.linkedin.com/in/dietmar-offenhuber-aa3369/">Dietmar Offenhuber</a> is a Professor and Department Chair of Art+Design at Northeastern University. Dietmar researches the material, sensory and social implications of environmental information and evidence construction.  </p><p>Related Resources </p><ul><li>Shapes and Frictions of Synthetic Data (paper): <a href="https://journals.sagepub.com/doi/10.1177/20539517241249390">https://journals.sagepub.com/doi/10.1177/20539517241249390</a>  </li><li>Autographic Design: The Matter of Data in a Self-Inscribing World (book): <a href="https://autographic.design/">https://autographic.design/</a>  </li><li>Reservoirs of Venice (project): <a href="https://res-venice.github.io/">https://res-venice.github.io/</a> </li><li>Website: <a href="https://offenhuber.net/">https://offenhuber.net/</a> </li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep76/transcript">here</a>.    </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/dietmar-offenhuber-phd" img="https://img.transistorcdn.com/EJEf_5XhSlLud3RNDGI2rp-R4AZzHjY2FKKXMwzsb1g/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9jYjc3/ZGNiMmViMTRmODJj/NWExNzcyMDZhN2Ix/M2M2NS5qcGc.jpg">Dietmar Offenhuber, PhD</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/7adba68f/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>A Question of Humanity with Pia Lauritzen, PhD</title>
      <itunes:episode>75</itunes:episode>
      <podcast:episode>75</podcast:episode>
      <itunes:title>A Question of Humanity with Pia Lauritzen, PhD</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">245734f8-06e4-4788-9376-b89f750d4e36</guid>
      <link>https://share.transistor.fm/s/ed8b6435</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/pia-lauritzen/">Pia Lauritzen</a> questions our use of questions, the nature of humanity, the premise of AGI, the essence of tech, whether humans can be optimized and why thinking is required. </p><p><br>Pia and Kimberly discuss the function of questions, curiosity as a basic human feature, AI as an answer machine, why humans think, the contradiction at the heart of AGI, grappling with the three big Es, the fallacy of human optimization, respecting humanity, Heidegger’s eerily precise predictions, the skill of critical thinking, and why it’s not really about the questions at all. </p><p><br><a href="https://www.linkedin.com/in/pia-lauritzen/">Pia Lauritzen, PhD</a> is a philosopher, author and tech inventor asking big questions about tech and transformation. As the CEO and Founder of <a href="https://qvest.io/">Qvest</a> and a <a href="https://thinkers50.com/biographies/pia-lauritzen/">Thinkers50</a> Radar Member, Pia is on a mission to democratize the power of questions. 
</p><p><br>Related Resources</p><ul><li>Questions (Book): <a href="https://www.press.jhu.edu/books/title/23069/questions">https://www.press.jhu.edu/books/title/23069/questions</a> </li><li>TEDx Talk: <a href="https://www.ted.com/talks/pia_lauritzen_what_you_don_t_know_about_questions">https://www.ted.com/talks/pia_lauritzen_what_you_don_t_know_about_questions</a> </li><li>Question Jam: <a href="http://www.questionjam.com">www.questionjam.com</a></li><li>Forbes Column: <a href="http://www.forbes.com/sites/pialauritzen">forbes.com/sites/pialauritzen</a> </li><li>LinkedIn Learning: <a href="http://www.Linkedin.com/learning/pialauritzen">www.Linkedin.com/learning/pialauritzen</a> </li><li>Personal Website: <a href="http://www.pialauritzen.dk/">pialauritzen.dk </a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep75/transcript">here</a>.   </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/pia-lauritzen/">Pia Lauritzen</a> questions our use of questions, the nature of humanity, the premise of AGI, the essence of tech, whether humans can be optimized and why thinking is required. </p><p><br>Pia and Kimberly discuss the function of questions, curiosity as a basic human feature, AI as an answer machine, why humans think, the contradiction at the heart of AGI, grappling with the three big Es, the fallacy of human optimization, respecting humanity, Heidegger’s eerily precise predictions, the skill of critical thinking, and why it’s not really about the questions at all. </p><p><br><a href="https://www.linkedin.com/in/pia-lauritzen/">Pia Lauritzen, PhD</a> is a philosopher, author and tech inventor asking big questions about tech and transformation. As the CEO and Founder of <a href="https://qvest.io/">Qvest</a> and a <a href="https://thinkers50.com/biographies/pia-lauritzen/">Thinkers50</a> Radar Member, Pia is on a mission to democratize the power of questions. 
</p><p><br>Related Resources</p><ul><li>Questions (Book): <a href="https://www.press.jhu.edu/books/title/23069/questions">https://www.press.jhu.edu/books/title/23069/questions</a> </li><li>TEDx Talk: <a href="https://www.ted.com/talks/pia_lauritzen_what_you_don_t_know_about_questions">https://www.ted.com/talks/pia_lauritzen_what_you_don_t_know_about_questions</a> </li><li>Question Jam: <a href="http://www.questionjam.com">www.questionjam.com</a></li><li>Forbes Column: <a href="http://www.forbes.com/sites/pialauritzen">forbes.com/sites/pialauritzen</a> </li><li>LinkedIn Learning: <a href="http://www.Linkedin.com/learning/pialauritzen">www.Linkedin.com/learning/pialauritzen</a> </li><li>Personal Website: <a href="http://www.pialauritzen.dk/">pialauritzen.dk </a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep75/transcript">here</a>.   </p>]]>
      </content:encoded>
      <pubDate>Wed, 09 Jul 2025 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/ed8b6435/f224da07.mp3" length="53589461" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/mKxrEHWNqHRQpGtHjmtfJbjwkhPdS_wbwRI_BNcGr_g/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84N2E2/YzczYjM2MTgyZGQ4/MTU3NDk1NTE3N2Nk/ZTBlMy5qcGc.jpg"/>
      <itunes:duration>3348</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/pia-lauritzen/">Pia Lauritzen</a> questions our use of questions, the nature of humanity, the premise of AGI, the essence of tech, if humans can be optimized and why thinking is required. </p><p><br>Pia and Kimberly discuss the function of questions, curiosity as a basic human feature, AI as an answer machine, why humans think, the contradiction at the heart of AGI, grappling with the three big Es, the fallacy of human optimization, respecting humanity, Heidegger’s eerily precise predictions, the skill of critical thinking, and why it’s not really about the questions at all. </p><p><br><a href="https://www.linkedin.com/in/pia-lauritzen/">Pia Lauritzen, PhD</a> is a philosopher, author and tech inventor asking big questions about tech and transformation. As the CEO and Founder of <a href="https://qvest.io">Qvest</a> and a <a href="https://thinkers50.com/biographies/pia-lauritzen/">Thinkers50</a> Radar Member, Pia is on a mission to democratize the power of questions. 
</p><p><br>Related Resources</p><ul><li>Questions (Book): <a href="https://www.press.jhu.edu/books/title/23069/questions">https://www.press.jhu.edu/books/title/23069/questions</a> </li><li>TEDx Talk: <a href="https://www.ted.com/talks/pia_lauritzen_what_you_don_t_know_about_questions">https://www.ted.com/talks/pia_lauritzen_what_you_don_t_know_about_questions</a> </li><li>Question Jam: <a href="http://www.questionjam.com">www.questionjam.com</a></li><li>Forbes Column: <a href="http://www.forbes.com/sites/pialauritzen">forbes.com/sites/pialauritzen</a> </li><li>LinkedIn Learning: <a href="http://www.Linkedin.com/learning/pialauritzen">www.Linkedin.com/learning/pialauritzen</a> </li><li>Personal Website: <a href="http://www.pialauritzen.dk/">pialauritzen.dk </a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep75/transcript">here</a>.   </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/pia-lauritzen" img="https://img.transistorcdn.com/eDmwAzjf8tlPTUIMMOzf5J_sTAEiGuGpZnCCam-D3X0/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lZjQy/MmY5NTYxYjk3N2Vk/MDRiOWRjMzFhMGQ3/MWE4OC5wbmc.jpg">Pia Lauritzen</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/ed8b6435/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>A Healthier AI Narrative with Michael Strange</title>
      <itunes:episode>74</itunes:episode>
      <podcast:episode>74</podcast:episode>
      <itunes:title>A Healthier AI Narrative with Michael Strange</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">9607ad9c-e622-4a3e-968d-07eae625b782</guid>
      <link>https://share.transistor.fm/s/3b9d75b0</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/michael-strange-83a6a45/">Michael Strange</a> has a healthy appreciation for complexity, diagnoses hype as antithetical to innovation and prescribes an interdisciplinary approach to making AI well.  </p><p>Michael and Kimberly discuss whether AI is good for healthcare; healthcare as a global system; radical shifts precipitated by the pandemic; why hype stifles nuance and innovation; how science works; the complexity of the human condition; human well-being vs. health; the limits of quantification; who is missing in healthcare and health data; the political-economy and material impacts of AI as infrastructure; the doctor in the loophole; the humility required to design healthy AI tools and create a resilient, holistic healthcare system. </p><p><a href="https://www.linkedin.com/in/michael-strange-83a6a45/">Michael Strange</a> is an Associate Professor in the Dept of Global Political Affairs at Malmö University focusing on core questions of political agency and democratic engagement. In this context he works on Artificial Intelligence, health, trade, and migration. Michael directed the Precision Health &amp; Everyday Democracy (PHED) Commission and serves on the board of two research centres: Citizen Health and the ICF (Imagining and Co-creating Futures). 
</p><p>Related Resources </p><ul><li>If AI is to Heal Our Healthcare Systems, We Need to Redesign How AI Is Developed (article): <a href="https://www.techpolicy.press/if-ai-is-to-heal-our-healthcare-systems-we-need-to-redesign-how-ai-itself-is-developed/">https://www.techpolicy.press/if-ai-is-to-heal-our-healthcare-systems-we-need-to-redesign-how-ai-itself-is-developed/</a> </li><li>Beyond ‘Our product is trusted!’ – A processual approach to trust in AI healthcare (paper): <a href="https://mau.diva-portal.org/smash/record.jsf?pid=diva2%3A1914539">https://mau.diva-portal.org/smash/record.jsf?pid=diva2%3A1914539</a> </li><li>Michael Strange (website): <a href="https://mau.se/en/persons/michael.strange/">https://mau.se/en/persons/michael.strange/</a> </li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep74/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/michael-strange-83a6a45/">Michael Strange</a> has a healthy appreciation for complexity, diagnoses hype as antithetical to innovation and prescribes an interdisciplinary approach to making AI well.  </p><p>Michael and Kimberly discuss whether AI is good for healthcare; healthcare as a global system; radical shifts precipitated by the pandemic; why hype stifles nuance and innovation; how science works; the complexity of the human condition; human well-being vs. health; the limits of quantification; who is missing in healthcare and health data; the political-economy and material impacts of AI as infrastructure; the doctor in the loophole; the humility required to design healthy AI tools and create a resilient, holistic healthcare system. </p><p><a href="https://www.linkedin.com/in/michael-strange-83a6a45/">Michael Strange</a> is an Associate Professor in the Dept of Global Political Affairs at Malmö University focusing on core questions of political agency and democratic engagement. In this context he works on Artificial Intelligence, health, trade, and migration. Michael directed the Precision Health &amp; Everyday Democracy (PHED) Commission and serves on the board of two research centres: Citizen Health and the ICF (Imagining and Co-creating Futures). 
</p><p>Related Resources </p><ul><li>If AI is to Heal Our Healthcare Systems, We Need to Redesign How AI Is Developed (article): <a href="https://www.techpolicy.press/if-ai-is-to-heal-our-healthcare-systems-we-need-to-redesign-how-ai-itself-is-developed/">https://www.techpolicy.press/if-ai-is-to-heal-our-healthcare-systems-we-need-to-redesign-how-ai-itself-is-developed/</a> </li><li>Beyond ‘Our product is trusted!’ – A processual approach to trust in AI healthcare (paper): <a href="https://mau.diva-portal.org/smash/record.jsf?pid=diva2%3A1914539">https://mau.diva-portal.org/smash/record.jsf?pid=diva2%3A1914539</a> </li><li>Michael Strange (website): <a href="https://mau.se/en/persons/michael.strange/">https://mau.se/en/persons/michael.strange/</a> </li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep74/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 25 Jun 2025 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/3b9d75b0/587202f3.mp3" length="57485707" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/lkjbLkFa63y-8VMWqR8ay1_LMgJC_mQVVTtQi8o3LX8/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xZDg5/ZDkzOTQ2ODc3YWJj/OWY4NWI3MWVhY2M4/NzQ5My5qcGc.jpg"/>
      <itunes:duration>3591</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/michael-strange-83a6a45/">Michael Strange</a> has a healthy appreciation for complexity, diagnoses hype as antithetical to innovation and prescribes an interdisciplinary approach to making AI well.  </p><p>Michael and Kimberly discuss whether AI is good for healthcare; healthcare as a global system; radical shifts precipitated by the pandemic; why hype stifles nuance and innovation; how science works; the complexity of the human condition; human well-being vs. health; the limits of quantification; who is missing in healthcare and health data; the political-economy and material impacts of AI as infrastructure; the doctor in the loophole; the humility required to design healthy AI tools and create a resilient, holistic healthcare system. </p><p><a href="https://www.linkedin.com/in/michael-strange-83a6a45/">Michael Strange</a> is an Associate Professor in the Dept of Global Political Affairs at Malmö University focusing on core questions of political agency and democratic engagement. In this context he works on Artificial Intelligence, health, trade, and migration. Michael directed the Precision Health &amp; Everyday Democracy (PHED) Commission and serves on the board of two research centres: Citizen Health and the ICF (Imagining and Co-creating Futures). 
</p><p>Related Resources </p><ul><li>If AI is to Heal Our Healthcare Systems, We Need to Redesign How AI Is Developed (article): <a href="https://www.techpolicy.press/if-ai-is-to-heal-our-healthcare-systems-we-need-to-redesign-how-ai-itself-is-developed/">https://www.techpolicy.press/if-ai-is-to-heal-our-healthcare-systems-we-need-to-redesign-how-ai-itself-is-developed/</a> </li><li>Beyond ‘Our product is trusted!’ – A processual approach to trust in AI healthcare (paper): <a href="https://mau.diva-portal.org/smash/record.jsf?pid=diva2%3A1914539">https://mau.diva-portal.org/smash/record.jsf?pid=diva2%3A1914539</a> </li><li>Michael Strange (website): <a href="https://mau.se/en/persons/michael.strange/">https://mau.se/en/persons/michael.strange/</a> </li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep74/transcript">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/dr-michael-strange" img="https://img.transistorcdn.com/6P1vFuL50GSbPPMEyaQwDLNm55fProCxDXzkJpo_qms/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS85Mjhk/ZGUzYmIzYTI2MDFl/ODMxZjEwNmFmOTU4/YjQyNy5qcGVn.jpg">Dr. Michael Strange</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/3b9d75b0/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>LLMs Are Useful Liars with Andriy Burkov</title>
      <itunes:episode>73</itunes:episode>
      <podcast:episode>73</podcast:episode>
      <itunes:title>LLMs Are Useful Liars with Andriy Burkov</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">1b8a1d02-f2cf-4a35-8200-e5f8c4ea8652</guid>
      <link>https://share.transistor.fm/s/707c83a3</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/andriyburkov/">Andriy Burkov</a> talks down dishonest hype and sets realistic expectations for when LLMs, if properly and critically applied, are useful. Although maybe not as AI agents.  </p><p>Andriy and Kimberly discuss how he uses LLMs as an author; LLMs as unapologetic liars; how opaque training data impacts usability; not knowing if LLMs will save time or waste it; error-prone domains; when language fluency is useless; how expertise maximizes benefit; when some idea is better than no idea; limits of RAG; how LLMs go off the rails; why prompt engineering is not enough; using LLMs for rapid prototyping; and whether language models make good AI agents (in the strictest sense of the word). </p><p><a href="https://www.linkedin.com/in/andriyburkov/">Andriy Burkov</a> holds a PhD in Artificial Intelligence and is the author of The Hundred Page Machine Learning and Language Models books. His <a href="https://www.linkedin.com/newsletters/artificial-intelligence-6598352935271358464/">Artificial Intelligence Newsletter</a> reaches 870,000+ subscribers. Andriy was previously the Machine Learning Lead at Talent Neuron and the Director of Data Science (ML) at Gartner. He has never been a Ukrainian footballer. </p><p>Related Resources </p><ul><li>The Hundred Page Language Models Book: <a href="https://thelmbook.com/">https://thelmbook.com/</a> </li><li>The Hundred Page Machine Learning Book: <a href="https://themlbook.com/">https://themlbook.com/</a>  </li><li>True Positive Weekly (newsletter): <a href="https://aiweekly.substack.com/">https://aiweekly.substack.com/</a> </li></ul><p> A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep73/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/andriyburkov/">Andriy Burkov</a> talks down dishonest hype and sets realistic expectations for when LLMs, if properly and critically applied, are useful. Although maybe not as AI agents.  </p><p>Andriy and Kimberly discuss how he uses LLMs as an author; LLMs as unapologetic liars; how opaque training data impacts usability; not knowing if LLMs will save time or waste it; error-prone domains; when language fluency is useless; how expertise maximizes benefit; when some idea is better than no idea; limits of RAG; how LLMs go off the rails; why prompt engineering is not enough; using LLMs for rapid prototyping; and whether language models make good AI agents (in the strictest sense of the word). </p><p><a href="https://www.linkedin.com/in/andriyburkov/">Andriy Burkov</a> holds a PhD in Artificial Intelligence and is the author of The Hundred Page Machine Learning and Language Models books. His <a href="https://www.linkedin.com/newsletters/artificial-intelligence-6598352935271358464/">Artificial Intelligence Newsletter</a> reaches 870,000+ subscribers. Andriy was previously the Machine Learning Lead at Talent Neuron and the Director of Data Science (ML) at Gartner. He has never been a Ukrainian footballer. </p><p>Related Resources </p><ul><li>The Hundred Page Language Models Book: <a href="https://thelmbook.com/">https://thelmbook.com/</a> </li><li>The Hundred Page Machine Learning Book: <a href="https://themlbook.com/">https://themlbook.com/</a>  </li><li>True Positive Weekly (newsletter): <a href="https://aiweekly.substack.com/">https://aiweekly.substack.com/</a> </li></ul><p> A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep73/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 11 Jun 2025 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/707c83a3/18457816.mp3" length="45147481" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/TDSV-7KgCDp1iwXH_jroa6mX0c0DGercJRGZujOEZvU/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yMDkz/MGMzZGEyZWExYmY2/MGQ0NzYzZTBkODFl/MzIwYS5qcGc.jpg"/>
      <itunes:duration>2820</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/andriyburkov/">Andriy Burkov</a> talks down dishonest hype and sets realistic expectations for when LLMs, if properly and critically applied, are useful. Although maybe not as AI agents.  </p><p>Andriy and Kimberly discuss how he uses LLMs as an author; LLMs as unapologetic liars; how opaque training data impacts usability; not knowing if LLMs will save time or waste it; error-prone domains; when language fluency is useless; how expertise maximizes benefit; when some idea is better than no idea; limits of RAG; how LLMs go off the rails; why prompt engineering is not enough; using LLMs for rapid prototyping; and whether language models make good AI agents (in the strictest sense of the word). </p><p><a href="https://www.linkedin.com/in/andriyburkov/">Andriy Burkov</a> holds a PhD in Artificial Intelligence and is the author of The Hundred Page Machine Learning and Language Models books. His <a href="https://www.linkedin.com/newsletters/artificial-intelligence-6598352935271358464/">Artificial Intelligence Newsletter</a> reaches 870,000+ subscribers. Andriy was previously the Machine Learning Lead at Talent Neuron and the Director of Data Science (ML) at Gartner. He has never been a Ukrainian footballer. </p><p>Related Resources </p><ul><li>The Hundred Page Language Models Book: <a href="https://thelmbook.com/">https://thelmbook.com/</a> </li><li>The Hundred Page Machine Learning Book: <a href="https://themlbook.com/">https://themlbook.com/</a>  </li><li>True Positive Weekly (newsletter): <a href="https://aiweekly.substack.com/">https://aiweekly.substack.com/</a> </li></ul><p> A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep73/transcript">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/andriy-burkov-phd" img="https://img.transistorcdn.com/WZ0BAUvDMHD2uVDMcGIVLtFT6O1l7dNtMtjKtmAWrQo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS83YTYx/ZjFiOWYzN2Q2Y2Q5/N2YyODNkNmU5MzFi/MGRkZS5qcGc.jpg">Andriy Burkov, PhD</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/707c83a3/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Reframing Responsible AI with Ravit Dotan</title>
      <itunes:episode>72</itunes:episode>
      <podcast:episode>72</podcast:episode>
      <itunes:title>Reframing Responsible AI with Ravit Dotan</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">6dedf6e2-4f2d-4e30-a8eb-00ee6d807248</guid>
      <link>https://share.transistor.fm/s/96cb993b</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/ravit-dotan/">Ravit Dotan, PhD</a> asserts that beneficial AI adoption requires clarity of purpose, good judgment, ethical leadership, and making responsibility integral to innovation. </p><p><br>Ravit and Kimberly discuss the philosophy of science; why all algorithms incorporate values; how technical judgements centralize power; not exempting AI from established norms; when lists of risks lead us astray; wasting water, eating meat, and using AI responsibly; corporate ethics washing; patterns of ethical decoupling; reframing the relationship between responsibility and innovation; measuring what matters; and the next phase of ethical innovation in practice.  </p><p><br><a href="https://www.linkedin.com/in/ravit-dotan/">Ravit Dotan, PhD</a> is an AI ethics researcher and governance advisor on a mission to enable everyone to adopt AI the right way. The Founder and CEO of <a href="https://techbetter.ai">TechBetter</a>, Ravit holds a PhD in Philosophy from UC Berkeley and is a sought-after advisor on the topic of responsible innovation.</p><p>Related Resources</p><ul><li>The AI Treasure Chest (Substack): <a href="https://techbetter.substack.com/">https://techbetter.substack.com/</a></li><li>The Values Embedded in Machine Learning Research (Paper): <a href="https://dl.acm.org/doi/fullHtml/10.1145/3531146.3533083">https://dl.acm.org/doi/fullHtml/10.1145/3531146.3533083</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep72/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/ravit-dotan/">Ravit Dotan, PhD</a> asserts that beneficial AI adoption requires clarity of purpose, good judgment, ethical leadership, and making responsibility integral to innovation. </p><p><br>Ravit and Kimberly discuss the philosophy of science; why all algorithms incorporate values; how technical judgements centralize power; not exempting AI from established norms; when lists of risks lead us astray; wasting water, eating meat, and using AI responsibly; corporate ethics washing; patterns of ethical decoupling; reframing the relationship between responsibility and innovation; measuring what matters; and the next phase of ethical innovation in practice.  </p><p><br><a href="https://www.linkedin.com/in/ravit-dotan/">Ravit Dotan, PhD</a> is an AI ethics researcher and governance advisor on a mission to enable everyone to adopt AI the right way. The Founder and CEO of <a href="https://techbetter.ai">TechBetter</a>, Ravit holds a PhD in Philosophy from UC Berkeley and is a sought-after advisor on the topic of responsible innovation.</p><p>Related Resources</p><ul><li>The AI Treasure Chest (Substack): <a href="https://techbetter.substack.com/">https://techbetter.substack.com/</a></li><li>The Values Embedded in Machine Learning Research (Paper): <a href="https://dl.acm.org/doi/fullHtml/10.1145/3531146.3533083">https://dl.acm.org/doi/fullHtml/10.1145/3531146.3533083</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep72/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 28 May 2025 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/96cb993b/3c0e8bc0.mp3" length="57548784" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/iedDtfy972Vvi-v5WPcB30URWQ4UFOrsEb28MB541Ao/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNGUx/OGRjYjg5OWE3MjQ0/NDE1YTBiNWY4YjJk/ZjdlMi5qcGc.jpg"/>
      <itunes:duration>3595</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/ravit-dotan/">Ravit Dotan, PhD</a> asserts that beneficial AI adoption requires clarity of purpose, good judgment, ethical leadership, and making responsibility integral to innovation. </p><p><br>Ravit and Kimberly discuss the philosophy of science; why all algorithms incorporate values; how technical judgements centralize power; not exempting AI from established norms; when lists of risks lead us astray; wasting water, eating meat, and using AI responsibly; corporate ethics washing; patterns of ethical decoupling; reframing the relationship between responsibility and innovation; measuring what matters; and the next phase of ethical innovation in practice.  </p><p><br><a href="https://www.linkedin.com/in/ravit-dotan/">Ravit Dotan, PhD</a> is an AI ethics researcher and governance advisor on a mission to enable everyone to adopt AI the right way. The Founder and CEO of <a href="https://techbetter.ai">TechBetter</a>, Ravit holds a PhD in Philosophy from UC Berkeley and is a sought-after advisor on the topic of responsible innovation.</p><p>Related Resources</p><ul><li>The AI Treasure Chest (Substack): <a href="https://techbetter.substack.com/">https://techbetter.substack.com/</a></li><li>The Values Embedded in Machine Learning Research (Paper): <a href="https://dl.acm.org/doi/fullHtml/10.1145/3531146.3533083">https://dl.acm.org/doi/fullHtml/10.1145/3531146.3533083</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep72/transcript">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/ravit-dotan-phd" img="https://img.transistorcdn.com/b1rGET4mI5meHtZRy5A16Yy7i0fEhBPUmpuEUvRyUzs/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81Njli/MDQ5YTk4NzUyZDli/OTExOTVjNzUyZWJm/MDAwOS5qcGVn.jpg">Ravit Dotan, PhD</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/96cb993b/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Stories We Tech with Dr. Ash Watson</title>
      <itunes:episode>71</itunes:episode>
      <podcast:episode>71</podcast:episode>
      <itunes:title>Stories We Tech with Dr. Ash Watson</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">6fa7e9fd-35cf-469b-9ffb-aec85b63f2b2</guid>
      <link>https://share.transistor.fm/s/dda6cc61</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/awtsn/">Dr. Ash Watson</a> studies how stories ranging from classic Sci-Fi to modern tales invoking moral imperatives, dystopian futures and economic logic shape our views of AI. </p><p><br>Ash and Kimberly discuss the influence of old Sci-Fi on modern tech; why we can’t escape the stories we’re told; how technology shapes society; acting in ways a machine will understand; why the language we use matters; value transference from humans to AI systems; the promise of AI’s promise; grounding AI discourse in material realities; moral imperatives and capitalizing on crises; economic investment as social logic; AI’s claims to innovation; who innovation is really for; and positive developments in co-design and participatory research.  </p><p><br><a href="https://www.linkedin.com/in/awtsn/">Dr. Ash Watson</a> is a Scientia Fellow and Senior Lecturer at the Centre for Social Research in Health at UNSW Sydney. She is also an Affiliate of the Australian Research Council (<a href="https://www.arc.gov.au/">ARC</a>) Centre of Excellence for Automated Decision-Making and Society (CADMS). </p><p><br>Related Resources:</p><ul><li>Ash Watson (Website): <a href="https://awtsn.com/">https://awtsn.com/</a></li><li>The promise of artificial intelligence in health: Portrayals of emerging healthcare technologies (Article): <a href="https://doi.org/10.1111/1467-9566.13840">https://doi.org/10.1111/1467-9566.13840</a></li><li>An imperative to innovate? Crisis in the sociotechnical imaginary (Article): <a href="https://doi.org/10.1016/j.tele.2024.102229">https://doi.org/10.1016/j.tele.2024.102229</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep71/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/awtsn/">Dr. Ash Watson</a> studies how stories ranging from classic Sci-Fi to modern tales invoking moral imperatives, dystopian futures and economic logic shape our views of AI. </p><p><br>Ash and Kimberly discuss the influence of old Sci-Fi on modern tech; why we can’t escape the stories we’re told; how technology shapes society; acting in ways a machine will understand; why the language we use matters; value transference from humans to AI systems; the promise of AI’s promise; grounding AI discourse in material realities; moral imperatives and capitalizing on crises; economic investment as social logic; AI’s claims to innovation; who innovation is really for; and positive developments in co-design and participatory research.  </p><p><br><a href="https://www.linkedin.com/in/awtsn/">Dr. Ash Watson</a> is a Scientia Fellow and Senior Lecturer at the Centre for Social Research in Health at UNSW Sydney. She is also an Affiliate of the Australian Research Council (<a href="https://www.arc.gov.au/">ARC</a>) Centre of Excellence for Automated Decision-Making and Society (CADMS). </p><p><br>Related Resources:</p><ul><li>Ash Watson (Website): <a href="https://awtsn.com/">https://awtsn.com/</a></li><li>The promise of artificial intelligence in health: Portrayals of emerging healthcare technologies (Article): <a href="https://doi.org/10.1111/1467-9566.13840">https://doi.org/10.1111/1467-9566.13840</a></li><li>An imperative to innovate? Crisis in the sociotechnical imaginary (Article): <a href="https://doi.org/10.1016/j.tele.2024.102229">https://doi.org/10.1016/j.tele.2024.102229</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep71/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 14 May 2025 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/dda6cc61/7d5a8562.mp3" length="45979620" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/jEMNwSKHWksxhGu6Oyndy988QRRlAioOImAlJpMZEa8/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81OGFk/ODA4NTNkOWQ2Y2Qy/NzdlYzUzOGE5MTc1/OGMwYi5qcGc.jpg"/>
      <itunes:duration>2872</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/awtsn/">Dr. Ash Watson</a> studies how stories ranging from classic Sci-Fi to modern tales invoking moral imperatives, dystopian futures and economic logic shape our views of AI. </p><p><br>Ash and Kimberly discuss the influence of old Sci-Fi on modern tech; why we can’t escape the stories we’re told; how technology shapes society; acting in ways a machine will understand; why the language we use matters; value transference from humans to AI systems; the promise of AI’s promise; grounding AI discourse in material realities; moral imperatives and capitalizing on crises; economic investment as social logic; AI’s claims to innovation; who innovation is really for; and positive developments in co-design and participatory research.  </p><p><br><a href="https://www.linkedin.com/in/awtsn/">Dr. Ash Watson</a> is a Scientia Fellow and Senior Lecturer at the Centre for Social Research in Health at UNSW Sydney. She is also an Affiliate of the Australian Research Council (<a href="https://www.arc.gov.au/">ARC</a>) Centre of Excellence for Automated Decision-Making and Society (CADMS). </p><p><br>Related Resources:</p><ul><li>Ash Watson (Website): <a href="https://awtsn.com/">https://awtsn.com/</a></li><li>The promise of artificial intelligence in health: Portrayals of emerging healthcare technologies (Article): <a href="https://doi.org/10.1111/1467-9566.13840">https://doi.org/10.1111/1467-9566.13840</a></li><li>An imperative to innovate? Crisis in the sociotechnical imaginary (Article): <a href="https://doi.org/10.1016/j.tele.2024.102229">https://doi.org/10.1016/j.tele.2024.102229</a></li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep71/transcript">here</a>.   </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://awtsn.com/" img="https://img.transistorcdn.com/dYHPRsU3eVuFuQwdvEqPXVWkMt9ulDIOvsMLWeUse0Q/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8wOTg2/ZDIwY2U1MTU2NjA2/NjJlOTllMmQ4Y2E4/MWIxOS5wbmc.jpg">Dr. Ash Watson</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/dda6cc61/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Regulating Addictive AI with Robert Mahari</title>
      <itunes:episode>70</itunes:episode>
      <podcast:episode>70</podcast:episode>
      <itunes:title>Regulating Addictive AI with Robert Mahari</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">e97ea87d-5774-49f2-8c40-634f14f0f607</guid>
      <link>https://share.transistor.fm/s/47d7569f</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/robert-mahari/">Robert Mahari</a> examines the consequences of addictive intelligence, adaptive responses to regulating AI companions, and the benefits of interdisciplinary collaboration. </p><p>Robert and Kimberly discuss the attributes of addictive products; the allure of AI companions; AI as a prescription for loneliness; not assuming only the lonely are susceptible; regulatory constraints and gaps; individual rights and societal harms; adaptive guardrails and regulation by design; agentic self-awareness; why uncertainty doesn’t negate accountability; AI’s negative impact on the data commons; economic disincentives; interdisciplinary collaboration and future research.  </p><p><a href="https://www.linkedin.com/in/robert-mahari/">Robert Mahari</a> is a JD-PhD researcher at MIT Media Lab and the Harvard Law School where he studies the intersection of technology, law and business. In addition to computational law, Robert has a keen interest in AI regulation and embedding regulatory objectives and guardrails into AI designs. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep70/transcript">here</a>.   </p><p>Additional Resources:</p><ul><li>The Allure of Addictive Intelligence (article): <a href="https://www.technologyreview.com/2024/08/05/1095600/we-need-to-prepare-for-addictive-intelligence/">https://www.technologyreview.com/2024/08/05/1095600/we-need-to-prepare-for-addictive-intelligence/</a></li><li>Robert Mahari (website): <a href="https://robertmahari.com/">https://robertmahari.com/</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/robert-mahari/">Robert Mahari</a> examines the consequences of addictive intelligence, adaptive responses to regulating AI companions, and the benefits of interdisciplinary collaboration. </p><p>Robert and Kimberly discuss the attributes of addictive products; the allure of AI companions; AI as a prescription for loneliness; not assuming only the lonely are susceptible; regulatory constraints and gaps; individual rights and societal harms; adaptive guardrails and regulation by design; agentic self-awareness; why uncertainty doesn’t negate accountability; AI’s negative impact on the data commons; economic disincentives; interdisciplinary collaboration and future research.  </p><p><a href="https://www.linkedin.com/in/robert-mahari/">Robert Mahari</a> is a JD-PhD researcher at MIT Media Lab and the Harvard Law School where he studies the intersection of technology, law and business. In addition to computational law, Robert has a keen interest in AI regulation and embedding regulatory objectives and guardrails into AI designs. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep70/transcript">here</a>.   </p><p>Additional Resources:</p><ul><li>The Allure of Addictive Intelligence (article): <a href="https://www.technologyreview.com/2024/08/05/1095600/we-need-to-prepare-for-addictive-intelligence/">https://www.technologyreview.com/2024/08/05/1095600/we-need-to-prepare-for-addictive-intelligence/</a></li><li>Robert Mahari (website): <a href="https://robertmahari.com/">https://robertmahari.com/</a></li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 16 Apr 2025 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/47d7569f/cefebbbd.mp3" length="52246137" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/5_Rm0ijYTgnsxKar4enzSLlb8E_CIbj16OQucpea6lI/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kYjlk/NTRiZTMwYzU4M2Fk/ZGQ0NTVhNDc3MTM3/ZTdkZS5qcGc.jpg"/>
      <itunes:duration>3264</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/robert-mahari/">Robert Mahari</a> examines the consequences of addictive intelligence, adaptive responses to regulating AI companions, and the benefits of interdisciplinary collaboration. </p><p>Robert and Kimberly discuss the attributes of addictive products; the allure of AI companions; AI as a prescription for loneliness; not assuming only the lonely are susceptible; regulatory constraints and gaps; individual rights and societal harms; adaptive guardrails and regulation by design; agentic self-awareness; why uncertainty doesn’t negate accountability; AI’s negative impact on the data commons; economic disincentives; interdisciplinary collaboration and future research.  </p><p><a href="https://www.linkedin.com/in/robert-mahari/">Robert Mahari</a> is a JD-PhD researcher at MIT Media Lab and the Harvard Law School where he studies the intersection of technology, law and business. In addition to computational law, Robert has a keen interest in AI regulation and embedding regulatory objectives and guardrails into AI designs. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep70/transcript">here</a>.   </p><p>Additional Resources:</p><ul><li>The Allure of Addictive Intelligence (article): <a href="https://www.technologyreview.com/2024/08/05/1095600/we-need-to-prepare-for-addictive-intelligence/">https://www.technologyreview.com/2024/08/05/1095600/we-need-to-prepare-for-addictive-intelligence/</a></li><li>Robert Mahari (website): <a href="https://robertmahari.com/">https://robertmahari.com/</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://robertmahari.com/" img="https://img.transistorcdn.com/oeeV1dL8GX7edpzGclIxrrVV8RkfyUD7UrHt3kFI7ZE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9mMjE5/ZWQ1ZTY5NTI4MmM3/MmMyNmRiMDQzMDY3/OTQ4YS5qcGc.jpg">Robert Mahari</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/47d7569f/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>AI Literacy for All with Phaedra Boinodiris</title>
      <itunes:episode>69</itunes:episode>
      <podcast:episode>69</podcast:episode>
      <itunes:title>AI Literacy for All with Phaedra Boinodiris</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7c10bfb6-6640-48cb-bf66-c9d1d9f790c0</guid>
      <link>https://share.transistor.fm/s/8b5c6c4a</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/phaedra/">Phaedra Boinodiris</a> minds the gap between AI access and literacy by integrating educational siloes, practicing human-centric design, and cultivating critical consumers. </p><p>Phaedra and Kimberly discuss the dangerous confluence of broad AI accessibility with lagging AI literacy and accountability; coding as a bit player in AI design; data as an artifact of human experience; the need for holistic literacy; creating critical consumers; bringing everyone to the AI table; unlearning our siloed approach to education; multidisciplinary training; human-centricity in practice; why good intent isn’t enough; and the hard work required to develop good AI.  </p><p><a href="https://www.linkedin.com/in/phaedra/">Phaedra Boinodiris</a> is IBM’s Global Consulting Leader for Trustworthy AI and co-author of the book <a href="https://aifortherestofus.us/">AI for the Rest of Us</a>. As an RSA Fellow, co-founder of the Future World Alliance, and academic advisor, Phaedra is shaping a future in which AI is accessible and good for all. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep69/transcript">here</a>.    </p><p>Additional Resources: </p><p>Phaedra’s Website -  <a href="https://phaedra.ai/">https://phaedra.ai/</a> </p><p>The Future World Alliance - <a href="https://futureworldalliance.org/">https://futureworldalliance.org/</a> </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/phaedra/">Phaedra Boinodiris</a> minds the gap between AI access and literacy by integrating educational siloes, practicing human-centric design, and cultivating critical consumers. </p><p>Phaedra and Kimberly discuss the dangerous confluence of broad AI accessibility with lagging AI literacy and accountability; coding as a bit player in AI design; data as an artifact of human experience; the need for holistic literacy; creating critical consumers; bringing everyone to the AI table; unlearning our siloed approach to education; multidisciplinary training; human-centricity in practice; why good intent isn’t enough; and the hard work required to develop good AI.  </p><p><a href="https://www.linkedin.com/in/phaedra/">Phaedra Boinodiris</a> is IBM’s Global Consulting Leader for Trustworthy AI and co-author of the book <a href="https://aifortherestofus.us/">AI for the Rest of Us</a>. As an RSA Fellow, co-founder of the Future World Alliance, and academic advisor, Phaedra is shaping a future in which AI is accessible and good for all. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep69/transcript">here</a>.    </p><p>Additional Resources: </p><p>Phaedra’s Website -  <a href="https://phaedra.ai/">https://phaedra.ai/</a> </p><p>The Future World Alliance - <a href="https://futureworldalliance.org/">https://futureworldalliance.org/</a> </p>]]>
      </content:encoded>
      <pubDate>Wed, 02 Apr 2025 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/8b5c6c4a/26c7bc08.mp3" length="104206810" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/ewgNlC_8pJPwT4_A7lVyC_zCmLMNziTUkjcmWDvyLwA/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNmE5/NjJlNzI4MTEyZWQz/MjQ3ZDQ0NzlmMWQ5/NzJkYi5qcGc.jpg"/>
      <itunes:duration>2604</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/phaedra/">Phaedra Boinodiris</a> minds the gap between AI access and literacy by integrating educational siloes, practicing human-centric design, and cultivating critical consumers. </p><p>Phaedra and Kimberly discuss the dangerous confluence of broad AI accessibility with lagging AI literacy and accountability; coding as a bit player in AI design; data as an artifact of human experience; the need for holistic literacy; creating critical consumers; bringing everyone to the AI table; unlearning our siloed approach to education; multidisciplinary training; human-centricity in practice; why good intent isn’t enough; and the hard work required to develop good AI.  </p><p><a href="https://www.linkedin.com/in/phaedra/">Phaedra Boinodiris</a> is IBM’s Global Consulting Leader for Trustworthy AI and co-author of the book <a href="https://aifortherestofus.us/">AI for the Rest of Us</a>. As an RSA Fellow, co-founder of the Future World Alliance, and academic advisor, Phaedra is shaping a future in which AI is accessible and good for all. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep69/transcript">here</a>.    </p><p>Additional Resources: </p><p>Phaedra’s Website -  <a href="https://phaedra.ai/">https://phaedra.ai/</a> </p><p>The Future World Alliance - <a href="https://futureworldalliance.org/">https://futureworldalliance.org/</a> </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/phaedra-boinodiris" img="https://img.transistorcdn.com/G8id3XSCCF9VuAMdcBmVIWEX44yc_J0Uib7zAkLaKV0/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lOWRl/M2YxOTJlMDJmNTdk/NTAwMjI3OTEzMzEy/NWZmZi5qcGVn.jpg">Phaedra Boinodiris</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/8b5c6c4a/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Auditing AI with Ryan Carrier </title>
      <itunes:episode>68</itunes:episode>
      <podcast:episode>68</podcast:episode>
      <itunes:title>Auditing AI with Ryan Carrier </itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">51f0d530-4614-49f2-bdd6-a957109b1044</guid>
      <link>https://share.transistor.fm/s/65552c07</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/ryan-carrier-fhca-b286924/">Ryan Carrier</a> trues up the benefits and costs of responsible AI while debunking misleading narratives and underscoring the positive power of the consumer collective.  </p><p>Ryan and Kimberly discuss the growth of AI governance; predictable resistance; the (mis)belief that safety impedes innovation; the “cost of doing business”; downside and residual risk; unacceptable business practices; regulatory trends and the law; effective disclosures and deceptive design; the value of independence; auditing as a business asset; the AI lifecycle; ethical expertise and choice; ethics boards as advisors not activists; and voting for beneficial AI with our wallets.  </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep68/transcript">here</a>.    </p><p><a href="https://www.linkedin.com/in/ryan-carrier-fhca-b286924/">Ryan Carrier</a> is the Executive Director of <a href="https://forhumanity.center/">ForHumanity</a>, a non-profit organization improving AI outcomes through increased accountability and oversight. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/ryan-carrier-fhca-b286924/">Ryan Carrier</a> trues up the benefits and costs of responsible AI while debunking misleading narratives and underscoring the positive power of the consumer collective.  </p><p>Ryan and Kimberly discuss the growth of AI governance; predictable resistance; the (mis)belief that safety impedes innovation; the “cost of doing business”; downside and residual risk; unacceptable business practices; regulatory trends and the law; effective disclosures and deceptive design; the value of independence; auditing as a business asset; the AI lifecycle; ethical expertise and choice; ethics boards as advisors not activists; and voting for beneficial AI with our wallets.  </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep68/transcript">here</a>.    </p><p><a href="https://www.linkedin.com/in/ryan-carrier-fhca-b286924/">Ryan Carrier</a> is the Executive Director of <a href="https://forhumanity.center/">ForHumanity</a>, a non-profit organization improving AI outcomes through increased accountability and oversight. </p>]]>
      </content:encoded>
      <pubDate>Wed, 19 Mar 2025 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/65552c07/442726af.mp3" length="76602092" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/DuRspUbMLcbzu9YGD5UkfG3MMd-a4sND4MlrNIRxiHY/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81YWY2/Yjk5YzExMmMxMzBj/MjIyMGM4YTg2NDdi/NjIxMC5qcGc.jpg"/>
      <itunes:duration>3151</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/ryan-carrier-fhca-b286924/">Ryan Carrier</a> trues up the benefits and costs of responsible AI while debunking misleading narratives and underscoring the positive power of the consumer collective.  </p><p>Ryan and Kimberly discuss the growth of AI governance; predictable resistance; the (mis)belief that safety impedes innovation; the “cost of doing business”; downside and residual risk; unacceptable business practices; regulatory trends and the law; effective disclosures and deceptive design; the value of independence; auditing as a business asset; the AI lifecycle; ethical expertise and choice; ethics boards as advisors not activists; and voting for beneficial AI with our wallets.  </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep68/transcript">here</a>.    </p><p><a href="https://www.linkedin.com/in/ryan-carrier-fhca-b286924/">Ryan Carrier</a> is the Executive Director of <a href="https://forhumanity.center/">ForHumanity</a>, a non-profit organization improving AI outcomes through increased accountability and oversight. </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/ryan-carrier" img="https://img.transistorcdn.com/UHW9cwTmwg4wvUCABCacRtvm7FEoqnxlbbGOzNpu8Gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZGM4/NjZiMGM0NWY4OTRm/ZWMxMWY4YTQwYmYy/NDQ0OS5qcGc.jpg">Ryan Carrier</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/65552c07/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Ethical by Design with Olivia Gambelin</title>
      <itunes:episode>67</itunes:episode>
      <podcast:episode>67</podcast:episode>
      <itunes:title>Ethical by Design with Olivia Gambelin</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2c385403-3337-4422-b9a2-e27ec1cb75ce</guid>
      <link>https://share.transistor.fm/s/edd85e31</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/oliviagambelin/">Olivia Gambelin</a> values ethical innovation, revels in human creativity and curiosity, and advocates for AI systems that reflect and enable human values and objectives. </p><p>Olivia and Kimberly discuss philogagging; us vs. “them” (i.e. AI systems) comparisons; enabling curiosity and human values; being accountable for the bombs we build - figuratively speaking; AI models as the tip of the iceberg; literacy, values-based judgement and trust; replacing proclamations with strong living values; The Values Canvas; inspired innovations; falling back in love with technology; foundational risk practices; optimism and valuing what matters.  A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep67/transcript">here</a>. </p><p><a href="https://www.linkedin.com/in/oliviagambelin/">Olivia Gambelin</a> is a renowned AI Ethicist and the Founder of <a href="https://www.ethicalintelligence.co/">Ethical Intelligence</a>, the world’s largest network of Responsible AI practitioners. An active researcher, policy advisor and entrepreneur, Olivia helps executives and product teams innovate confidently with AI.  
</p><p>Additional Resources: </p><p><a href="https://www.amazon.com/Responsible-AI-Implement-Approach-Organization/dp/1398615706/">Responsible AI: Implement an Ethical Approach in Your Organization</a> – Book</p><p><a href="https://www.amazon.com/Plato-Platypus-Walk-into-Understanding/dp/0143113879/">Plato &amp; a Platypus Walk Into a Bar: Understanding Philosophy Through Jokes</a> - Book   </p><p><a href="https://www.thevaluescanvas.com/about">The Values Canvas</a> – RAI Design Tool </p><p><a href="https://sheshapes.ai/">Women Shaping the Future of Responsible AI</a> – Organization </p><p><a href="https://pursuitofgoodtech.substack.com/">In Pursuit of Good Tech | Subscribe</a> - Newsletter</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/oliviagambelin/">Olivia Gambelin</a> values ethical innovation, revels in human creativity and curiosity, and advocates for AI systems that reflect and enable human values and objectives. </p><p>Olivia and Kimberly discuss philogagging; us vs. “them” (i.e. AI systems) comparisons; enabling curiosity and human values; being accountable for the bombs we build - figuratively speaking; AI models as the tip of the iceberg; literacy, values-based judgement and trust; replacing proclamations with strong living values; The Values Canvas; inspired innovations; falling back in love with technology; foundational risk practices; optimism and valuing what matters.  A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep67/transcript">here</a>. </p><p><a href="https://www.linkedin.com/in/oliviagambelin/">Olivia Gambelin</a> is a renowned AI Ethicist and the Founder of <a href="https://www.ethicalintelligence.co/">Ethical Intelligence</a>, the world’s largest network of Responsible AI practitioners. An active researcher, policy advisor and entrepreneur, Olivia helps executives and product teams innovate confidently with AI.  
</p><p>Additional Resources: </p><p><a href="https://www.amazon.com/Responsible-AI-Implement-Approach-Organization/dp/1398615706/">Responsible AI: Implement an Ethical Approach in Your Organization</a> – Book</p><p><a href="https://www.amazon.com/Plato-Platypus-Walk-into-Understanding/dp/0143113879/">Plato &amp; a Platypus Walk Into a Bar: Understanding Philosophy Through Jokes</a> - Book   </p><p><a href="https://www.thevaluescanvas.com/about">The Values Canvas</a> – RAI Design Tool </p><p><a href="https://sheshapes.ai/">Women Shaping the Future of Responsible AI</a> – Organization </p><p><a href="https://pursuitofgoodtech.substack.com/">In Pursuit of Good Tech | Subscribe</a> - Newsletter</p>]]>
      </content:encoded>
      <pubDate>Wed, 05 Mar 2025 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/edd85e31/9fee1941.mp3" length="74766878" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/bXzpd41HUtbz7IAD4kF79P9eS4kgTj-kQtRionyHFvA/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS85OWQ3/NGYwYjQ2YzFkMDBh/OGNlMTIxNzFiNzcw/ODU5NS5qcGc.jpg"/>
      <itunes:duration>3086</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/oliviagambelin/">Olivia Gambelin</a> values ethical innovation, revels in human creativity and curiosity, and advocates for AI systems that reflect and enable human values and objectives. </p><p>Olivia and Kimberly discuss philogagging; us vs. “them” (i.e. AI systems) comparisons; enabling curiosity and human values; being accountable for the bombs we build - figuratively speaking; AI models as the tip of the iceberg; literacy, values-based judgement and trust; replacing proclamations with strong living values; The Values Canvas; inspired innovations; falling back in love with technology; foundational risk practices; optimism and valuing what matters.  A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep67/transcript">here</a>. </p><p><a href="https://www.linkedin.com/in/oliviagambelin/">Olivia Gambelin</a> is a renowned AI Ethicist and the Founder of <a href="https://www.ethicalintelligence.co/">Ethical Intelligence</a>, the world’s largest network of Responsible AI practitioners. An active researcher, policy advisor and entrepreneur, Olivia helps executives and product teams innovate confidently with AI.  
</p><p>Additional Resources: </p><p><a href="https://www.amazon.com/Responsible-AI-Implement-Approach-Organization/dp/1398615706/">Responsible AI: Implement an Ethical Approach in Your Organization</a> – Book</p><p><a href="https://www.amazon.com/Plato-Platypus-Walk-into-Understanding/dp/0143113879/">Plato &amp; a Platypus Walk Into a Bar: Understanding Philosophy Through Jokes</a> - Book   </p><p><a href="https://www.thevaluescanvas.com/about">The Values Canvas</a> – RAI Design Tool </p><p><a href="https://sheshapes.ai/">Women Shaping the Future of Responsible AI</a> – Organization </p><p><a href="https://pursuitofgoodtech.substack.com/">In Pursuit of Good Tech | Subscribe</a> - Newsletter</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/olivia-gambelin" img="https://img.transistorcdn.com/NRsPUoJ9c6SgFIhJVQAVlut5ePzUiXfKPOlpMxADFcY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81NTNl/MmYxNDhmZDQwNzE3/NDhkOGM3NmJjZWE2/MjgxNi5qcGVn.jpg">Olivia Gambelin</podcast:person>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/edd85e31/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>The Nature of Learning with Helen Beetham</title>
      <itunes:episode>66</itunes:episode>
      <podcast:episode>66</podcast:episode>
      <itunes:title>The Nature of Learning with Helen Beetham</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">6894bb87-020f-41ff-8ab7-a706e7688967</guid>
      <link>https://share.transistor.fm/s/7908e8f8</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/helen-beetham/">Helen Beetham</a> isn’t waiting for an AI upgrade as she considers what higher education is for, why learning is ostensibly ripe for AI, and how to diversify our course.     </p><p>Helen and Kimberly discuss the purpose of higher education; the current two tribe moment; systemic effects of AI; rethinking learning; GenAI affordances; the expertise paradox; productive developmental challenges; converging on an educational norm; teachers as data laborers; the data-driven personalization myth; US edtech and instrumental pedagogy; the fantasy of AI’s teacherly behavior; students as actors in their learning; critical digital literacy; a story of future education; AI ready graduates; pre-automation and AI adoption; diversity of expression and knowledge; two-tiered educational systems; and the rich heritage of universities.</p><p><a href="https://www.linkedin.com/in/helen-beetham/">Helen Beetham</a> is an educator, researcher and consultant who advises universities and international bodies worldwide on their digital education strategies. Helen is also a prolific author whose publications include “Rethinking Pedagogy for a Digital Age”. Her Substack, <a href="https://helenbeetham.substack.com/">Imperfect Offerings</a>, is recommended by the <em>Guardian/Observer</em> for its wise and thoughtful critique of generative AI.   </p><p>Additional Resources:</p><p>Imperfect Offerings - <a href="https://helenbeetham.substack.com/">https://helenbeetham.substack.com/</a></p><p>Audrey Watters - <a href="https://audreywatters.com/">https://audreywatters.com/</a> </p><p>Kathryn (Katie) Conrad - <a href="https://www.linkedin.com/in/kathryn-katie-conrad-1b0749b/">https://www.linkedin.com/in/kathryn-katie-conrad-1b0749b/</a> </p><p>Anna Mills - <a href="https://www.linkedin.com/in/anna-mills-oer/">https://www.linkedin.com/in/anna-mills-oer/</a> </p><p>Dr. 
Maya Indira Ganesh - <a href="https://www.linkedin.com/in/dr-des-maya-indira-ganesh/">https://www.linkedin.com/in/dr-des-maya-indira-ganesh/</a> </p><p>Tech(nically) Politics - <a href="https://www.technicallypolitics.org/">https://www.technicallypolitics.org/</a> </p><p>LOG OFF - <a href="http://www.logoffmovement.org/">logoffmovement.org/</a> </p><p>Rest of World -  <a href="http://www.restofworld.org/">www.restofworld.org/</a></p><p>Derechos Digitales – <a href="http://www.derechosdigitales.org">www.derechosdigitales.org</a> </p><p> </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep66/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/helen-beetham/">Helen Beetham</a> isn’t waiting for an AI upgrade as she considers what higher education is for, why learning is ostensibly ripe for AI, and how to diversify our course.     </p><p>Helen and Kimberly discuss the purpose of higher education; the current two tribe moment; systemic effects of AI; rethinking learning; GenAI affordances; the expertise paradox; productive developmental challenges; converging on an educational norm; teachers as data laborers; the data-driven personalization myth; US edtech and instrumental pedagogy; the fantasy of AI’s teacherly behavior; students as actors in their learning; critical digital literacy; a story of future education; AI ready graduates; pre-automation and AI adoption; diversity of expression and knowledge; two-tiered educational systems; and the rich heritage of universities.</p><p><a href="https://www.linkedin.com/in/helen-beetham/">Helen Beetham</a> is an educator, researcher and consultant who advises universities and international bodies worldwide on their digital education strategies. Helen is also a prolific author whose publications include “Rethinking Pedagogy for a Digital Age”. Her Substack, <a href="https://helenbeetham.substack.com/">Imperfect Offerings</a>, is recommended by the <em>Guardian/Observer</em> for its wise and thoughtful critique of generative AI.   </p><p>Additional Resources:</p><p>Imperfect Offerings - <a href="https://helenbeetham.substack.com/">https://helenbeetham.substack.com/</a></p><p>Audrey Watters - <a href="https://audreywatters.com/">https://audreywatters.com/</a> </p><p>Kathryn (Katie) Conrad - <a href="https://www.linkedin.com/in/kathryn-katie-conrad-1b0749b/">https://www.linkedin.com/in/kathryn-katie-conrad-1b0749b/</a> </p><p>Anna Mills - <a href="https://www.linkedin.com/in/anna-mills-oer/">https://www.linkedin.com/in/anna-mills-oer/</a> </p><p>Dr. 
Maya Indira Ganesh - <a href="https://www.linkedin.com/in/dr-des-maya-indira-ganesh/">https://www.linkedin.com/in/dr-des-maya-indira-ganesh/</a> </p><p>Tech(nically) Politics - <a href="https://www.technicallypolitics.org/">https://www.technicallypolitics.org/</a> </p><p>LOG OFF - <a href="http://www.logoffmovement.org/">logoffmovement.org/</a> </p><p>Rest of World -  <a href="http://www.restofworld.org/">www.restofworld.org/</a></p><p>Derechos Digitales – <a href="http://www.derechosdigitales.org">www.derechosdigitales.org</a> </p><p> </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep66/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 19 Feb 2025 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/7908e8f8/5342670d.mp3" length="66521205" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/lmE8Dc7zJAgyhohBIz5cBvvmtx_cDNcxn16blHyoZHA/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMmFj/OWZhZjcyOWYxNmM1/ZmU5NmVjMzQwOWVl/NGJkMS5qcGc.jpg"/>
      <itunes:duration>2757</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/helen-beetham/">Helen Beetham</a> isn’t waiting for an AI upgrade as she considers what higher education is for, why learning is ostensibly ripe for AI, and how to diversify our course.     </p><p>Helen and Kimberly discuss the purpose of higher education; the current two tribe moment; systemic effects of AI; rethinking learning; GenAI affordances; the expertise paradox; productive developmental challenges; converging on an educational norm; teachers as data laborers; the data-driven personalization myth; US edtech and instrumental pedagogy; the fantasy of AI’s teacherly behavior; students as actors in their learning; critical digital literacy; a story of future education; AI ready graduates; pre-automation and AI adoption; diversity of expression and knowledge; two-tiered educational systems; and the rich heritage of universities.</p><p><a href="https://www.linkedin.com/in/helen-beetham/">Helen Beetham</a> is an educator, researcher and consultant who advises universities and international bodies worldwide on their digital education strategies. Helen is also a prolific author whose publications include “Rethinking Pedagogy for a Digital Age”. Her Substack, <a href="https://helenbeetham.substack.com/">Imperfect Offerings</a>, is recommended by the <em>Guardian/Observer</em> for its wise and thoughtful critique of generative AI.   </p><p>Additional Resources:</p><p>Imperfect Offerings - <a href="https://helenbeetham.substack.com/">https://helenbeetham.substack.com/</a></p><p>Audrey Watters - <a href="https://audreywatters.com/">https://audreywatters.com/</a> </p><p>Kathryn (Katie) Conrad - <a href="https://www.linkedin.com/in/kathryn-katie-conrad-1b0749b/">https://www.linkedin.com/in/kathryn-katie-conrad-1b0749b/</a> </p><p>Anna Mills - <a href="https://www.linkedin.com/in/anna-mills-oer/">https://www.linkedin.com/in/anna-mills-oer/</a> </p><p>Dr. 
Maya Indira Ganesh - <a href="https://www.linkedin.com/in/dr-des-maya-indira-ganesh/">https://www.linkedin.com/in/dr-des-maya-indira-ganesh/</a> </p><p>Tech(nically) Politics - <a href="https://www.technicallypolitics.org/">https://www.technicallypolitics.org/</a> </p><p>LOG OFF - <a href="http://www.logoffmovement.org/">logoffmovement.org/</a> </p><p>Rest of World -  <a href="http://www.restofworld.org/">www.restofworld.org/</a></p><p>Derechos Digitales – <a href="http://www.derechosdigitales.org">www.derechosdigitales.org</a> </p><p> </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep66/transcript">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/helen-beetham" img="https://img.transistorcdn.com/chblF8YVMeHNXbka6kJhvADcwiNtbEyv2yyWd4egEXM/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS83MjEy/YzMxZjIwYzZjMzUx/YzE3NTU0YjAwMDAz/Y2FlZi5qcGVn.jpg">Helen Beetham</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/7908e8f8/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Ethics for Engineers with Steven Kelts</title>
      <itunes:episode>65</itunes:episode>
      <podcast:episode>65</podcast:episode>
      <itunes:title>Ethics for Engineers with Steven Kelts</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b04cc041-ae0b-4d4a-a3d3-846eae432ddd</guid>
      <link>https://share.transistor.fm/s/010e34d9</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/steven-kelts/">Steven Kelts</a> engages engineers in ethical choice, enlivens training with role-playing, exposes organizational hazards and separates moral qualms from a duty to care. </p><p>Steven and Kimberly discuss Ashley Casovan’s inspiring query; the affirmation allusion; students as stochastic parrots; when ethical sophistication backfires; limits of ethics review boards; engineers and developers as core to ethical design; assuming people are good; 4 steps of ethical decision making; inadvertent hotdog theft; organizational disincentives; simulation and role-playing in ethical training; avoiding cognitive overload; reorienting ethical responsibility; guns, ethical qualms and care; and empowering engineers to make ethical choices.</p><p><a href="https://www.linkedin.com/in/steven-kelts/">Steven Kelts</a> is a lecturer in Princeton’s University Center for Human Values (UCHV) and affiliated faculty in the Center for Information Technology Policy (CITP). 
Steve is also an ethics advisor to the Responsible AI Institute and Director of All Tech is Human’s <a href="https://alltechishuman.org/responsible-tech-university-network">Responsible University Network</a>.</p><p> </p><p>Additional Resources:</p><ul><li>Princeton Agile Ethics Program: <a href="https://agile-ethics.princeton.edu/">https://agile-ethics.princeton.edu</a></li><li>CITP Talk 11/19/24: <a href="https://youtu.be/ulFKTHzfv9s?si=aGVuo0SbKi5ZHGTK">Agile Ethics Theory and Evidence</a></li><li>Oktar, Lombrozo et al: <a href="https://www.sciencedirect.com/science/article/abs/pii/S0010027723000689?via%3Dihub">Changing Moral Judgements</a></li><li>4-Stage Theory of Ethical Decision Making: <a href="https://www.researchgate.net/publication/313099978_The_four_components_of_acting_morally_Moral_behavior_and_moral_development_An_introduction">An Introduction</a></li><li>Enabling Engineers through “<a href="https://arxiv.org/abs/2306.06901">Moral Imagination</a>” (Google)</li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep65/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/steven-kelts/">Steven Kelts</a> engages engineers in ethical choice, enlivens training with role-playing, exposes organizational hazards and separates moral qualms from a duty to care. </p><p>Steven and Kimberly discuss Ashley Casovan’s inspiring query; the affirmation allusion; students as stochastic parrots; when ethical sophistication backfires; limits of ethics review boards; engineers and developers as core to ethical design; assuming people are good; 4 steps of ethical decision making; inadvertent hotdog theft; organizational disincentives; simulation and role-playing in ethical training; avoiding cognitive overload; reorienting ethical responsibility; guns, ethical qualms and care; and empowering engineers to make ethical choices.</p><p><a href="https://www.linkedin.com/in/steven-kelts/">Steven Kelts</a> is a lecturer in Princeton’s University Center for Human Values (UCHV) and affiliated faculty in the Center for Information Technology Policy (CITP). 
Steve is also an ethics advisor to the Responsible AI Institute and Director of All Tech is Human’s <a href="https://alltechishuman.org/responsible-tech-university-network">Responsible University Network</a>.</p><p> </p><p>Additional Resources:</p><ul><li>Princeton Agile Ethics Program: <a href="https://agile-ethics.princeton.edu/">https://agile-ethics.princeton.edu</a></li><li>CITP Talk 11/19/24: <a href="https://youtu.be/ulFKTHzfv9s?si=aGVuo0SbKi5ZHGTK">Agile Ethics Theory and Evidence</a></li><li>Oktar, Lombrozo et al: <a href="https://www.sciencedirect.com/science/article/abs/pii/S0010027723000689?via%3Dihub">Changing Moral Judgements</a></li><li>4-Stage Theory of Ethical Decision Making: <a href="https://www.researchgate.net/publication/313099978_The_four_components_of_acting_morally_Moral_behavior_and_moral_development_An_introduction">An Introduction</a></li><li>Enabling Engineers through “<a href="https://arxiv.org/abs/2306.06901">Moral Imagination</a>” (Google)</li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep65/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 05 Feb 2025 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/010e34d9/b16e419c.mp3" length="68091592" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/TZ4WITY9pnwG-CyUpWPa6S_pGcLM8jwPc1ZrV0h3TAg/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS80ZDk4/YzJjZDUyZTJiM2I4/NTQzYmJhNmY3ODBm/MjhjNy5qcGc.jpg"/>
      <itunes:duration>2805</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/steven-kelts/">Steven Kelts</a> engages engineers in ethical choice, enlivens training with role-playing, exposes organizational hazards and separates moral qualms from a duty to care. </p><p>Steven and Kimberly discuss Ashley Casovan’s inspiring query; the affirmation allusion; students as stochastic parrots; when ethical sophistication backfires; limits of ethics review boards; engineers and developers as core to ethical design; assuming people are good; 4 steps of ethical decision making; inadvertent hotdog theft; organizational disincentives; simulation and role-playing in ethical training; avoiding cognitive overload; reorienting ethical responsibility; guns, ethical qualms and care; and empowering engineers to make ethical choices.</p><p><a href="https://www.linkedin.com/in/steven-kelts/">Steven Kelts</a> is a lecturer in Princeton’s University Center for Human Values (UCHV) and affiliated faculty in the Center for Information Technology Policy (CITP). 
Steve is also an ethics advisor to the Responsible AI Institute and Director of All Tech is Human’s <a href="https://alltechishuman.org/responsible-tech-university-network">Responsible University Network</a>.</p><p> </p><p>Additional Resources:</p><ul><li>Princeton Agile Ethics Program: <a href="https://agile-ethics.princeton.edu/">https://agile-ethics.princeton.edu</a></li><li>CITP Talk 11/19/24: <a href="https://youtu.be/ulFKTHzfv9s?si=aGVuo0SbKi5ZHGTK">Agile Ethics Theory and Evidence</a></li><li>Oktar, Lombrozo et al: <a href="https://www.sciencedirect.com/science/article/abs/pii/S0010027723000689?via%3Dihub">Changing Moral Judgements</a></li><li>4-Stage Theory of Ethical Decision Making: <a href="https://www.researchgate.net/publication/313099978_The_four_components_of_acting_morally_Moral_behavior_and_moral_development_An_introduction">An Introduction</a></li><li>Enabling Engineers through “<a href="https://arxiv.org/abs/2306.06901">Moral Imagination</a>” (Google)</li></ul><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep65/transcript">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/steven-kelts" img="https://img.transistorcdn.com/PeeaP2cgqvu-r-SJNwHV9z2vGDDs7TQ2pXBqB38mnQs/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMTdh/NTVlMWQ3MWI5Y2U2/YmIwMjI3MGZkZGVm/MjU3OC5qcGc.jpg">Steven Kelts</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/010e34d9/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Righting AI with Susie Alegre</title>
      <itunes:episode>64</itunes:episode>
      <podcast:episode>64</podcast:episode>
      <itunes:title>Righting AI with Susie Alegre</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a9452ca8-dc5c-4368-afc9-eabff0d2a963</guid>
      <link>https://share.transistor.fm/s/454e8fd3</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/susie-alegre-2141b7337/">Susie Alegre</a> makes the case for prioritizing human rights and connection, taking AI systems to account, minding the right gaps, and resisting unwitting AI dependency.  </p><p>Susie and Kimberly discuss the Universal Declaration of Human Rights (UDHR); legal protections and access to justice; human rights laws; how court cases impact legislative will; the wicked problem of companion AI; abdicating accountability for AI systems; Stepford Wives and gynoid robots; human connection and agency; minding the wrong gaps with AI systems; AI dogs vs. AI pooper scoopers; the reality of care and legal work; writing to think; cultural heritage and creativity; pausing for thought; unwittingly becoming dependent on AI; and prioritizing people over technology. </p><p> </p><p><a href="https://www.linkedin.com/in/susie-alegre-2141b7337/">Susie Alegre</a> is an acclaimed international human rights lawyer and the author of <a href="https://www.amazon.com/Freedom-Think-Struggle-Liberate-Minds/dp/1838951520">Freedom to Think: The Long Struggle to Liberate Our Minds</a> and <a href="https://www.amazon.com/Human-Rights-Robot-Wrongs-Being/dp/B0D96HKP18">Human Rights, Robot Wrongs: Being Human in the Age of AI</a>. She is also a Senior Fellow at the Centre for International Governance and Innovation (<a href="https://www.cigionline.org/people/susie-alegre/">CIGI</a>) and Founder of the <a href="http://www.islandrights.org/">Island Rights Initiative</a>. Learn more at her website: <a href="https://susiealegre.com/">Susie Alegre</a>  </p><p> </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep64/transcript">here</a>. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/susie-alegre-2141b7337/">Susie Alegre</a> makes the case for prioritizing human rights and connection, taking AI systems to account, minding the right gaps, and resisting unwitting AI dependency.  </p><p>Susie and Kimberly discuss the Universal Declaration of Human Rights (UDHR); legal protections and access to justice; human rights laws; how court cases impact legislative will; the wicked problem of companion AI; abdicating accountability for AI systems; Stepford Wives and gynoid robots; human connection and agency; minding the wrong gaps with AI systems; AI dogs vs. AI pooper scoopers; the reality of care and legal work; writing to think; cultural heritage and creativity; pausing for thought; unwittingly becoming dependent on AI; and prioritizing people over technology. </p><p> </p><p><a href="https://www.linkedin.com/in/susie-alegre-2141b7337/">Susie Alegre</a> is an acclaimed international human rights lawyer and the author of <a href="https://www.amazon.com/Freedom-Think-Struggle-Liberate-Minds/dp/1838951520">Freedom to Think: The Long Struggle to Liberate Our Minds</a> and <a href="https://www.amazon.com/Human-Rights-Robot-Wrongs-Being/dp/B0D96HKP18">Human Rights, Robot Wrongs: Being Human in the Age of AI</a>. She is also a Senior Fellow at the Centre for International Governance and Innovation (<a href="https://www.cigionline.org/people/susie-alegre/">CIGI</a>) and Founder of the <a href="http://www.islandrights.org/">Island Rights Initiative</a>. Learn more at her website: <a href="https://susiealegre.com/">Susie Alegre</a>  </p><p> </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep64/transcript">here</a>. </p>]]>
      </content:encoded>
      <pubDate>Wed, 22 Jan 2025 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/454e8fd3/832d693b.mp3" length="67380831" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/jpee-t77yiqFASjq0layHDU5D6PQOKLge3ruLYv03Rc/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82MjFi/NzBhMmI2ZjBmOTFm/ODM0OTM2MzE1N2Rk/YjkwMS5qcGc.jpg"/>
      <itunes:duration>2772</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/susie-alegre-2141b7337/">Susie Alegre</a> makes the case for prioritizing human rights and connection, taking AI systems to account, minding the right gaps, and resisting unwitting AI dependency.  </p><p>Susie and Kimberly discuss the Universal Declaration of Human Rights (UDHR); legal protections and access to justice; human rights laws; how court cases impact legislative will; the wicked problem of companion AI; abdicating accountability for AI systems; Stepford Wives and gynoid robots; human connection and agency; minding the wrong gaps with AI systems; AI dogs vs. AI pooper scoopers; the reality of care and legal work; writing to think; cultural heritage and creativity; pausing for thought; unwittingly becoming dependent on AI; and prioritizing people over technology. </p><p> </p><p><a href="https://www.linkedin.com/in/susie-alegre-2141b7337/">Susie Alegre</a> is an acclaimed international human rights lawyer and the author of <a href="https://www.amazon.com/Freedom-Think-Struggle-Liberate-Minds/dp/1838951520">Freedom to Think: The Long Struggle to Liberate Our Minds</a> and <a href="https://www.amazon.com/Human-Rights-Robot-Wrongs-Being/dp/B0D96HKP18">Human Rights, Robot Wrongs: Being Human in the Age of AI</a>. She is also a Senior Fellow at the Centre for International Governance and Innovation (<a href="https://www.cigionline.org/people/susie-alegre/">CIGI</a>) and Founder of the <a href="http://www.islandrights.org/">Island Rights Initiative</a>. Learn more at her website: <a href="https://susiealegre.com/">Susie Alegre</a>  </p><p> </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep64/transcript">here</a>. </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/susie-alegre" img="https://img.transistorcdn.com/Pfb-Uk4K9sdkkOdVHm3ErtqRODto_TcWUglkwAJYYSQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS80MDFh/OTIwNGRkZTg5YWM5/ZjM3MmRhN2JiNDYw/YmE4OS5qcGc.jpg">Susie Alegre</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/454e8fd3/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>AI Myths and Mythos with Eryk Salvaggio</title>
      <itunes:episode>63</itunes:episode>
      <podcast:episode>63</podcast:episode>
      <itunes:title>AI Myths and Mythos with Eryk Salvaggio</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">94efe1c3-f616-4359-bdc6-2fa6077be39b</guid>
      <link>https://share.transistor.fm/s/ad54320e</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/eryk-salvaggio/">Eryk Salvaggio</a> articulates myths animating AI design, illustrates the nature of creativity and generated media, and artfully reframes the discourse on GenAI and art.   </p><p>Eryk joined Kimberly to discuss myths and metaphors in GenAI design; the illusion of control; if AI saves time and what for; not relying on futuristic AI to solve problems; the fallacy of scale; the dehumanizing narrative of human equivalence; positive biases toward AI; why asking ‘is the machine creative’ misses the mark; creative expression and meaning making; what AI generated art represents; distinguishing archives from datasets; curation as an act of care; representation and context in generated media; the Orwellian view of mass surveillance as anonymity; complicity and critique of GenAI tools; abstraction and noise; and what we aren’t doing when we use GenAI. </p><p> </p><p><a href="https://www.linkedin.com/in/eryk-salvaggio/">Eryk Salvaggio</a> is a new media artist, Visiting Professor in Humanities, Computing and Design at the Rochester Institute of Technology, and an Emerging Technology Research Advisor at the <a href="https://www.siegelendowment.org/">Siegel Family Endowment</a>. Eryk is also a researcher on the <a href="https://aipedagogy.org/">AI Pedagogies Project</a> at Harvard University’s <a href="https://cyber.harvard.edu/research/metalab">metaLab</a> and lecturer on Responsible AI at Elisava Barcelona School of Design and Engineering.  
</p><p> </p><p>Additional Resources:  </p><p>Cybernetic Forests:  <a href="http://mail.cyberneticforests.com">mail.cyberneticforests.com</a> </p><p>The Age of Noise: <a href="https://mail.cyberneticforests.com/the-age-of-noise/">https://mail.cyberneticforests.com/the-age-of-noise/</a> </p><p>Challenging the Myths of Generative AI: <a href="https://www.techpolicy.press/challenging-the-myths-of-generative-ai/">https://www.techpolicy.press/challenging-the-myths-of-generative-ai/</a> </p><p> </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep63/transcript">here</a>. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/eryk-salvaggio/">Eryk Salvaggio</a> articulates myths animating AI design, illustrates the nature of creativity and generated media, and artfully reframes the discourse on GenAI and art.   </p><p>Eryk joined Kimberly to discuss myths and metaphors in GenAI design; the illusion of control; if AI saves time and what for; not relying on futuristic AI to solve problems; the fallacy of scale; the dehumanizing narrative of human equivalence; positive biases toward AI; why asking ‘is the machine creative’ misses the mark; creative expression and meaning making; what AI generated art represents; distinguishing archives from datasets; curation as an act of care; representation and context in generated media; the Orwellian view of mass surveillance as anonymity; complicity and critique of GenAI tools; abstraction and noise; and what we aren’t doing when we use GenAI. </p><p> </p><p><a href="https://www.linkedin.com/in/eryk-salvaggio/">Eryk Salvaggio</a> is a new media artist, Visiting Professor in Humanities, Computing and Design at the Rochester Institute of Technology, and an Emerging Technology Research Advisor at the <a href="https://www.siegelendowment.org/">Siegel Family Endowment</a>. Eryk is also a researcher on the <a href="https://aipedagogy.org/">AI Pedagogies Project</a> at Harvard University’s <a href="https://cyber.harvard.edu/research/metalab">metaLab</a> and lecturer on Responsible AI at Elisava Barcelona School of Design and Engineering.  
</p><p> </p><p>Additional Resources:  </p><p>Cybernetic Forests:  <a href="https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fprotect.checkpoint.com%2Fv2%2Fr01%2F___http%3A%2F%2Fmail.cyberneticforests.com___.YzJ1OnNhc2luc3RpdHV0ZTpjOm86NDFmYmFlMWY4ODFmMjNjZTJjNzljMzJmZTg4NmU2ZTA6Nzo4NTIxOjUyZGNlYmRkYTEzNGU1NjE4MDZlNjJlNDMzZGY4YTQ1OTM3OWUzNmY3MTcyMjZjMTlmNTk3Y2ZkZTg0NTFkOGQ6aDpUOk4&amp;data=05%7C02%7CKimberly.Nevala%40sas.com%7Cae1b6556de48483e36a708dd2b4245de%7Cb1c14d5c362545b3a4309552373a0c2f%7C0%7C0%7C638714286023790340%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;sdata=iiyNikllM0s30xMLwCP4Z0reJpBXvnSPw%2Br%2F07arM8g%3D&amp;reserved=0">mail.cyberneticforests.com</a> </p><p>The Age of Noise: <a href="https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fprotect.checkpoint.com%2Fv2%2Fr01%2F___https%3A%2F%2Fmail.cyberneticforests.com%2Fthe-age-of-noise%2F___.YzJ1OnNhc2luc3RpdHV0ZTpjOm86NDFmYmFlMWY4ODFmMjNjZTJjNzljMzJmZTg4NmU2ZTA6Nzo2ZDhjOjU3N2NhMzVkNmZkYzU0M2QwNjBiNTExZDZlMzBmMTVhYzAzOTJjOWQ3ZTdmODU1MmY5YTQ4Mjg5ZjYwNTNkMzg6aDpUOk4&amp;data=05%7C02%7CKimberly.Nevala%40sas.com%7Cae1b6556de48483e36a708dd2b4245de%7Cb1c14d5c362545b3a4309552373a0c2f%7C0%7C0%7C638714286023811447%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;sdata=Hete%2BLZiaR85VefbaP41Q2MdeyibFNguKw0IZCEfSoU%3D&amp;reserved=0">https://mail.cyberneticforests.com/the-age-of-noise/</a> </p><p>Challenging the Myths of Generative AI: <a 
href="https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fprotect.checkpoint.com%2Fv2%2Fr01%2F___https%3A%2F%2Fwww.techpolicy.press%2Fchallenging-the-myths-of-generative-ai%2F___.YzJ1OnNhc2luc3RpdHV0ZTpjOm86NDFmYmFlMWY4ODFmMjNjZTJjNzljMzJmZTg4NmU2ZTA6Nzo5N2U5OmEwM2MzNjMwM2M4YzY0YTEwNWU3MjlhNWJkNzg0NWNlODc5ZDlkNDQxMDdmYzYwNmEzZWU1ZTZhYTY2NWY3YzI6aDpUOk4&amp;data=05%7C02%7CKimberly.Nevala%40sas.com%7Cae1b6556de48483e36a708dd2b4245de%7Cb1c14d5c362545b3a4309552373a0c2f%7C0%7C0%7C638714286023821313%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;sdata=%2BGD26ugwXM8P9IIPD33EVLDIV01Vm86peZQz9kbWKwQ%3D&amp;reserved=0">https://www.techpolicy.press/challenging-the-myths-of-generative-ai/</a> </p><p> </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep63/transcript">here</a>. </p>]]>
      </content:encoded>
      <pubDate>Wed, 08 Jan 2025 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/ad54320e/b295ecb3.mp3" length="85305033" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/z1mH25jFAgSjQm52CI1PYVCojshOLm4zGxJyX-I5GnQ/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iYzE5/ZDRhZjJiMzQyZjU3/MGVjNDYyOTdlMTgx/ODJhMi5qcGc.jpg"/>
      <itunes:duration>3510</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/eryk-salvaggio/">Eryk Salvaggio</a> articulates myths animating AI design, illustrates the nature of creativity and generated media, and artfully reframes the discourse on GenAI and art.   </p><p>Eryk joined Kimberly to discuss myths and metaphors in GenAI design; the illusion of control; if AI saves time and what for; not relying on futuristic AI to solve problems; the fallacy of scale; the dehumanizing narrative of human equivalence; positive biases toward AI; why asking ‘is the machine creative’ misses the mark; creative expression and meaning making; what AI generated art represents; distinguishing archives from datasets; curation as an act of care; representation and context in generated media; the Orwellian view of mass surveillance as anonymity; complicity and critique of GenAI tools; abstraction and noise; and what we aren’t doing when we use GenAI. </p><p> </p><p><a href="https://www.linkedin.com/in/eryk-salvaggio/">Eryk Salvaggio</a> is a new media artist, Visiting Professor in Humanities, Computing and Design at the Rochester Institute of Technology, and an Emerging Technology Research Advisor at the <a href="https://www.siegelendowment.org/">Siegel Family Endowment</a>. Eryk is also a researcher on the <a href="https://aipedagogy.org/">AI Pedagogies Project</a> at Harvard University’s <a href="https://cyber.harvard.edu/research/metalab">metaLab</a> and lecturer on Responsible AI at Elisava Barcelona School of Design and Engineering.  
</p><p> </p><p>Additional Resources:  </p><p>Cybernetic Forests:  <a href="https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fprotect.checkpoint.com%2Fv2%2Fr01%2F___http%3A%2F%2Fmail.cyberneticforests.com___.YzJ1OnNhc2luc3RpdHV0ZTpjOm86NDFmYmFlMWY4ODFmMjNjZTJjNzljMzJmZTg4NmU2ZTA6Nzo4NTIxOjUyZGNlYmRkYTEzNGU1NjE4MDZlNjJlNDMzZGY4YTQ1OTM3OWUzNmY3MTcyMjZjMTlmNTk3Y2ZkZTg0NTFkOGQ6aDpUOk4&amp;data=05%7C02%7CKimberly.Nevala%40sas.com%7Cae1b6556de48483e36a708dd2b4245de%7Cb1c14d5c362545b3a4309552373a0c2f%7C0%7C0%7C638714286023790340%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;sdata=iiyNikllM0s30xMLwCP4Z0reJpBXvnSPw%2Br%2F07arM8g%3D&amp;reserved=0">mail.cyberneticforests.com</a> </p><p>The Age of Noise: <a href="https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fprotect.checkpoint.com%2Fv2%2Fr01%2F___https%3A%2F%2Fmail.cyberneticforests.com%2Fthe-age-of-noise%2F___.YzJ1OnNhc2luc3RpdHV0ZTpjOm86NDFmYmFlMWY4ODFmMjNjZTJjNzljMzJmZTg4NmU2ZTA6Nzo2ZDhjOjU3N2NhMzVkNmZkYzU0M2QwNjBiNTExZDZlMzBmMTVhYzAzOTJjOWQ3ZTdmODU1MmY5YTQ4Mjg5ZjYwNTNkMzg6aDpUOk4&amp;data=05%7C02%7CKimberly.Nevala%40sas.com%7Cae1b6556de48483e36a708dd2b4245de%7Cb1c14d5c362545b3a4309552373a0c2f%7C0%7C0%7C638714286023811447%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;sdata=Hete%2BLZiaR85VefbaP41Q2MdeyibFNguKw0IZCEfSoU%3D&amp;reserved=0">https://mail.cyberneticforests.com/the-age-of-noise/</a> </p><p>Challenging the Myths of Generative AI: <a 
href="https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fprotect.checkpoint.com%2Fv2%2Fr01%2F___https%3A%2F%2Fwww.techpolicy.press%2Fchallenging-the-myths-of-generative-ai%2F___.YzJ1OnNhc2luc3RpdHV0ZTpjOm86NDFmYmFlMWY4ODFmMjNjZTJjNzljMzJmZTg4NmU2ZTA6Nzo5N2U5OmEwM2MzNjMwM2M4YzY0YTEwNWU3MjlhNWJkNzg0NWNlODc5ZDlkNDQxMDdmYzYwNmEzZWU1ZTZhYTY2NWY3YzI6aDpUOk4&amp;data=05%7C02%7CKimberly.Nevala%40sas.com%7Cae1b6556de48483e36a708dd2b4245de%7Cb1c14d5c362545b3a4309552373a0c2f%7C0%7C0%7C638714286023821313%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;sdata=%2BGD26ugwXM8P9IIPD33EVLDIV01Vm86peZQz9kbWKwQ%3D&amp;reserved=0">https://www.techpolicy.press/challenging-the-myths-of-generative-ai/</a> </p><p> </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep63/transcript">here</a>. </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/eryk-salvaggio" img="https://img.transistorcdn.com/kYGL9udynBgsHbCxwUp-zrQEP_P_BHm-TyjAqlB_R58/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iODRj/Y2I2ZjlkN2FiODky/MDNjZTFkMTU4ZWFm/MDlkMi5qcGc.jpg">Eryk Salvaggio</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/ad54320e/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Challenging AI with Geertrui Mieke de Ketelaere</title>
      <itunes:episode>62</itunes:episode>
      <podcast:episode>62</podcast:episode>
      <itunes:title>Challenging AI with Geertrui Mieke de Ketelaere</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">9bf0595d-f32c-406e-b1ae-5fb198b67a9d</guid>
      <link>https://share.transistor.fm/s/f377eccf</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/miekedeketelaere/">Geertrui Mieke de Ketelaere</a> reflects on the uncertain trajectory of AI, whether AI is socially or environmentally sustainable, and using AI to become good ancestors.   </p><p>Mieke joined Kimberly to discuss the current trajectory of AI; uncertainties created by current AI applications; the potent intersection of humanlike AI and heightened social/personal anxiety; Russian nesting dolls (matryoshka) as an analogy for AI systems; challenges with open source AI; the current state of public literacy and regulation; the <a href="https://www.saicc.info/">Safe AI Companion Collective</a>; social and environmental sustainability; expanding our POV beyond human intelligence; and striving to become good ancestors in our use of AI and beyond.  </p><p> </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep62/transcript">here</a>. </p><p><a href="https://www.linkedin.com/in/miekedeketelaere/">Geertrui Mieke de Ketelaere</a> is an engineer, strategic advisor and Adjunct Professor of AI at Vlerick Business School focused on sustainable, ethical, and trustworthy AI. A prolific author, speaker and researcher, Mieke is passionate about building bridges between business, research and government in the domain of AI. Learn more about Mieke’s work here: <a href="http://www.gmdeketelaere.com/">www.gmdeketelaere.com</a> </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/miekedeketelaere/">Geertrui Mieke de Ketelaere</a> reflects on the uncertain trajectory of AI, whether AI is socially or environmentally sustainable, and using AI to become good ancestors.   </p><p>Mieke joined Kimberly to discuss the current trajectory of AI; uncertainties created by current AI applications; the potent intersection of humanlike AI and heightened social/personal anxiety; Russian nesting dolls (matryoshka) as an analogy for AI systems; challenges with open source AI; the current state of public literacy and regulation; the <a href="https://www.saicc.info/">Safe AI Companion Collective</a>; social and environmental sustainability; expanding our POV beyond human intelligence; and striving to become good ancestors in our use of AI and beyond.  </p><p> </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep62/transcript">here</a>. </p><p><a href="https://www.linkedin.com/in/miekedeketelaere/">Geertrui Mieke de Ketelaere</a> is an engineer, strategic advisor and Adjunct Professor of AI at Vlerick Business School focused on sustainable, ethical, and trustworthy AI. A prolific author, speaker and researcher, Mieke is passionate about building bridges between business, research and government in the domain of AI. Learn more about Mieke’s work here: <a href="http://www.gmdeketelaere.com/">www.gmdeketelaere.com</a> </p>]]>
      </content:encoded>
      <pubDate>Wed, 18 Dec 2024 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/f377eccf/0cdaffe8.mp3" length="68093119" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/27FP4CnZhat1V5Y6s0jO9vMUQg1WYm4sRVbgv5JR7ro/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMjM3/OTE4ZjY3ZjkzY2M5/NzlkMzViMzc5NDA1/Yzk0ZS5qcGc.jpg"/>
      <itunes:duration>2828</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/miekedeketelaere/">Geertrui Mieke de Ketelaere</a> reflects on the uncertain trajectory of AI, whether AI is socially or environmentally sustainable, and using AI to become good ancestors.   </p><p>Mieke joined Kimberly to discuss the current trajectory of AI; uncertainties created by current AI applications; the potent intersection of humanlike AI and heightened social/personal anxiety; Russian nesting dolls (matryoshka) as an analogy for AI systems; challenges with open source AI; the current state of public literacy and regulation; the <a href="https://www.saicc.info/">Safe AI Companion Collective</a>; social and environmental sustainability; expanding our POV beyond human intelligence; and striving to become good ancestors in our use of AI and beyond.  </p><p> </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep62/transcript">here</a>. </p><p><a href="https://www.linkedin.com/in/miekedeketelaere/">Geertrui Mieke de Ketelaere</a> is an engineer, strategic advisor and Adjunct Professor of AI at Vlerick Business School focused on sustainable, ethical, and trustworthy AI. A prolific author, speaker and researcher, Mieke is passionate about building bridges between business, research and government in the domain of AI. Learn more about Mieke’s work here: <a href="http://www.gmdeketelaere.com/">www.gmdeketelaere.com</a> </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/geertrui-mieke-de-ketelaere" img="https://img.transistorcdn.com/4dKCYWiQqdXIcSP8CwpqCq5TI82gslkbxmDnJeWaVXk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8wMjI5/M2YxNjM3MzdlNzFm/OTBiMTA4YzIxYzFm/YmFjNy5wbmc.jpg">Geertrui Mieke de Ketelaere</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/f377eccf/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Safety by Design with Vaishnavi J</title>
      <itunes:episode>61</itunes:episode>
      <podcast:episode>61</podcast:episode>
      <itunes:title>Safety by Design with Vaishnavi J</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">447e14e0-2e23-40f1-8f24-26928c4fb98f</guid>
      <link>https://share.transistor.fm/s/fddc3437</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/vaishnavij/">Vaishnavi J</a> respects youth, advises considering the youth experience in all digital products, and asserts age-appropriate design is an underappreciated business asset.  </p><p>Vaishnavi joined Kimberly to discuss: the spaces youth inhabit online; the four pillars of safety by design; age-appropriate design choices; kids’ unique needs and vulnerabilities; what both digital libertarians and abstentionists get wrong; why great experiences and safety aren’t mutually exclusive; how younger cohorts perceive harm; centering youth experiences; business benefits of age-appropriate design; KOSPA and the duty of care; implications for content policy and product roadmaps; the youth experience as digital table stakes and an engine of growth. </p><p> </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep61/transcript">here</a>. </p><p><a href="https://www.linkedin.com/in/vaishnavij/">Vaishnavi J</a> is the founder and principal of <a href="https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fprotect.checkpoint.com%2Fv2%2Fr01%2F___http%3A%2F%2Fwww.vyanams.com___.YzJ1OnNhc2luc3RpdHV0ZTpjOm86ZTliMDkwNzY5YjY1YTExMjVhNTBhYjRjMWQxNzdjMzA6NzplYjk4OjZiNzU1OWEzY2U3ZTgwNmQzYzk5YmQxOGJmMmY4YWZkM2M4YzczMGMxNjY2ZjE2NGRmNjQyODFjZjRkNmYwZTc6aDpUOk4&amp;data=05%7C02%7CKimberly.Nevala%40sas.com%7C2fa1328b047448cae83b08dd14a1dc3f%7Cb1c14d5c362545b3a4309552373a0c2f%7C0%7C0%7C638689407624601774%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;sdata=%2FPB3ImIXvbz3r7jWyTztPna5OejeNSWbSmAgrmLY4fg%3D&amp;reserved=0">Vyanams Strategies (VYS)</a>, helping companies, civil society, and governments build healthier online communities for young people. VYS leverages extensive experience at leading technology companies to develop tactical product and policy solutions for child safety and privacy. 
These range from product guidance, content policies, operations workflows, trust &amp; safety strategies, and organizational design. </p><p>Additional Resources:  </p><p>Monthly Youth Tech Policy Brief: <a href="https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fprotect.checkpoint.com%2Fv2%2Fr01%2F___https%3A%2F%2Fquire.substack.com___.YzJ1OnNhc2luc3RpdHV0ZTpjOm86NzJkMjIyN2RhNDcxNTczZGMyOThmZGY1ZTdmMmI1MTE6NzowODQyOjE0OWQ0M2IzMDJkNTBhYzM5MmI5MTg3N2YyMTkzNjllZWU2ZGFmNmQ0NjI2OWZlNDZhMDUyNDU0OTAyODk2ZDQ6aDpUOk4&amp;data=05%7C02%7CKimberly.Nevala%40sas.com%7Cf76d205a7096482c2a7808dcf9f9c747%7Cb1c14d5c362545b3a4309552373a0c2f%7C0%7C0%7C638660098439363698%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&amp;sdata=cP5mLb3dQsnA76HOVdjAf2jzgucdg9%2B9lThk7xKmqi0%3D&amp;reserved=0">https://quire.substack.com</a> </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/vaishnavij/">Vaishnavi J</a> respects youth, advises considering the youth experience in all digital products, and asserts age-appropriate design is an underappreciated business asset.  </p><p>Vaishnavi joined Kimberly to discuss: the spaces youth inhabit online; the four pillars of safety by design; age-appropriate design choices; kids’ unique needs and vulnerabilities; what both digital libertarians and abstentionists get wrong; why great experiences and safety aren’t mutually exclusive; how younger cohorts perceive harm; centering youth experiences; business benefits of age-appropriate design; KOSPA and the duty of care; implications for content policy and product roadmaps; the youth experience as digital table stakes and an engine of growth. </p><p> </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep61/transcript">here</a>. </p><p><a href="https://www.linkedin.com/in/vaishnavij/">Vaishnavi J</a> is the founder and principal of <a href="https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fprotect.checkpoint.com%2Fv2%2Fr01%2F___http%3A%2F%2Fwww.vyanams.com___.YzJ1OnNhc2luc3RpdHV0ZTpjOm86ZTliMDkwNzY5YjY1YTExMjVhNTBhYjRjMWQxNzdjMzA6NzplYjk4OjZiNzU1OWEzY2U3ZTgwNmQzYzk5YmQxOGJmMmY4YWZkM2M4YzczMGMxNjY2ZjE2NGRmNjQyODFjZjRkNmYwZTc6aDpUOk4&amp;data=05%7C02%7CKimberly.Nevala%40sas.com%7C2fa1328b047448cae83b08dd14a1dc3f%7Cb1c14d5c362545b3a4309552373a0c2f%7C0%7C0%7C638689407624601774%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;sdata=%2FPB3ImIXvbz3r7jWyTztPna5OejeNSWbSmAgrmLY4fg%3D&amp;reserved=0">Vyanams Strategies (VYS)</a>, helping companies, civil society, and governments build healthier online communities for young people. VYS leverages extensive experience at leading technology companies to develop tactical product and policy solutions for child safety and privacy. 
These range from product guidance, content policies, operations workflows, trust &amp; safety strategies, and organizational design. </p><p>Additional Resources:  </p><p>Monthly Youth Tech Policy Brief: <a href="https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fprotect.checkpoint.com%2Fv2%2Fr01%2F___https%3A%2F%2Fquire.substack.com___.YzJ1OnNhc2luc3RpdHV0ZTpjOm86NzJkMjIyN2RhNDcxNTczZGMyOThmZGY1ZTdmMmI1MTE6NzowODQyOjE0OWQ0M2IzMDJkNTBhYzM5MmI5MTg3N2YyMTkzNjllZWU2ZGFmNmQ0NjI2OWZlNDZhMDUyNDU0OTAyODk2ZDQ6aDpUOk4&amp;data=05%7C02%7CKimberly.Nevala%40sas.com%7Cf76d205a7096482c2a7808dcf9f9c747%7Cb1c14d5c362545b3a4309552373a0c2f%7C0%7C0%7C638660098439363698%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&amp;sdata=cP5mLb3dQsnA76HOVdjAf2jzgucdg9%2B9lThk7xKmqi0%3D&amp;reserved=0">https://quire.substack.com</a> </p>]]>
      </content:encoded>
      <pubDate>Wed, 04 Dec 2024 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/fddc3437/36041d65.mp3" length="69060179" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/6Aupnqgp_k5cRspc8qpGtnwUw8ifLhuuYFlxYyqbVjA/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yNDRj/NzQwYmQ3N2MwNjA2/MDI2OGRkNGQ2YTVl/ODhlYy5qcGc.jpg"/>
      <itunes:duration>2869</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/vaishnavij/">Vaishnavi J</a> respects youth, advises considering the youth experience in all digital products, and asserts age-appropriate design is an underappreciated business asset.  </p><p>Vaishnavi joined Kimberly to discuss: the spaces youth inhabit online; the four pillars of safety by design; age-appropriate design choices; kids’ unique needs and vulnerabilities; what both digital libertarians and abstentionists get wrong; why great experiences and safety aren’t mutually exclusive; how younger cohorts perceive harm; centering youth experiences; business benefits of age-appropriate design; KOSPA and the duty of care; implications for content policy and product roadmaps; the youth experience as digital table stakes and an engine of growth. </p><p> </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep61/transcript">here</a>. </p><p><a href="https://www.linkedin.com/in/vaishnavij/">Vaishnavi J</a> is the founder and principal of <a href="https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fprotect.checkpoint.com%2Fv2%2Fr01%2F___http%3A%2F%2Fwww.vyanams.com___.YzJ1OnNhc2luc3RpdHV0ZTpjOm86ZTliMDkwNzY5YjY1YTExMjVhNTBhYjRjMWQxNzdjMzA6NzplYjk4OjZiNzU1OWEzY2U3ZTgwNmQzYzk5YmQxOGJmMmY4YWZkM2M4YzczMGMxNjY2ZjE2NGRmNjQyODFjZjRkNmYwZTc6aDpUOk4&amp;data=05%7C02%7CKimberly.Nevala%40sas.com%7C2fa1328b047448cae83b08dd14a1dc3f%7Cb1c14d5c362545b3a4309552373a0c2f%7C0%7C0%7C638689407624601774%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;sdata=%2FPB3ImIXvbz3r7jWyTztPna5OejeNSWbSmAgrmLY4fg%3D&amp;reserved=0">Vyanams Strategies (VYS)</a>, helping companies, civil society, and governments build healthier online communities for young people. VYS leverages extensive experience at leading technology companies to develop tactical product and policy solutions for child safety and privacy. 
These range from product guidance, content policies, operations workflows, trust &amp; safety strategies, and organizational design. </p><p>Additional Resources:  </p><p>Monthly Youth Tech Policy Brief: <a href="https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fprotect.checkpoint.com%2Fv2%2Fr01%2F___https%3A%2F%2Fquire.substack.com___.YzJ1OnNhc2luc3RpdHV0ZTpjOm86NzJkMjIyN2RhNDcxNTczZGMyOThmZGY1ZTdmMmI1MTE6NzowODQyOjE0OWQ0M2IzMDJkNTBhYzM5MmI5MTg3N2YyMTkzNjllZWU2ZGFmNmQ0NjI2OWZlNDZhMDUyNDU0OTAyODk2ZDQ6aDpUOk4&amp;data=05%7C02%7CKimberly.Nevala%40sas.com%7Cf76d205a7096482c2a7808dcf9f9c747%7Cb1c14d5c362545b3a4309552373a0c2f%7C0%7C0%7C638660098439363698%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&amp;sdata=cP5mLb3dQsnA76HOVdjAf2jzgucdg9%2B9lThk7xKmqi0%3D&amp;reserved=0">https://quire.substack.com</a> </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/vaishnavi-j" img="https://img.transistorcdn.com/lJ1lHdslpwHOCLNJcv3WuOipQN68Go3ZOhUq36nmKaI/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9jNjZj/YmM1NDQyYjEyOWQ2/MzE4NmIzNjc5YjMw/MWFjNS5qcGc.jpg">Vaishnavi J</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/fddc3437/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Critical Planning with Ron Schmelzer and Kathleen Walch </title>
      <itunes:episode>60</itunes:episode>
      <podcast:episode>60</podcast:episode>
      <itunes:title>Critical Planning with Ron Schmelzer and Kathleen Walch </itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">30cef918-87c5-466c-9861-37cddc88b683</guid>
      <link>https://share.transistor.fm/s/b9f48ebd</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/kathleen-walch-50185112/">Kathleen Walch</a> and <a href="https://www.linkedin.com/in/rschmelzer/">Ron Schmelzer</a> analyze AI patterns and factors hindering adoption, why AI is never ‘set it and forget it’, and the criticality of critical thinking.   </p><p>The dynamic duo behind Cognilytica (now PMI) join Kimberly to discuss: the seven (7) patterns of AI; fears and concerns stymying AI adoption; the tension between top-down and bottom-up AI adoption; the AI value proposition; what differentiates CPMAI from good old-fashioned project management; AI’s Red Queen moment; critical thinking as a uniquely human skill; the DKIUW pyramid and limits of machine understanding; why you can’t sit AI out.  </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep60/transcript">here</a>. </p><p><a href="https://www.linkedin.com/in/kathleen-walch-50185112/">Kathleen Walch</a> and <a href="https://www.linkedin.com/in/rschmelzer/">Ron Schmelzer</a> are the co-founders of <a href="https://www.cognilytica.com/">Cognilytica</a>, an AI research and analyst firm which was acquired by PMI (Project Management Institute) in September 2024. Their work, which includes the CPMAI project management methodology and the top-rated AI Today podcast, focuses on enabling AI adoption and skill development.  
</p><p>Additional Resources:   </p><p>CPMAI certification: <a href="https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fprotect.checkpoint.com%2Fv2%2Fr01%2F___https%3A%2F%2Fcourses.cognilytica.com%2F___.YzJ1OnNhc2luc3RpdHV0ZTpjOm86NGI3NDA2MThiNmRhODA1MzMwNDBkYTM3MWM2ODMyMzE6NzplNzZkOjM5ODQ5NjNiYmEzNGYyZTE3N2FjODQ1N2RiNjhhZWNhNjc2MDQxNWJjNDEyZDA1ZjBhNzRmYWZhMWNmZWJjOWY6aDpGOk4&amp;data=05%7C02%7CKimberly.Nevala%40sas.com%7Caeefcdf664a04bd59fe508dced659505%7Cb1c14d5c362545b3a4309552373a0c2f%7C0%7C0%7C638646267866369953%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&amp;sdata=gKovSMW9v2%2F9NYm1abPLqMNTHBhwhYk8UjhEnyFp7TU%3D&amp;reserved=0">https://courses.cognilytica.com/</a> </p><p>AI Today podcast: <a href="https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fprotect.checkpoint.com%2Fv2%2Fr01%2F___https%3A%2F%2Fwww.cognilytica.com%2Faitoday%2F___.YzJ1OnNhc2luc3RpdHV0ZTpjOm86NGI3NDA2MThiNmRhODA1MzMwNDBkYTM3MWM2ODMyMzE6NzpiODUyOjg1YWY3ZGVmNDYyYjdlOWI3NDIyMDQ2ZWJjOTdmZGNjNmViYTZiMzdiZTMxMzU0MGMzNzY1OGQ0N2U2MTY0NzE6aDpGOk4&amp;data=05%7C02%7CKimberly.Nevala%40sas.com%7Caeefcdf664a04bd59fe508dced659505%7Cb1c14d5c362545b3a4309552373a0c2f%7C0%7C0%7C638646267866392440%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&amp;sdata=ELJjEbiMwa6hbmoN%2Bn7ehn%2FiuYqWj7cH%2BAqJOQfoEdM%3D&amp;reserved=0">https://www.cognilytica.com/aitoday/</a> </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/kathleen-walch-50185112/">Kathleen Walch</a> and <a href="https://www.linkedin.com/in/rschmelzer/">Ron Schmelzer</a> analyze AI patterns and factors hindering adoption, why AI is never ‘set it and forget it’, and the criticality of critical thinking.   </p><p>The dynamic duo behind Cognilytica (now PMI) join Kimberly to discuss: the seven (7) patterns of AI; fears and concerns stymying AI adoption; the tension between top-down and bottom-up AI adoption; the AI value proposition; what differentiates CPMAI from good old-fashioned project management; AI’s Red Queen moment; critical thinking as a uniquely human skill; the DKIUW pyramid and limits of machine understanding; why you can’t sit AI out.  </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep60/transcript">here</a>. </p><p><a href="https://www.linkedin.com/in/kathleen-walch-50185112/">Kathleen Walch</a> and <a href="https://www.linkedin.com/in/rschmelzer/">Ron Schmelzer</a> are the co-founders of <a href="https://www.cognilytica.com/">Cognilytica</a>, an AI research and analyst firm which was acquired by PMI (Project Management Institute) in September 2024. Their work, which includes the CPMAI project management methodology and the top-rated AI Today podcast, focuses on enabling AI adoption and skill development.  
</p><p>Additional Resources:   </p><p>CPMAI certification: <a href="https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fprotect.checkpoint.com%2Fv2%2Fr01%2F___https%3A%2F%2Fcourses.cognilytica.com%2F___.YzJ1OnNhc2luc3RpdHV0ZTpjOm86NGI3NDA2MThiNmRhODA1MzMwNDBkYTM3MWM2ODMyMzE6NzplNzZkOjM5ODQ5NjNiYmEzNGYyZTE3N2FjODQ1N2RiNjhhZWNhNjc2MDQxNWJjNDEyZDA1ZjBhNzRmYWZhMWNmZWJjOWY6aDpGOk4&amp;data=05%7C02%7CKimberly.Nevala%40sas.com%7Caeefcdf664a04bd59fe508dced659505%7Cb1c14d5c362545b3a4309552373a0c2f%7C0%7C0%7C638646267866369953%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&amp;sdata=gKovSMW9v2%2F9NYm1abPLqMNTHBhwhYk8UjhEnyFp7TU%3D&amp;reserved=0">https://courses.cognilytica.com/</a> </p><p>AI Today podcast: <a href="https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fprotect.checkpoint.com%2Fv2%2Fr01%2F___https%3A%2F%2Fwww.cognilytica.com%2Faitoday%2F___.YzJ1OnNhc2luc3RpdHV0ZTpjOm86NGI3NDA2MThiNmRhODA1MzMwNDBkYTM3MWM2ODMyMzE6NzpiODUyOjg1YWY3ZGVmNDYyYjdlOWI3NDIyMDQ2ZWJjOTdmZGNjNmViYTZiMzdiZTMxMzU0MGMzNzY1OGQ0N2U2MTY0NzE6aDpGOk4&amp;data=05%7C02%7CKimberly.Nevala%40sas.com%7Caeefcdf664a04bd59fe508dced659505%7Cb1c14d5c362545b3a4309552373a0c2f%7C0%7C0%7C638646267866392440%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&amp;sdata=ELJjEbiMwa6hbmoN%2Bn7ehn%2FiuYqWj7cH%2BAqJOQfoEdM%3D&amp;reserved=0">https://www.cognilytica.com/aitoday/</a> </p>]]>
      </content:encoded>
      <pubDate>Wed, 20 Nov 2024 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/b9f48ebd/b2147723.mp3" length="69993497" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/_xjNTT_s9JpV4qgxLI2ItDAPFuLnQ5NMJliT2eAa0YE/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lYzUx/YWRhNzkxNjg0ZGJk/M2U3YzhiZjQ2NjMy/N2Q1MS5qcGc.jpg"/>
      <itunes:duration>2902</itunes:duration>
      <itunes:summary>
<![CDATA[<p><a href="https://www.linkedin.com/in/kathleen-walch-50185112/">Kathleen Walch</a> and <a href="https://www.linkedin.com/in/rschmelzer/">Ron Schmelzer</a> analyze AI patterns and factors hindering adoption, why AI is never ‘set it and forget it’, and the criticality of critical thinking.   </p><p>The dynamic duo behind Cognilytica (now PMI) join Kimberly to discuss: the seven (7) patterns of AI; fears and concerns stymying AI adoption; the tension between top-down and bottom-up AI adoption; the AI value proposition; what differentiates CPMAI from good old-fashioned project management; AI’s Red Queen moment; critical thinking as a uniquely human skill; the DKIUW pyramid and limits of machine understanding; why you can’t sit AI out.  </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep60/transcript">here</a>. </p><p><a href="https://www.linkedin.com/in/kathleen-walch-50185112/">Kathleen Walch</a> and <a href="https://www.linkedin.com/in/rschmelzer/">Ron Schmelzer</a> are the co-founders of <a href="https://www.cognilytica.com/">Cognilytica</a>, an AI research and analyst firm which was acquired by PMI (Project Management Institute) in September 2024. Their work, which includes the CPMAI project management methodology and the top-rated AI Today podcast, focuses on enabling AI adoption and skill development.  
</p><p>Additional Resources:   </p><p>CPMAI certification: <a href="https://courses.cognilytica.com/">https://courses.cognilytica.com/</a> </p><p>AI Today podcast: <a href="https://www.cognilytica.com/aitoday/">https://www.cognilytica.com/aitoday/</a> </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/kathleen-walch" img="https://img.transistorcdn.com/jKtuUKvcQNfYqf5WzfiANMsaF8l7ZuAt8fpQT9qPdc8/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS85MzI2/MjdhNTJiNDcwMTFi/NjU3YWU2MWJiZGI2/Nzk5YS5wbmc.jpg">Kathleen Walch</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/ron-schmelzer" img="https://img.transistorcdn.com/98xu9eao9I0bekFa0Kh27nFICzyzycxFkL3YQBBm8w0/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZWUx/M2FmMDc0NmFjMjZk/OWQyMGIxYmVmODdh/OWI5MC5qcGc.jpg">Ron Schmelzer</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/b9f48ebd/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Relating to AI with Dr. Marisa Tschopp </title>
      <itunes:episode>59</itunes:episode>
      <podcast:episode>59</podcast:episode>
      <itunes:title>Relating to AI with Dr. Marisa Tschopp </itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">86a36088-2551-4bef-89cd-b4cd4c4d196c</guid>
      <link>https://share.transistor.fm/s/f2ce69da</link>
      <description>
<![CDATA[<p><a href="https://www.linkedin.com/in/dr-marisa-tschopp-0233a026/">Dr. Marisa Tschopp</a> explores our evolving, often odd, expectations for AI companions while embracing radical empathy, resisting relentless PR and trusting in humanity.  </p><p>Marisa and Kimberly discuss recent research into AI-based conversational agents, the limits of artificial companionship, implications for mental health therapy, the importance of radical empathy and differentiation, why users defy simplistic categorization, corporate incentives and rampant marketing gags, reasons for optimism, and retaining trust in human connections. A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep59/transcript">here</a>. </p><p><a href="https://www.linkedin.com/in/dr-marisa-tschopp-0233a026/">Dr. Marisa Tschopp</a> is a Psychologist, a Human-AI Interaction Researcher at scip AG and an ardent supporter of Women in AI. Marisa’s research focuses on human-AI relationships, trust in AI, agency, behavioral performance assessment of conversational systems (A-IQ), and gender issues in AI.  </p><p>Additional Resources:</p><p>The Impact of Human-AI Relationship Perception on Voice Shopping Intentions in Human Machine Collaboration <a href="https://stars.library.ucf.edu/cgi/viewcontent.cgi?article=1114&amp;context=hmc"><em>Publication</em></a> </p><p>How do users perceive their relationship with conversational AI? <a href="https://cyberpsychology.eu/article/view/21003"><em>Publication</em></a> </p><p>KI als Freundin: Funktioniert eine Chatbot-Beziehung? 
<em>TV Show (German, SRF)</em> </p><p>Friends with AI? It’s complicated! <a href="https://www.ted.com/talks/marisa_tschopp_friends_with_ai_it_s_complicated"><em>TEDxBoston Talk</em></a> </p>]]>
      </description>
      <content:encoded>
<![CDATA[<p><a href="https://www.linkedin.com/in/dr-marisa-tschopp-0233a026/">Dr. Marisa Tschopp</a> explores our evolving, often odd, expectations for AI companions while embracing radical empathy, resisting relentless PR and trusting in humanity.  </p><p>Marisa and Kimberly discuss recent research into AI-based conversational agents, the limits of artificial companionship, implications for mental health therapy, the importance of radical empathy and differentiation, why users defy simplistic categorization, corporate incentives and rampant marketing gags, reasons for optimism, and retaining trust in human connections. A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep59/transcript">here</a>. </p><p><a href="https://www.linkedin.com/in/dr-marisa-tschopp-0233a026/">Dr. Marisa Tschopp</a> is a Psychologist, a Human-AI Interaction Researcher at scip AG and an ardent supporter of Women in AI. Marisa’s research focuses on human-AI relationships, trust in AI, agency, behavioral performance assessment of conversational systems (A-IQ), and gender issues in AI.  </p><p>Additional Resources:</p><p>The Impact of Human-AI Relationship Perception on Voice Shopping Intentions in Human Machine Collaboration <a href="https://stars.library.ucf.edu/cgi/viewcontent.cgi?article=1114&amp;context=hmc"><em>Publication</em></a> </p><p>How do users perceive their relationship with conversational AI? <a href="https://cyberpsychology.eu/article/view/21003"><em>Publication</em></a> </p><p>KI als Freundin: Funktioniert eine Chatbot-Beziehung? 
<em>TV Show (German, SRF)</em> </p><p>Friends with AI? It’s complicated! <a href="https://www.ted.com/talks/marisa_tschopp_friends_with_ai_it_s_complicated"><em>TEDxBoston Talk</em></a> </p>]]>
      </content:encoded>
      <pubDate>Wed, 06 Nov 2024 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/f2ce69da/5f2c5b83.mp3" length="60574996" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/tkOe4nlXmRFOEPsEsdtVcHUYvNaKcBUAY6YRGq-FgWQ/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82NDA2/MjRkNGZjMzA2OTZh/MDk5MWY3OGFhM2U0/YWIwMy5qcGc.jpg"/>
      <itunes:duration>2515</itunes:duration>
      <itunes:summary>
<![CDATA[<p><a href="https://www.linkedin.com/in/dr-marisa-tschopp-0233a026/">Dr. Marisa Tschopp</a> explores our evolving, often odd, expectations for AI companions while embracing radical empathy, resisting relentless PR and trusting in humanity.  </p><p>Marisa and Kimberly discuss recent research into AI-based conversational agents, the limits of artificial companionship, implications for mental health therapy, the importance of radical empathy and differentiation, why users defy simplistic categorization, corporate incentives and rampant marketing gags, reasons for optimism, and retaining trust in human connections. A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep59/transcript">here</a>. </p><p><a href="https://www.linkedin.com/in/dr-marisa-tschopp-0233a026/">Dr. Marisa Tschopp</a> is a Psychologist, a Human-AI Interaction Researcher at scip AG and an ardent supporter of Women in AI. Marisa’s research focuses on human-AI relationships, trust in AI, agency, behavioral performance assessment of conversational systems (A-IQ), and gender issues in AI.  </p><p>Additional Resources:</p><p>The Impact of Human-AI Relationship Perception on Voice Shopping Intentions in Human Machine Collaboration <a href="https://stars.library.ucf.edu/cgi/viewcontent.cgi?article=1114&amp;context=hmc"><em>Publication</em></a> </p><p>How do users perceive their relationship with conversational AI? <a href="https://cyberpsychology.eu/article/view/21003"><em>Publication</em></a> </p><p>KI als Freundin: Funktioniert eine Chatbot-Beziehung? 
<em>TV Show (German, SRF)</em> </p><p>Friends with AI? It’s complicated! <a href="https://www.ted.com/talks/marisa_tschopp_friends_with_ai_it_s_complicated"><em>TEDxBoston Talk</em></a> </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://www.scip.ch/en/?team.mats" img="https://img.transistorcdn.com/8EezJ9QOT9aFXeQd0vPx2duaCA3m2lJf1rW6qQkUikU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDFlZDgxMzgt/NmIxMy00MmJiLThi/ZmQtZWRlZmFmZmYx/NTM2LzE2NzMzNzgy/OTYtaW1hZ2UuanBn.jpg">Marisa Tschopp</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/f2ce69da/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Technical Morality with John Danaher</title>
      <itunes:episode>58</itunes:episode>
      <podcast:episode>58</podcast:episode>
      <itunes:title>Technical Morality with John Danaher</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">12abd62a-7786-4a71-a632-eeefc05c5c06</guid>
      <link>https://share.transistor.fm/s/89aac0d3</link>
      <description>
<![CDATA[<p><a href="https://www.linkedin.com/in/john-danaher-1935b81/">John Danaher</a> assesses how AI may reshape ethical and social norms, minds the anticipatory gap in regulation, and applies the MVPP to decide against digitizing himself.  </p><p>John parlayed an interest in science fiction into researching legal philosophy, emerging technology, and society. Flipping the script on ethical assessment, John identifies six (6) mechanisms by which technology may reshape ethical principles and social norms. John further illustrates the impact AI can have on decision sets and relationships. We then discuss the dilemma articulated by the aptly named anticipatory gap, in which the effort required to regulate nascent tech is proportional to our understanding of its ultimate effects.  </p><p>Finally, we turn our attention to the rapid rise of digital duplicates. John provides examples and proposes a Minimally Viable Permissibility Principle (MVPP) for evaluating the use of digital duplicates. Emphasizing the difficulty of mitigating the risks posed after a digital duplicate is let loose in the wild, John declines the opportunity to digitally duplicate himself.  </p><p><a href="https://www.linkedin.com/in/john-danaher-1935b81/">John Danaher</a> is a Sr. Lecturer in Ethics at the NUI Galway School of Law. A prolific scholar, he is the author of Automation and Utopia: Human Flourishing in a World Without Work (Harvard University Press, 2019). Papers referenced in this episode include <a href="https://link.springer.com/article/10.1007/s43681-024-00513-7">The Ethics of Personalized Digital Duplicates: A Minimal Viability Principle</a> and <a href="https://ieeexplore.ieee.org/document/10556813?source=">How Technology Alters Morality and Why It Matters</a>. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep58/transcript">here</a>. </p>]]>
      </description>
      <content:encoded>
<![CDATA[<p><a href="https://www.linkedin.com/in/john-danaher-1935b81/">John Danaher</a> assesses how AI may reshape ethical and social norms, minds the anticipatory gap in regulation, and applies the MVPP to decide against digitizing himself.  </p><p>John parlayed an interest in science fiction into researching legal philosophy, emerging technology, and society. Flipping the script on ethical assessment, John identifies six (6) mechanisms by which technology may reshape ethical principles and social norms. John further illustrates the impact AI can have on decision sets and relationships. We then discuss the dilemma articulated by the aptly named anticipatory gap, in which the effort required to regulate nascent tech is proportional to our understanding of its ultimate effects.  </p><p>Finally, we turn our attention to the rapid rise of digital duplicates. John provides examples and proposes a Minimally Viable Permissibility Principle (MVPP) for evaluating the use of digital duplicates. Emphasizing the difficulty of mitigating the risks posed after a digital duplicate is let loose in the wild, John declines the opportunity to digitally duplicate himself.  </p><p><a href="https://www.linkedin.com/in/john-danaher-1935b81/">John Danaher</a> is a Sr. Lecturer in Ethics at the NUI Galway School of Law. A prolific scholar, he is the author of Automation and Utopia: Human Flourishing in a World Without Work (Harvard University Press, 2019). Papers referenced in this episode include <a href="https://link.springer.com/article/10.1007/s43681-024-00513-7">The Ethics of Personalized Digital Duplicates: A Minimal Viability Principle</a> and <a href="https://ieeexplore.ieee.org/document/10556813?source=">How Technology Alters Morality and Why It Matters</a>. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep58/transcript">here</a>. </p>]]>
      </content:encoded>
      <pubDate>Wed, 25 Sep 2024 09:03:42 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/89aac0d3/d889bb03.mp3" length="66499242" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/olV4yiuIVeZk_-9gwJYVWDsnVGupLU0UEqHmDVYBBDk/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81MmEw/NGY3NzhjMjVhZWNh/ODY1NmMyMjRjMTdl/MzZmNC5qcGc.jpg"/>
      <itunes:duration>2763</itunes:duration>
      <itunes:summary>
<![CDATA[<p><a href="https://www.linkedin.com/in/john-danaher-1935b81/">John Danaher</a> assesses how AI may reshape ethical and social norms, minds the anticipatory gap in regulation, and applies the MVPP to decide against digitizing himself.  </p><p>John parlayed an interest in science fiction into researching legal philosophy, emerging technology, and society. Flipping the script on ethical assessment, John identifies six (6) mechanisms by which technology may reshape ethical principles and social norms. John further illustrates the impact AI can have on decision sets and relationships. We then discuss the dilemma articulated by the aptly named anticipatory gap, in which the effort required to regulate nascent tech is proportional to our understanding of its ultimate effects.  </p><p>Finally, we turn our attention to the rapid rise of digital duplicates. John provides examples and proposes a Minimally Viable Permissibility Principle (MVPP) for evaluating the use of digital duplicates. Emphasizing the difficulty of mitigating the risks posed after a digital duplicate is let loose in the wild, John declines the opportunity to digitally duplicate himself.  </p><p><a href="https://www.linkedin.com/in/john-danaher-1935b81/">John Danaher</a> is a Sr. Lecturer in Ethics at the NUI Galway School of Law. A prolific scholar, he is the author of Automation and Utopia: Human Flourishing in a World Without Work (Harvard University Press, 2019). Papers referenced in this episode include <a href="https://link.springer.com/article/10.1007/s43681-024-00513-7">The Ethics of Personalized Digital Duplicates: A Minimal Viability Principle</a> and <a href="https://ieeexplore.ieee.org/document/10556813?source=">How Technology Alters Morality and Why It Matters</a>. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep58/transcript">here</a>. </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/john-danaher" img="https://img.transistorcdn.com/6VuBL6vnnWySMycwU4mOqCFZTflb-huyhAk2o9wWUPU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81ZTdj/ZTQ3MDk3ZDE0NTYx/YWRhOTc1ZDc5NTY5/MzQzZS5qcGc.jpg">John Danaher</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/89aac0d3/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Artificial Empathy with Ben Bland</title>
      <itunes:episode>57</itunes:episode>
      <podcast:episode>57</podcast:episode>
      <itunes:title>Artificial Empathy with Ben Bland</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">0c55275e-fecc-4742-a576-ec71d512cc4c</guid>
      <link>https://share.transistor.fm/s/6a9cf2ab</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/benbland/">Ben Bland</a> expressively explores emotive AI’s shaky scientific underpinnings, the gap between reality and perception, popular applications, and critical apprehensions. </p><p>Ben exposes the scientific contention surrounding human emotion. He talks terms (emotive? empathic? not telepathic!) and outlines a spectrum of emotive applications. We discuss the powerful, often subtle, and sometimes insidious ways emotion can be leveraged. Ben explains the negative effects of perpetual positivity and why drawing clear red lines around the tech is difficult. </p><p>He also addresses the qualitative sea change brought about by large language models (LLMs), implicit vs explicit design and commercial objectives. Noting that the social and psychological impacts of emotive AI systems have been poorly explored, he muses about the potential to actively evolve your machine’s emotional capability. </p><p>Ben confronts the challenges of defining standards when the language is tricky, the science is shaky, and applications are proliferating. Lastly, Ben jazzes up empathy as a human superpower. While optimistic about empathic AI’s potential, he counsels proceeding with caution. </p><p><a href="https://www.linkedin.com/in/benbland/">Ben Bland</a> is an independent consultant in ethical innovation. An active community contributor, Ben is the Chair of the <a href="https://standards.ieee.org/ieee/7014/7648/?trk=public_post_comment-text">IEEE P7014 </a>Standard for Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems and Vice-Chair of <a href="https://standards.ieee.org/ieee/7014.1/11609/">IEEE P7014.1</a> Recommended Practice for Ethical Considerations of Emulated Empathy in Partner-based General-Purpose Artificial Intelligence Systems.</p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep57/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/benbland/">Ben Bland</a> expressively explores emotive AI’s shaky scientific underpinnings, the gap between reality and perception, popular applications, and critical apprehensions. </p><p>Ben exposes the scientific contention surrounding human emotion. He talks terms (emotive? empathic? not telepathic!) and outlines a spectrum of emotive applications. We discuss the powerful, often subtle, and sometimes insidious ways emotion can be leveraged. Ben explains the negative effects of perpetual positivity and why drawing clear red lines around the tech is difficult. </p><p>He also addresses the qualitative sea change brought about by large language models (LLMs), implicit vs explicit design and commercial objectives. Noting that the social and psychological impacts of emotive AI systems have been poorly explored, he muses about the potential to actively evolve your machine’s emotional capability. </p><p>Ben confronts the challenges of defining standards when the language is tricky, the science is shaky, and applications are proliferating. Lastly, Ben jazzes up empathy as a human superpower. While optimistic about empathic AI’s potential, he counsels proceeding with caution. </p><p><a href="https://www.linkedin.com/in/benbland/">Ben Bland</a> is an independent consultant in ethical innovation. An active community contributor, Ben is the Chair of the <a href="https://standards.ieee.org/ieee/7014/7648/?trk=public_post_comment-text">IEEE P7014 </a>Standard for Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems and Vice-Chair of <a href="https://standards.ieee.org/ieee/7014.1/11609/">IEEE P7014.1</a> Recommended Practice for Ethical Considerations of Emulated Empathy in Partner-based General-Purpose Artificial Intelligence Systems.</p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep57/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 11 Sep 2024 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/6a9cf2ab/4775cea1.mp3" length="66779267" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/69_HP0K9fa1BnNxdT7JijPVKfC_eYlGnOxYcBBV5CFM/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xNDlm/NTkwNGVjNzA2NGQz/ZmE1OGFjZWZhZWRk/YWJjMC5qcGc.jpg"/>
      <itunes:duration>2782</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/benbland/">Ben Bland</a> expressively explores emotive AI’s shaky scientific underpinnings, the gap between reality and perception, popular applications, and critical apprehensions. </p><p>Ben exposes the scientific contention surrounding human emotion. He talks terms (emotive? empathic? not telepathic!) and outlines a spectrum of emotive applications. We discuss the powerful, often subtle, and sometimes insidious ways emotion can be leveraged. Ben explains the negative effects of perpetual positivity and why drawing clear red lines around the tech is difficult. </p><p>He also addresses the qualitative sea change brought about by large language models (LLMs), implicit vs explicit design and commercial objectives. Noting that the social and psychological impacts of emotive AI systems have been poorly explored, he muses about the potential to actively evolve your machine’s emotional capability. </p><p>Ben confronts the challenges of defining standards when the language is tricky, the science is shaky, and applications are proliferating. Lastly, Ben jazzes up empathy as a human superpower. While optimistic about empathic AI’s potential, he counsels proceeding with caution. </p><p><a href="https://www.linkedin.com/in/benbland/">Ben Bland</a> is an independent consultant in ethical innovation. An active community contributor, Ben is the Chair of the <a href="https://standards.ieee.org/ieee/7014/7648/?trk=public_post_comment-text">IEEE P7014 </a>Standard for Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems and Vice-Chair of <a href="https://standards.ieee.org/ieee/7014.1/11609/">IEEE P7014.1</a> Recommended Practice for Ethical Considerations of Emulated Empathy in Partner-based General-Purpose Artificial Intelligence Systems.</p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep57/transcript">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/ben-bland" img="https://img.transistorcdn.com/B9F_DtgKXaqWTvFGh5ZEQ_dqhVUDWe-guYNYl9MPDdo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9hYzJl/NDc1OWFmMTJkMzNi/YWQwZTg3NjVhOWQx/OGMzNi5qcGc.jpg">Ben Bland</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/6a9cf2ab/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>RAGging on Graphs with Philip Rathle</title>
      <itunes:episode>56</itunes:episode>
      <podcast:episode>56</podcast:episode>
      <itunes:title>RAGging on Graphs with Philip Rathle</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">8d147ff8-ff97-4544-945b-ed7ce9db3652</guid>
      <link>https://share.transistor.fm/s/1cec48df</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/prathle/">Philip Rathle</a> traverses from knowledge graphs to LLMs and illustrates how loading the dice with GraphRAG enhances deterministic reasoning, explainability and agency. </p><p>Philip explains why knowledge graphs are a natural fit for capturing data about real-world systems. Starting with Kevin Bacon, he identifies many ‘graphy’ problems confronting us today. Philip then describes how interconnected systems benefit from the dynamism and data network effects afforded by knowledge graphs. </p><p>Next, Philip provides a primer on how Retrieval Augmented Generation (RAG) loads the dice for large language models (LLMs). He also differentiates between vector- and graph-based RAG. Along the way, we discuss the nature and locus of reasoning (or lack thereof) in LLM systems. Philip articulates the benefits of GraphRAG including deterministic reasoning, fine-grained access control and explainability. He also ruminates on graphs as a bridge to human agency as graphs can be reasoned on by both humans and machines. Lastly, Philip shares what is happening now and next in GraphRAG applications and beyond. </p><p><a href="https://www.linkedin.com/in/prathle/">Philip Rathle</a> is the Chief Technology Officer (CTO) at <a href="https://neo4j.com/">Neo4j</a>. Philip was a key contributor to the development of the <a href="https://www.gqlstandards.org/">GQL standard</a> and recently authored <a href="https://neo4j.com/blog/graphrag-manifesto/">The GraphRAG Manifesto: Adding Knowledge to GenAI</a> (neo4j.com), a go-to resource for all things GraphRAG. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep56/transcript">here</a>. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/prathle/">Philip Rathle</a> traverses from knowledge graphs to LLMs and illustrates how loading the dice with GraphRAG enhances deterministic reasoning, explainability and agency. </p><p>Philip explains why knowledge graphs are a natural fit for capturing data about real-world systems. Starting with Kevin Bacon, he identifies many ‘graphy’ problems confronting us today. Philip then describes how interconnected systems benefit from the dynamism and data network effects afforded by knowledge graphs. </p><p>Next, Philip provides a primer on how Retrieval Augmented Generation (RAG) loads the dice for large language models (LLMs). He also differentiates between vector- and graph-based RAG. Along the way, we discuss the nature and locus of reasoning (or lack thereof) in LLM systems. Philip articulates the benefits of GraphRAG including deterministic reasoning, fine-grained access control and explainability. He also ruminates on graphs as a bridge to human agency as graphs can be reasoned on by both humans and machines. Lastly, Philip shares what is happening now and next in GraphRAG applications and beyond. </p><p><a href="https://www.linkedin.com/in/prathle/">Philip Rathle</a> is the Chief Technology Officer (CTO) at <a href="https://neo4j.com/">Neo4j</a>. Philip was a key contributor to the development of the <a href="https://www.gqlstandards.org/">GQL standard</a> and recently authored <a href="https://neo4j.com/blog/graphrag-manifesto/">The GraphRAG Manifesto: Adding Knowledge to GenAI</a> (neo4j.com), a go-to resource for all things GraphRAG. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep56/transcript">here</a>. </p>]]>
      </content:encoded>
      <pubDate>Wed, 28 Aug 2024 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/1cec48df/53cdb307.mp3" length="72070056" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/wYb6NEVAze8K76yhpeEtqjzlIO8d1XFpYQ-_VhABB8E/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8wNmM0/ODgxNDIzMjY0YmEy/ZmVjZDcxNzAyNmE1/MGZjNS5qcGc.jpg"/>
      <itunes:duration>2973</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/prathle/">Philip Rathle</a> traverses from knowledge graphs to LLMs and illustrates how loading the dice with GraphRAG enhances deterministic reasoning, explainability and agency. </p><p>Philip explains why knowledge graphs are a natural fit for capturing data about real-world systems. Starting with Kevin Bacon, he identifies many ‘graphy’ problems confronting us today. Philip then describes how interconnected systems benefit from the dynamism and data network effects afforded by knowledge graphs. </p><p>Next, Philip provides a primer on how Retrieval Augmented Generation (RAG) loads the dice for large language models (LLMs). He also differentiates between vector- and graph-based RAG. Along the way, we discuss the nature and locus of reasoning (or lack thereof) in LLM systems. Philip articulates the benefits of GraphRAG including deterministic reasoning, fine-grained access control and explainability. He also ruminates on graphs as a bridge to human agency as graphs can be reasoned on by both humans and machines. Lastly, Philip shares what is happening now and next in GraphRAG applications and beyond. </p><p><a href="https://www.linkedin.com/in/prathle/">Philip Rathle</a> is the Chief Technology Officer (CTO) at <a href="https://neo4j.com/">Neo4j</a>. Philip was a key contributor to the development of the <a href="https://www.gqlstandards.org/">GQL standard</a> and recently authored <a href="https://neo4j.com/blog/graphrag-manifesto/">The GraphRAG Manifesto: Adding Knowledge to GenAI</a> (neo4j.com), a go-to resource for all things GraphRAG. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep56/transcript">here</a>. </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/philip-rathle" img="https://img.transistorcdn.com/FNAoTXyRcHTZKLscfR0hY8dj6hkoX_va2BeODKMd32E/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9mOTY4/YmZkMGY2YTUxZmI2/MTAwZWNjNDA1Zjll/MThiMi5qcGc.jpg">Philip Rathle</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/1cec48df/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Working with AI with Matthew Scherer</title>
      <itunes:episode>55</itunes:episode>
      <podcast:episode>55</podcast:episode>
      <itunes:title>Working with AI with Matthew Scherer</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2602a668-1884-4c6a-9eaf-bad40185e56d</guid>
      <link>https://share.transistor.fm/s/1dc1160f</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/matthewus/">Matthew Scherer</a> makes the case for bottom-up AI adoption, being OK with not using AI, innovation as a relative good, and transparently safeguarding workers’ rights. </p><p>Matthew champions a worker-led approach to AI adoption in the workplace. He traverses the slippery slope from safety to surveillance and guards against unnecessarily intrusive solutions. </p><p>Matthew then illustrates why AI isn’t great at making employment decisions, even in objectively data-rich environments such as the NBA. He also addresses the intractable problem of bias in hiring and flawed comparisons between humans and AI. We discuss the unquantifiable dynamics of human interactions and being OK with our inability to automate hiring and firing. </p><p>Matthew explains how the patchwork of emerging privacy regulations reflects cultural norms towards workers. He invokes the Ford Pinto and the Titan submersible catastrophe when challenging the concept of innovation as an intrinsic good. Matthew then makes the case for transparency as a gateway to enforcing existing civil rights and laws. </p><p><a href="https://www.linkedin.com/in/matthewus/">Matthew Scherer</a> is a Senior Policy Counsel for Workers' Rights and Technology at the <a href="https://cdt.org/">Center for Democracy and Technology</a> (CDT). He studies how emerging technologies affect workers in the workplace and labor market. Matt is also an Advisor for the <a href="https://icaad.ngo/">International Center for Advocates Against Discrimination</a>. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep55/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/matthewus/">Matthew Scherer</a> makes the case for bottom-up AI adoption, being OK with not using AI, innovation as a relative good, and transparently safeguarding workers’ rights. </p><p>Matthew champions a worker-led approach to AI adoption in the workplace. He traverses the slippery slope from safety to surveillance and guards against unnecessarily intrusive solutions. </p><p>Matthew then illustrates why AI isn’t great at making employment decisions, even in objectively data-rich environments such as the NBA. He also addresses the intractable problem of bias in hiring and flawed comparisons between humans and AI. We discuss the unquantifiable dynamics of human interactions and being OK with our inability to automate hiring and firing. </p><p>Matthew explains how the patchwork of emerging privacy regulations reflects cultural norms towards workers. He invokes the Ford Pinto and the Titan submersible catastrophe when challenging the concept of innovation as an intrinsic good. Matthew then makes the case for transparency as a gateway to enforcing existing civil rights and laws. </p><p><a href="https://www.linkedin.com/in/matthewus/">Matthew Scherer</a> is a Senior Policy Counsel for Workers' Rights and Technology at the <a href="https://cdt.org/">Center for Democracy and Technology</a> (CDT). He studies how emerging technologies affect workers in the workplace and labor market. Matt is also an Advisor for the <a href="https://icaad.ngo/">International Center for Advocates Against Discrimination</a>. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep55/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 14 Aug 2024 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/1dc1160f/e9884936.mp3" length="85761661" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/HgnZpymesDfh8PNUy7IMUCVRNElSR9f92KXpRvvFnrU/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS80OGJm/MTAxNTNjYjhmYjVh/MTQ0YmNiODg1MDdh/NWIyZS5qcGc.jpg"/>
      <itunes:duration>3530</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/matthewus/">Matthew Scherer</a> makes the case for bottom-up AI adoption, being OK with not using AI, innovation as a relative good, and transparently safeguarding workers’ rights. </p><p>Matthew champions a worker-led approach to AI adoption in the workplace. He traverses the slippery slope from safety to surveillance and guards against unnecessarily intrusive solutions. </p><p>Matthew then illustrates why AI isn’t great at making employment decisions, even in objectively data-rich environments such as the NBA. He also addresses the intractable problem of bias in hiring and flawed comparisons between humans and AI. We discuss the unquantifiable dynamics of human interactions and being OK with our inability to automate hiring and firing. </p><p>Matthew explains how the patchwork of emerging privacy regulations reflects cultural norms towards workers. He invokes the Ford Pinto and the Titan submersible catastrophe when challenging the concept of innovation as an intrinsic good. Matthew then makes the case for transparency as a gateway to enforcing existing civil rights and laws. </p><p><a href="https://www.linkedin.com/in/matthewus/">Matthew Scherer</a> is a Senior Policy Counsel for Workers' Rights and Technology at the <a href="https://cdt.org/">Center for Democracy and Technology</a> (CDT). He studies how emerging technologies affect workers in the workplace and labor market. Matt is also an Advisor for the <a href="https://icaad.ngo/">International Center for Advocates Against Discrimination</a>. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep55/transcript">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/matthew-scherer" img="https://img.transistorcdn.com/Myq8nWIyjAiXt52KckFYbbXoPPBGalG_NVs8uqBMeFY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84ZTk5/MjZkMjEzZWY2MTM1/ZDdmNGE1MDE5MWRh/YWFjOS5qcGVn.jpg">Matthew Scherer</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/1dc1160f/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Chief Data Concerns with Heidi Lanford</title>
      <itunes:episode>54</itunes:episode>
      <podcast:episode>54</podcast:episode>
      <itunes:title>Chief Data Concerns with Heidi Lanford</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">cf3a03f0-2cb1-43e6-a2ce-e556587c3220</guid>
      <link>https://share.transistor.fm/s/49241559</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/heidilanford/">Heidi Lanford</a> connects data to cocktails and campaigns while considering the nature of data disruption, getting from analytics to AI, and using data with confidence.</p><p>Heidi studied mathematics and statistics and never looked back. Reflecting on analytics then and now, she confirms the appetite for data has never been higher. Yet adoption, momentum and focus remain evergreen barriers. Heidi issues a cocktail party challenge while discussing the core competencies of effective data leaders.</p><p>Heidi believes data and CDOs are disruptive by nature. But this only matters if your business incentives are properly aligned. She revels in agile experimentation while counseling that speed is not enough. We discuss how good old-fashioned analytics put the right pressure on the foundational data needed for AI. </p><p>Heidi then campaigns for endemic data literacy. Along the way she pans JIT holiday training and promotes confident decision making as the metric that matters. Never saying never, Heidi celebrates human experts and the spotlight AI is shining on data.</p><p><a href="https://www.linkedin.com/in/heidilanford/">Heidi Lanford</a> is a Global Chief Data &amp; Analytics Officer who has served as Chief Data Officer (CDO) at the Fitch Group and VP of Enterprise Data &amp; Analytics at Red Hat (IBM). In 2023, Heidi co-founded two AI startups: <a href="https://livefire.ai/">LiveFire AI</a> and AIQScore. Heidi serves as a Board Member at the University of Virginia School of Data Science, is a Founding Board Member of the <a href="https://www.dataleadershipcollaborative.com/">Data Leadership Collaborative</a>, and is an Advisor to Domino Data Labs and Linea. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep54/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/heidilanford/">Heidi Lanford</a> connects data to cocktails and campaigns while considering the nature of data disruption, getting from analytics to AI, and using data with confidence.</p><p>Heidi studied mathematics and statistics and never looked back. Reflecting on analytics then and now, she confirms the appetite for data has never been higher. Yet adoption, momentum and focus remain evergreen barriers. Heidi issues a cocktail party challenge while discussing the core competencies of effective data leaders.</p><p>Heidi believes data and CDOs are disruptive by nature. But this only matters if your business incentives are properly aligned. She revels in agile experimentation while counseling that speed is not enough. We discuss how good old-fashioned analytics put the right pressure on the foundational data needed for AI. </p><p>Heidi then campaigns for endemic data literacy. Along the way she pans JIT holiday training and promotes confident decision making as the metric that matters. Never saying never, Heidi celebrates human experts and the spotlight AI is shining on data.</p><p><a href="https://www.linkedin.com/in/heidilanford/">Heidi Lanford</a> is a Global Chief Data &amp; Analytics Officer who has served as Chief Data Officer (CDO) at the Fitch Group and VP of Enterprise Data &amp; Analytics at Red Hat (IBM). In 2023, Heidi co-founded two AI startups: <a href="https://livefire.ai/">LiveFire AI</a> and AIQScore. Heidi serves as a Board Member at the University of Virginia School of Data Science, is a Founding Board Member of the <a href="https://www.dataleadershipcollaborative.com/">Data Leadership Collaborative</a>, and is an Advisor to Domino Data Labs and Linea. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep54/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 03 Jul 2024 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/49241559/2c92e8c9.mp3" length="72118564" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/CWVcQNAWNyINISK8e1ZP4ittJL4I1C348XoX53zle2g/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xZmEz/M2Y3YThkYmI0NTg4/NTRjZGE2YjBjM2Uw/MDc5OS5qcGc.jpg"/>
      <itunes:duration>2973</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/heidilanford/">Heidi Lanford</a> connects data to cocktails and campaigns while considering the nature of data disruption, getting from analytics to AI, and using data with confidence.</p><p>Heidi studied mathematics and statistics and never looked back. Reflecting on analytics then and now, she confirms the appetite for data has never been higher. Yet adoption, momentum and focus remain evergreen barriers. Heidi issues a cocktail party challenge while discussing the core competencies of effective data leaders.</p><p>Heidi believes data and CDOs are disruptive by nature. But this only matters if your business incentives are properly aligned. She revels in agile experimentation while counseling that speed is not enough. We discuss how good old-fashioned analytics put the right pressure on the foundational data needed for AI. </p><p>Heidi then campaigns for endemic data literacy. Along the way she pans JIT holiday training and promotes confident decision making as the metric that matters. Never saying never, Heidi celebrates human experts and the spotlight AI is shining on data.</p><p><a href="https://www.linkedin.com/in/heidilanford/">Heidi Lanford</a> is a Global Chief Data &amp; Analytics Officer who has served as Chief Data Officer (CDO) at the Fitch Group and VP of Enterprise Data &amp; Analytics at Red Hat (IBM). In 2023, Heidi co-founded two AI startups: <a href="https://livefire.ai/">LiveFire AI</a> and AIQScore. Heidi serves as a Board Member at the University of Virginia School of Data Science, is a Founding Board Member of the <a href="https://www.dataleadershipcollaborative.com/">Data Leadership Collaborative</a>, and is an Advisor to Domino Data Labs and Linea. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep54/transcript">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/heidi-lanford" img="https://img.transistorcdn.com/GnU9xBe9ALf7eN2ktzLXwL8lywm_GwGvhmLjtuPgvA0/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zNjMx/NDYxZjliY2U0NmY5/YTI0NTZhZWYzMjAy/OWJmMi5KUEc.jpg">Heidi Lanford</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/49241559/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Ethical Control and Trust with Marianna B. Ganapini</title>
      <itunes:episode>53</itunes:episode>
      <podcast:episode>53</podcast:episode>
      <itunes:title>Ethical Control and Trust with Marianna B. Ganapini</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b6c13dd6-3dba-4ba4-a597-2b5c42299b5a</guid>
      <link>https://share.transistor.fm/s/78ba11c3</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/marianna-b-ganapini-769624116/">Marianna B. Ganapini</a> contemplates AI nudging, entropy as a bellwether of risk, accessible ethical assessment, ethical ROI, the limits of trust and irrational beliefs. </p><p>Marianna studies how AI-driven nudging ups the ethical ante relative to autonomy and decision-making. This is a solvable problem that may still prove difficult to regulate. She posits that the level of entropy within a system correlates with risks seen and unseen. We discuss the relationship between risk and harm and why a lack of knowledge imbues moral responsibility. Marianna describes how macro-level assessments can effectively take an AI system’s temperature (risk-wise). Addressing the evolving responsible AI discourse, Marianna asserts that limiting trust to moral agents is overly restrictive. The real problem is conflating trust between humans with the trust afforded any number of entities from your pet to your Roomba. Marianna also cautions against hastily judging another’s beliefs, even when they overhype AI. Acknowledging progress, Marianna advocates for increased interdisciplinary efforts and ethical certifications. </p><p><a href="https://www.linkedin.com/in/marianna-b-ganapini-769624116/">Marianna B. Ganapini</a> is a Professor of Philosophy and Founder of <a href="https://logicanow.com/">Logica.Now</a>, a consultancy which seeks to educate and engage organizations in ethical AI inquiry. She is also a Faculty Director at the <a href="https://montrealethics.ai/">Montreal AI Ethics Institute</a> and Visiting Scholar at the <a href="https://techethicslab.nd.edu/">ND-IBM Tech Ethics Lab</a>. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep53/transcript">here</a>. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/marianna-b-ganapini-769624116/">Marianna B. Ganapini</a> contemplates AI nudging, entropy as a bellwether of risk, accessible ethical assessment, ethical ROI, the limits of trust and irrational beliefs. </p><p>Marianna studies how AI-driven nudging ups the ethical ante relative to autonomy and decision-making. This is a solvable problem that may still prove difficult to regulate. She posits that the level of entropy within a system correlates with risks seen and unseen. We discuss the relationship between risk and harm and why a lack of knowledge imbues moral responsibility. Marianna describes how macro-level assessments can effectively take an AI system’s temperature (risk-wise). Addressing the evolving responsible AI discourse, Marianna asserts that limiting trust to moral agents is overly restrictive. The real problem is conflating trust between humans with the trust afforded any number of entities from your pet to your Roomba. Marianna also cautions against hastily judging another’s beliefs, even when they overhype AI. Acknowledging progress, Marianna advocates for increased interdisciplinary efforts and ethical certifications. </p><p><a href="https://www.linkedin.com/in/marianna-b-ganapini-769624116/">Marianna B. Ganapini</a> is a Professor of Philosophy and Founder of <a href="https://logicanow.com/">Logica.Now</a>, a consultancy which seeks to educate and engage organizations in ethical AI inquiry. She is also a Faculty Director at the <a href="https://montrealethics.ai/">Montreal AI Ethics Institute</a> and Visiting Scholar at the <a href="https://techethicslab.nd.edu/">ND-IBM Tech Ethics Lab</a>. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep53/transcript">here</a>. </p>]]>
      </content:encoded>
      <pubDate>Wed, 19 Jun 2024 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/78ba11c3/a35f1b33.mp3" length="85615013" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/k4-1A8Z2td1VyZ5S78sDYS82KdEo9Zow72bHJ0nFMOc/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lN2U0/MWVlNjEyMjI3Y2Qz/YjYwYmQ1YzBlMGQz/MmE5YS5qcGc.jpg"/>
      <itunes:duration>3521</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/marianna-b-ganapini-769624116/">Marianna B. Ganapini</a> contemplates AI nudging, entropy as a bellwether of risk, accessible ethical assessment, ethical ROI, the limits of trust and irrational beliefs. </p><p>Marianna studies how AI-driven nudging ups the ethical ante relative to autonomy and decision-making. This is a solvable problem that may still prove difficult to regulate. She posits that the level of entropy within a system correlates with risks seen and unseen. We discuss the relationship between risk and harm and why a lack of knowledge imbues moral responsibility. Marianna describes how macro-level assessments can effectively take an AI system’s temperature (risk-wise). Addressing the evolving responsible AI discourse, Marianna asserts that limiting trust to moral agents is overly restrictive. The real problem is conflating trust between humans with the trust afforded any number of entities from your pet to your Roomba. Marianna also cautions against hastily judging another’s beliefs, even when they overhype AI. Acknowledging progress, Marianna advocates for increased interdisciplinary efforts and ethical certifications. </p><p><a href="https://www.linkedin.com/in/marianna-b-ganapini-769624116/">Marianna B. Ganapini</a> is a Professor of Philosophy and Founder of <a href="https://logicanow.com/">Logica.Now</a>, a consultancy which seeks to educate and engage organizations in ethical AI inquiry. She is also a Faculty Director at the <a href="https://montrealethics.ai/">Montreal AI Ethics Institute</a> and Visiting Scholar at the <a href="https://techethicslab.nd.edu/">ND-IBM Tech Ethics Lab</a>. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep53/transcript">here</a>. </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/marianna-b-ganapini" img="https://img.transistorcdn.com/5m5VwQUarLGh1bCV83fDuB_H7cBUuv54LHMotzxT4iE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iYWZm/MzE0YmNjZjk4MDFh/YmUwZjZlMDg5Nzli/ZGQ4YS5qcGVn.jpg">Marianna B. Ganapini</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/78ba11c3/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Policy and Practice with Miriam Vogel</title>
      <itunes:episode>52</itunes:episode>
      <podcast:episode>52</podcast:episode>
      <itunes:title>Policy and Practice with Miriam Vogel</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">10114193-60c7-4bf0-b146-955efc873abe</guid>
      <link>https://share.transistor.fm/s/6c2f53f6</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/miriamvogelai/">Miriam Vogel</a> disputes AI is lawless, endorses good AI hygiene, reviews regulatory progress and pitfalls, boosts literacy and diversity, and remains net positive on AI. </p><p><a href="https://www.linkedin.com/in/miriamvogelai/">Miriam Vogel</a> traverses her unforeseen path from in-house counsel to public policy innovator. Miriam acknowledges that AI systems raise some novel questions but reiterates there is much to learn from existing policies and laws. Drawing analogies to flying and driving, Miriam demonstrates the need for both standardized and context-specific guidance. </p><p>Miriam and Kimberly then discuss what constitutes good AI hygiene, what meaningful transparency looks like, and why a multi-disciplinary mindset matters. While reiterating the business value of beneficial AI, Miriam notes businesses are now on notice regarding their AI liability. She is clear-sighted regarding the complexity, but views regulation done right as a means to spur innovation and trust. In that vein, Miriam outlines the progress to date and work still to come to enact federal AI policies and raise our collective AI literacy. Lastly, Miriam raises questions everyone should ask to ensure we each benefit from the opportunities AI presents. </p><p><a href="https://www.linkedin.com/in/miriamvogelai/">Miriam Vogel</a> is the President and CEO of <a href="https://www.equalai.org/">Equal AI</a>, a non-profit movement committed to reducing bias and responsibly governing AI. Miriam also chairs the US National AI Advisory Committee (<a href="https://ai.gov/naiac/">NAIAC</a>). </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep52/transcript">here</a>. </p>]]>
      </description>
      <content:encoded>
<![CDATA[<p><a href="https://www.linkedin.com/in/miriamvogelai/">Miriam Vogel</a> disputes AI is lawless, endorses good AI hygiene, reviews regulatory progress and pitfalls, boosts literacy and diversity, and remains net positive on AI. </p><p><a href="https://www.linkedin.com/in/miriamvogelai/">Miriam Vogel</a> traverses her unforeseen path from in-house counsel to public policy innovator. Miriam acknowledges that AI systems raise some novel questions but reiterates there is much to learn from existing policies and laws. Drawing analogies to flying and driving, Miriam demonstrates the need for both standardized and context-specific guidance. </p><p>Miriam and Kimberly then discuss what constitutes good AI hygiene, what meaningful transparency looks like, and why a multi-disciplinary mindset matters. While reiterating the business value of beneficial AI, Miriam notes businesses are now on notice regarding their AI liability. She is clear-sighted regarding the complexity, but views regulation done right as a means to spur innovation and trust. In that vein, Miriam outlines the progress to date and the work still to come to enact federal AI policies and raise our collective AI literacy. Lastly, Miriam raises questions everyone should ask to ensure we each benefit from the opportunities AI presents. </p><p><a href="https://www.linkedin.com/in/miriamvogelai/">Miriam Vogel</a> is the President and CEO of <a href="https://www.equalai.org/">EqualAI</a>, a non-profit movement committed to reducing bias and responsibly governing AI. Miriam also chairs the US National AI Advisory Committee (<a href="https://ai.gov/naiac/">NAIAC</a>). </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep52/transcript">here</a>. </p>]]>
      </content:encoded>
      <pubDate>Wed, 05 Jun 2024 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/6c2f53f6/3ff6268d.mp3" length="51601650" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/ov5BjvG7x16TBYTsmNfR_ubV-JKJi4qzElXPY8wbIK0/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS83OGFh/ZTRhOWUwMTZjMWNh/ZmE2OTBmMDFmNGE3/OWYwZS5qcGc.jpg"/>
      <itunes:duration>2027</itunes:duration>
      <itunes:summary>
<![CDATA[<p><a href="https://www.linkedin.com/in/miriamvogelai/">Miriam Vogel</a> disputes AI is lawless, endorses good AI hygiene, reviews regulatory progress and pitfalls, boosts literacy and diversity, and remains net positive on AI. </p><p><a href="https://www.linkedin.com/in/miriamvogelai/">Miriam Vogel</a> traverses her unforeseen path from in-house counsel to public policy innovator. Miriam acknowledges that AI systems raise some novel questions but reiterates there is much to learn from existing policies and laws. Drawing analogies to flying and driving, Miriam demonstrates the need for both standardized and context-specific guidance. </p><p>Miriam and Kimberly then discuss what constitutes good AI hygiene, what meaningful transparency looks like, and why a multi-disciplinary mindset matters. While reiterating the business value of beneficial AI, Miriam notes businesses are now on notice regarding their AI liability. She is clear-sighted regarding the complexity, but views regulation done right as a means to spur innovation and trust. In that vein, Miriam outlines the progress to date and the work still to come to enact federal AI policies and raise our collective AI literacy. Lastly, Miriam raises questions everyone should ask to ensure we each benefit from the opportunities AI presents. </p><p><a href="https://www.linkedin.com/in/miriamvogelai/">Miriam Vogel</a> is the President and CEO of <a href="https://www.equalai.org/">EqualAI</a>, a non-profit movement committed to reducing bias and responsibly governing AI. Miriam also chairs the US National AI Advisory Committee (<a href="https://ai.gov/naiac/">NAIAC</a>). </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep52/transcript">here</a>. </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/miriam-vogel" img="https://img.transistorcdn.com/dCeqBJ9heJcL1jfxzt-GZdynANUTqSRb4pTZ9LuO108/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81YTk3/ZTU0OTljMTRjN2Qy/OWE5MDI2NDMyN2M5/MjMxZi5wbmc.jpg">Miriam Vogel</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/6c2f53f6/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Learning to Unlearn with Melissa Sariffodeen</title>
      <itunes:episode>51</itunes:episode>
      <podcast:episode>51</podcast:episode>
      <itunes:title>Learning to Unlearn with Melissa Sariffodeen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b1671942-2aae-431b-8d83-a85bf630cfba</guid>
      <link>https://share.transistor.fm/s/5486fd78</link>
      <description>
<![CDATA[<p><a href="https://www.linkedin.com/in/melsariffodeen/">Melissa Sariffodeen</a> contends learning requires unlearning, ponders human-AI relationships, prioritizes outcomes over outputs, and values the disquiet of constructive critique. </p><p>Melissa artfully illustrates barriers to innovation through the eyes of a child learning to code and a seasoned driver learning to not drive. Drawing on decades of experience teaching technical skills, she identifies why AI creates new challenges for upskilling. Kimberly and Melissa then debate viewing AI systems through the lens of tools vs. relationships. An avowed lifelong learner, Melissa believes prior learnings are sometimes detrimental to innovation. Melissa therefore advocates for unlearning as a key step in unlocking growth. She also proposes a new model for organizational learning and development. A pragmatic tech optimist, Melissa acknowledges the messy middle and reaffirms the importance of diversity and critically questioning our beliefs and habits.</p><p><a href="https://www.linkedin.com/in/melsariffodeen/">Melissa Sariffodeen</a> is the founder of <a href="https://melissasariffodeen.com/lab">The Digital Potential Lab</a>, co-founder and CEO of <a href="https://www.canadalearningcode.ca/">Canada Learning Code</a>, and a Professor at the Ivey Business School at Western University, where she focuses on the management of information and communication technologies.</p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep51/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
<![CDATA[<p><a href="https://www.linkedin.com/in/melsariffodeen/">Melissa Sariffodeen</a> contends learning requires unlearning, ponders human-AI relationships, prioritizes outcomes over outputs, and values the disquiet of constructive critique. </p><p>Melissa artfully illustrates barriers to innovation through the eyes of a child learning to code and a seasoned driver learning to not drive. Drawing on decades of experience teaching technical skills, she identifies why AI creates new challenges for upskilling. Kimberly and Melissa then debate viewing AI systems through the lens of tools vs. relationships. An avowed lifelong learner, Melissa believes prior learnings are sometimes detrimental to innovation. Melissa therefore advocates for unlearning as a key step in unlocking growth. She also proposes a new model for organizational learning and development. A pragmatic tech optimist, Melissa acknowledges the messy middle and reaffirms the importance of diversity and critically questioning our beliefs and habits.</p><p><a href="https://www.linkedin.com/in/melsariffodeen/">Melissa Sariffodeen</a> is the founder of <a href="https://melissasariffodeen.com/lab">The Digital Potential Lab</a>, co-founder and CEO of <a href="https://www.canadalearningcode.ca/">Canada Learning Code</a>, and a Professor at the Ivey Business School at Western University, where she focuses on the management of information and communication technologies.</p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep51/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 22 May 2024 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/5486fd78/0d41d98a.mp3" length="56590164" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/SplswyqRHbvsG920C6bJ8l1Q8V2wPgnjFn_EOmdGYoM/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8wOGY3/ZDJkNTI2MjJhZTZj/Y2Q4ZTMwMzc1MTY4/NTIwYS5qcGc.jpg"/>
      <itunes:duration>2350</itunes:duration>
      <itunes:summary>
<![CDATA[<p><a href="https://www.linkedin.com/in/melsariffodeen/">Melissa Sariffodeen</a> contends learning requires unlearning, ponders human-AI relationships, prioritizes outcomes over outputs, and values the disquiet of constructive critique. </p><p>Melissa artfully illustrates barriers to innovation through the eyes of a child learning to code and a seasoned driver learning to not drive. Drawing on decades of experience teaching technical skills, she identifies why AI creates new challenges for upskilling. Kimberly and Melissa then debate viewing AI systems through the lens of tools vs. relationships. An avowed lifelong learner, Melissa believes prior learnings are sometimes detrimental to innovation. Melissa therefore advocates for unlearning as a key step in unlocking growth. She also proposes a new model for organizational learning and development. A pragmatic tech optimist, Melissa acknowledges the messy middle and reaffirms the importance of diversity and critically questioning our beliefs and habits.</p><p><a href="https://www.linkedin.com/in/melsariffodeen/">Melissa Sariffodeen</a> is the founder of <a href="https://melissasariffodeen.com/lab">The Digital Potential Lab</a>, co-founder and CEO of <a href="https://www.canadalearningcode.ca/">Canada Learning Code</a>, and a Professor at the Ivey Business School at Western University, where she focuses on the management of information and communication technologies.</p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep51/transcript">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/melissa-sariffodeen" img="https://img.transistorcdn.com/jcmLwEAtTh7J7aD3N2VG_phtIZiI6NvEpCoRMJIFb_Y/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZTZj/YjM0OWVlZDJiYTU5/NjNlZDIxOGIyMTA3/ODVmNi5qcGc.jpg">Melissa Sariffodeen</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/5486fd78/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>The Power of Inquiry with Shannon Mullen O’Keefe</title>
      <itunes:episode>50</itunes:episode>
      <podcast:episode>50</podcast:episode>
      <itunes:title>The Power of Inquiry with Shannon Mullen O’Keefe</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2b73ec89-7a5f-4aba-98db-3f6b47e9a7d4</guid>
      <link>https://share.transistor.fm/s/37cf231f</link>
      <description>
<![CDATA[<p><a href="https://www.linkedin.com/in/shannonmullenokeefe/">Shannon Mullen O’Keefe</a> champions collaboration, serendipitous discovery, curious conversations, ethical leadership, and purposeful curation of our technical creations. </p><p>Shannon shares her professional journey from curating leaders to innovative ideas. From lightbulbs to online dating and AI voice technology, Shannon highlights the simultaneously beautiful and nefarious applications of tech and the need to assess our creations continuously and critically. She highlights powerful insights spurred by the values and questions posed in the book <em>10 Moral Questions: How to Design Tech and AI Responsibly</em>. We discuss the ‘business of business,’ consumer appetite for ethical businesses, and why conversation is the bedrock of culture. Throughout, Shannon highlights the importance and joy of discovery, embracing nature, sitting in darkness, and mustering the will to change our minds, even if that means turning our creations off. </p><p><a href="https://www.linkedin.com/in/shannonmullenokeefe/">Shannon Mullen O’Keefe</a> is the Curator of the <a href="https://www.shannonmullenokeefe.com/">Museum of Ideas</a> and co-author of the Q Collective’s book <em>10 Moral Questions: How to Design Tech and AI Responsibly</em>. Learn more at <a href="https://www.10moralquestions.com/">https://www.10moralquestions.com/</a>. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep50/transcript">here</a>. </p>]]>
      </description>
      <content:encoded>
<![CDATA[<p><a href="https://www.linkedin.com/in/shannonmullenokeefe/">Shannon Mullen O’Keefe</a> champions collaboration, serendipitous discovery, curious conversations, ethical leadership, and purposeful curation of our technical creations. </p><p>Shannon shares her professional journey from curating leaders to innovative ideas. From lightbulbs to online dating and AI voice technology, Shannon highlights the simultaneously beautiful and nefarious applications of tech and the need to assess our creations continuously and critically. She highlights powerful insights spurred by the values and questions posed in the book <em>10 Moral Questions: How to Design Tech and AI Responsibly</em>. We discuss the ‘business of business,’ consumer appetite for ethical businesses, and why conversation is the bedrock of culture. Throughout, Shannon highlights the importance and joy of discovery, embracing nature, sitting in darkness, and mustering the will to change our minds, even if that means turning our creations off. </p><p><a href="https://www.linkedin.com/in/shannonmullenokeefe/">Shannon Mullen O’Keefe</a> is the Curator of the <a href="https://www.shannonmullenokeefe.com/">Museum of Ideas</a> and co-author of the Q Collective’s book <em>10 Moral Questions: How to Design Tech and AI Responsibly</em>. Learn more at <a href="https://www.10moralquestions.com/">https://www.10moralquestions.com/</a>. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep50/transcript">here</a>. </p>]]>
      </content:encoded>
      <pubDate>Wed, 01 May 2024 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/37cf231f/c093934a.mp3" length="44157912" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/pNwF_PzM_bJ30EXw-9IJaSwg0Hq1HcjwsxklhAuHdwM/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84NTZm/NDEwMGZkNDM3OTdm/YTdlYzIxOTQ1MjVm/ZjJjZi5qcGc.jpg"/>
      <itunes:duration>1840</itunes:duration>
      <itunes:summary>
<![CDATA[<p><a href="https://www.linkedin.com/in/shannonmullenokeefe/">Shannon Mullen O’Keefe</a> champions collaboration, serendipitous discovery, curious conversations, ethical leadership, and purposeful curation of our technical creations. </p><p>Shannon shares her professional journey from curating leaders to innovative ideas. From lightbulbs to online dating and AI voice technology, Shannon highlights the simultaneously beautiful and nefarious applications of tech and the need to assess our creations continuously and critically. She highlights powerful insights spurred by the values and questions posed in the book <em>10 Moral Questions: How to Design Tech and AI Responsibly</em>. We discuss the ‘business of business,’ consumer appetite for ethical businesses, and why conversation is the bedrock of culture. Throughout, Shannon highlights the importance and joy of discovery, embracing nature, sitting in darkness, and mustering the will to change our minds, even if that means turning our creations off. </p><p><a href="https://www.linkedin.com/in/shannonmullenokeefe/">Shannon Mullen O’Keefe</a> is the Curator of the <a href="https://www.shannonmullenokeefe.com/">Museum of Ideas</a> and co-author of the Q Collective’s book <em>10 Moral Questions: How to Design Tech and AI Responsibly</em>. Learn more at <a href="https://www.10moralquestions.com/">https://www.10moralquestions.com/</a>. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep50/transcript">here</a>. </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/shannon-mullen-o-keefe" img="https://img.transistorcdn.com/G98oNEVyVNmMp9iSPy32tDraKlARqW7tNADGZO0S2WU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82OGY3/NmFiMTFkYWM3NDA5/NzFjOThhMzIwMjRk/NDE4Yi5qcGc.jpg">Shannon Mullen O’Keefe</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/37cf231f/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>The AI Experience with Sarah Gibbons and Kate Moran</title>
      <itunes:episode>49</itunes:episode>
      <podcast:episode>49</podcast:episode>
      <itunes:title>The AI Experience with Sarah Gibbons and Kate Moran</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">eb04ec79-e32c-41c8-88a8-0afbf3196705</guid>
      <link>https://share.transistor.fm/s/285b2380</link>
      <description>
<![CDATA[<p><a href="https://www.linkedin.com/in/sarahegibbons/">Sarah Gibbons</a> and <a href="https://www.linkedin.com/in/kate-m-moran/">Kate Moran</a> riff on the experience of using current AI tools, how AI systems may change our behavior, and the application of AI to human-centered design. </p><p>Sarah and Kate share their non-linear paths to becoming leading user experience (UX) designers. Defining the human-centric mindset, Sarah stresses that intent is design and we are all designers. Kate and Sarah then challenge teams to resist short-term problem hunting for AI alone. This leads to an energized and frank debate about the tensions created by the broad availability of AI tools with “shitty” user interfaces, why conversational interfaces aren’t the be-all-end-all, and whether calls for more discernment and critical thinking are reasonable or even new. Kate and Sarah then discuss their research into our nascent AI mental models and emergent impacts on user behavior. Kate discusses how AI can be used for UX design along with some far-fetched claims. Finally, both Kate and Sarah share exciting areas of ongoing research. </p><p><a href="https://www.linkedin.com/in/sarahegibbons/">Sarah Gibbons</a> and <a href="https://www.linkedin.com/in/kate-m-moran/">Kate Moran</a> are Vice Presidents at <a href="https://www.nngroup.com/">Nielsen Norman Group</a>, where they lead strategy, research, and design in the areas of human-centered design and user experience (UX). </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep49/transcript">here</a>. </p>]]>
      </description>
      <content:encoded>
<![CDATA[<p><a href="https://www.linkedin.com/in/sarahegibbons/">Sarah Gibbons</a> and <a href="https://www.linkedin.com/in/kate-m-moran/">Kate Moran</a> riff on the experience of using current AI tools, how AI systems may change our behavior, and the application of AI to human-centered design. </p><p>Sarah and Kate share their non-linear paths to becoming leading user experience (UX) designers. Defining the human-centric mindset, Sarah stresses that intent is design and we are all designers. Kate and Sarah then challenge teams to resist short-term problem hunting for AI alone. This leads to an energized and frank debate about the tensions created by the broad availability of AI tools with “shitty” user interfaces, why conversational interfaces aren’t the be-all-end-all, and whether calls for more discernment and critical thinking are reasonable or even new. Kate and Sarah then discuss their research into our nascent AI mental models and emergent impacts on user behavior. Kate discusses how AI can be used for UX design along with some far-fetched claims. Finally, both Kate and Sarah share exciting areas of ongoing research. </p><p><a href="https://www.linkedin.com/in/sarahegibbons/">Sarah Gibbons</a> and <a href="https://www.linkedin.com/in/kate-m-moran/">Kate Moran</a> are Vice Presidents at <a href="https://www.nngroup.com/">Nielsen Norman Group</a>, where they lead strategy, research, and design in the areas of human-centered design and user experience (UX). </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep49/transcript">here</a>. </p>]]>
      </content:encoded>
      <pubDate>Wed, 03 Apr 2024 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/285b2380/15084369.mp3" length="43302557" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/LXsICp_nDryh4nzzqjlk5WzxYPorOyM0g-zQMbrn9RE/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE4MDE0MDYv/MTcxMTk3ODA2MS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2703</itunes:duration>
      <itunes:summary>
<![CDATA[<p><a href="https://www.linkedin.com/in/sarahegibbons/">Sarah Gibbons</a> and <a href="https://www.linkedin.com/in/kate-m-moran/">Kate Moran</a> riff on the experience of using current AI tools, how AI systems may change our behavior, and the application of AI to human-centered design. </p><p>Sarah and Kate share their non-linear paths to becoming leading user experience (UX) designers. Defining the human-centric mindset, Sarah stresses that intent is design and we are all designers. Kate and Sarah then challenge teams to resist short-term problem hunting for AI alone. This leads to an energized and frank debate about the tensions created by the broad availability of AI tools with “shitty” user interfaces, why conversational interfaces aren’t the be-all-end-all, and whether calls for more discernment and critical thinking are reasonable or even new. Kate and Sarah then discuss their research into our nascent AI mental models and emergent impacts on user behavior. Kate discusses how AI can be used for UX design along with some far-fetched claims. Finally, both Kate and Sarah share exciting areas of ongoing research. </p><p><a href="https://www.linkedin.com/in/sarahegibbons/">Sarah Gibbons</a> and <a href="https://www.linkedin.com/in/kate-m-moran/">Kate Moran</a> are Vice Presidents at <a href="https://www.nngroup.com/">Nielsen Norman Group</a>, where they lead strategy, research, and design in the areas of human-centered design and user experience (UX). </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep49/transcript">here</a>. </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/kate-moran" img="https://img.transistorcdn.com/aNSlrqVJgvePrY28X_t1JZNJs__C8mMf-X1OFmkCESc/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODBkMTY5Zjgt/MTY4MC00ZmYxLTk5/ZWItNzkwNWEyYTQ0/ZjIyLzE3MTE5Nzgy/NTYtaW1hZ2UuanBn.jpg">Kate Moran</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/sarah-gibbons" img="https://img.transistorcdn.com/iVQ9F30o00bcbMrowE95rTTt8WxiS9-RIGATiLo11JU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vZmZlMGVjM2Mt/ZTA4NS00ZTlhLWI2/MzctMjAxZGVhY2Fh/YmUwLzE3MTE5Nzgz/MDMtaW1hZ2UuanBn.jpg">Sarah Gibbons</podcast:person>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/285b2380/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Tech, Prosperity and Power with Simon Johnson</title>
      <itunes:episode>48</itunes:episode>
      <podcast:episode>48</podcast:episode>
      <itunes:title>Tech, Prosperity and Power with Simon Johnson</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c31622a7-1b40-4792-9450-da5e3725aa1c</guid>
      <link>https://share.transistor.fm/s/e0cc5983</link>
      <description>
<![CDATA[<p><a href="https://www.linkedin.com/in/simon-johnson-17b40645/">Simon Johnson</a> takes on techno-optimism, the link between technology and human well-being, the law of intended consequences, the modern union remit and political will.</p><p>In this sobering tour through time, Simon proves that widespread human flourishing is not intrinsic to tech innovation. He challenges the ‘productivity bandwagon’ (an economic maxim so pervasive it did not have a name) and shows that productivity and market polarization often go hand-in-hand. Simon also views big tech’s persuasive powers through the lens of OpenAI’s board debacle.</p><p>Kimberly and Simon discuss the heyday of shared worker value, the commercial logic of automation and augmenting human work with technology. Simon highlights stakeholder capitalism’s current view of labor as a cost rather than people as a resource. He underscores the need for active attention to task creation, strong labor movements and participatory political action (shouting and all). Simon believes that shared prosperity is possible. Make no mistake, however, achieving it requires wisdom and hard work.</p><p><a href="https://www.linkedin.com/in/simon-johnson-17b40645/">Simon Johnson</a> is the Head of the Economics and Management group at MIT’s Sloan School of Management. Simon co-authored the stellar book <a href="https://www.amazon.com/Power-Progress-Thousand-Year-Technology-Prosperity-ebook/dp/B0BD4DV59F"><em>Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity</em></a> with Daron Acemoglu.</p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep48/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
<![CDATA[<p><a href="https://www.linkedin.com/in/simon-johnson-17b40645/">Simon Johnson</a> takes on techno-optimism, the link between technology and human well-being, the law of intended consequences, the modern union remit and political will.</p><p>In this sobering tour through time, Simon proves that widespread human flourishing is not intrinsic to tech innovation. He challenges the ‘productivity bandwagon’ (an economic maxim so pervasive it did not have a name) and shows that productivity and market polarization often go hand-in-hand. Simon also views big tech’s persuasive powers through the lens of OpenAI’s board debacle.</p><p>Kimberly and Simon discuss the heyday of shared worker value, the commercial logic of automation and augmenting human work with technology. Simon highlights stakeholder capitalism’s current view of labor as a cost rather than people as a resource. He underscores the need for active attention to task creation, strong labor movements and participatory political action (shouting and all). Simon believes that shared prosperity is possible. Make no mistake, however, achieving it requires wisdom and hard work.</p><p><a href="https://www.linkedin.com/in/simon-johnson-17b40645/">Simon Johnson</a> is the Head of the Economics and Management group at MIT’s Sloan School of Management. Simon co-authored the stellar book <a href="https://www.amazon.com/Power-Progress-Thousand-Year-Technology-Prosperity-ebook/dp/B0BD4DV59F"><em>Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity</em></a> with Daron Acemoglu.</p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep48/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 20 Mar 2024 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/e0cc5983/f6f3be77.mp3" length="36931989" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/kcBfxqnxNFAIn0tp4yBerdPJq5-FSENRuBT1AQi1ev8/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE3OTc2MjMv/MTcxMDg2OTY5Ny1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2306</itunes:duration>
      <itunes:summary>
<![CDATA[<p><a href="https://www.linkedin.com/in/simon-johnson-17b40645/">Simon Johnson</a> takes on techno-optimism, the link between technology and human well-being, the law of intended consequences, the modern union remit and political will.</p><p>In this sobering tour through time, Simon proves that widespread human flourishing is not intrinsic to tech innovation. He challenges the ‘productivity bandwagon’ (an economic maxim so pervasive it did not have a name) and shows that productivity and market polarization often go hand-in-hand. Simon also views big tech’s persuasive powers through the lens of OpenAI’s board debacle.</p><p>Kimberly and Simon discuss the heyday of shared worker value, the commercial logic of automation and augmenting human work with technology. Simon highlights stakeholder capitalism’s current view of labor as a cost rather than people as a resource. He underscores the need for active attention to task creation, strong labor movements and participatory political action (shouting and all). Simon believes that shared prosperity is possible. Make no mistake, however, achieving it requires wisdom and hard work.</p><p><a href="https://www.linkedin.com/in/simon-johnson-17b40645/">Simon Johnson</a> is the Head of the Economics and Management group at MIT’s Sloan School of Management. Simon co-authored the stellar book <a href="https://www.amazon.com/Power-Progress-Thousand-Year-Technology-Prosperity-ebook/dp/B0BD4DV59F"><em>Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity</em></a> with Daron Acemoglu.</p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep48/transcript">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/simon-johnson" img="https://img.transistorcdn.com/TAzIUq_VYCgQTr0ItPxZVwULF2BEEqvFoGfn8oiLgCU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vYTlhMTdlNzAt/ZmJkYS00ZmZkLTkz/MzMtNTI5YzQ0M2Rk/MjM0LzE3MTA4Njk4/MTktaW1hZ2UuanBn.jpg">Simon Johnson</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/e0cc5983/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Raising Robots with Professor Rose Luckin</title>
      <itunes:episode>47</itunes:episode>
      <podcast:episode>47</podcast:episode>
      <itunes:title>Raising Robots with Professor Rose Luckin</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f5c5ad75-983f-4453-8767-f675d843ea9b</guid>
      <link>https://share.transistor.fm/s/bb3c20b2</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/rose-luckin-5245003/">Professor Rose Luckin</a> provides an engaging tutorial on the opportunities, risks, and challenges of AI in education and why AI raises the bar for human learning.      </p><p>Acknowledging AI’s real and present risks, Rose is optimistic about the power of AI to transform education and meet the needs of diverse student populations. From adaptive learning platforms to assistive tools, Rose highlights opportunities for AI to make us smarter, supercharge learner-educator engagement and level the educational playing field. Along the way, she confronts overconfidence in AI, the temptation to offload challenging cognitive workloads and the risk of constraining a learner’s choices prematurely. Rose also adroitly addresses conflicting visions of human quantification as the holy grail and the seeds of our demise. She asserts that AI ups the ante on education: how else can we deploy AI wisely? Rising to the challenge requires the hard work of tailoring strategies for specific learning communities and broad education about AI itself. </p><p>Rose Luckin is a Professor of Learner Centered Design at the UCL Knowledge Lab and Founder of <a href="https://www.educateventures.com/">EDUCATE Ventures Research Ltd.</a>, a London hub for educational technology start-ups, researchers and educators involved in evidence-based educational technology and leveraging data and AI for educational benefit. Explore Rose’s 2018 book <a href="https://www.educateventures.com/machine-learning-and-human-intelligence">Machine Learning and Human Intelligence</a> (free after creating account) and the EDUCATE Ventures newsletter <a href="https://www.educateventures.com/the-skinny">The Skinny</a>. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep47/transcript">here</a>. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/rose-luckin-5245003/">Professor Rose Luckin</a> provides an engaging tutorial on the opportunities, risks, and challenges of AI in education and why AI raises the bar for human learning.      </p><p>Acknowledging AI’s real and present risks, Rose is optimistic about the power of AI to transform education and meet the needs of diverse student populations. From adaptive learning platforms to assistive tools, Rose highlights opportunities for AI to make us smarter, supercharge learner-educator engagement and level the educational playing field. Along the way, she confronts overconfidence in AI, the temptation to offload challenging cognitive workloads and the risk of constraining a learner’s choices prematurely. Rose also adroitly addresses conflicting visions of human quantification as the holy grail and the seeds of our demise. She asserts that AI ups the ante on education: how else can we deploy AI wisely? Rising to the challenge requires the hard work of tailoring strategies for specific learning communities and broad education about AI itself. </p><p>Rose Luckin is a Professor of Learner Centered Design at the UCL Knowledge Lab and Founder of <a href="https://www.educateventures.com/">EDUCATE Ventures Research Ltd.</a>, a London hub for educational technology start-ups, researchers and educators involved in evidence-based educational technology and leveraging data and AI for educational benefit. Explore Rose’s 2018 book <a href="https://www.educateventures.com/machine-learning-and-human-intelligence">Machine Learning and Human Intelligence</a> (free after creating account) and the EDUCATE Ventures newsletter <a href="https://www.educateventures.com/the-skinny">The Skinny</a>. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep47/transcript">here</a>. </p>]]>
      </content:encoded>
      <pubDate>Wed, 06 Mar 2024 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/bb3c20b2/82f2dd0d.mp3" length="43554131" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/z4X6GbSqc_S6fSHd3kSXZ1M1VzwYg682jBj8EWFIyqc/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE3NzY3NDYv/MTcwOTY1NDkxMC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2720</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/rose-luckin-5245003/">Professor Rose Luckin</a> provides an engaging tutorial on the opportunities, risks, and challenges of AI in education and why AI raises the bar for human learning.      </p><p>Acknowledging AI’s real and present risks, Rose is optimistic about the power of AI to transform education and meet the needs of diverse student populations. From adaptive learning platforms to assistive tools, Rose highlights opportunities for AI to make us smarter, supercharge learner-educator engagement and level the educational playing field. Along the way, she confronts overconfidence in AI, the temptation to offload challenging cognitive workloads and the risk of constraining a learner’s choices prematurely. Rose also adroitly addresses conflicting visions of human quantification as the holy grail and the seeds of our demise. She asserts that AI ups the ante on education: how else can we deploy AI wisely? Rising to the challenge requires the hard work of tailoring strategies for specific learning communities and broad education about AI itself. </p><p>Rose Luckin is a Professor of Learner Centered Design at the UCL Knowledge Lab and Founder of <a href="https://www.educateventures.com/">EDUCATE Ventures Research Ltd.</a>, a London hub for educational technology start-ups, researchers and educators involved in evidence-based educational technology and leveraging data and AI for educational benefit. Explore Rose’s 2018 book <a href="https://www.educateventures.com/machine-learning-and-human-intelligence">Machine Learning and Human Intelligence</a> (free after creating account) and the EDUCATE Ventures newsletter <a href="https://www.educateventures.com/the-skinny">The Skinny</a>. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep47/transcript">here</a>. </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/professor-rose-luckin" img="https://img.transistorcdn.com/CVuvHL6o3CV-Z7b4DL49SWOCyuTou5Pd3-wfaZmqiKQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vYWFlMWMwYzIt/ZjdlOC00OTYxLTg2/NmYtYmIyNzlhNjY5/ZTE1LzE3MDk2NTUw/MjQtaW1hZ2UuanBn.jpg">Professor Rose Luckin</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/bb3c20b2/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>The State of Play in AI Ethics with Katrina Ingram</title>
      <itunes:episode>46</itunes:episode>
      <podcast:episode>46</podcast:episode>
      <itunes:title>The State of Play in AI Ethics with Katrina Ingram</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">75f0a715-9ba8-4fbb-b92b-d08ba1e2d342</guid>
      <link>https://share.transistor.fm/s/3b4b4632</link>
      <description>
<![CDATA[<p><a href="https://www.linkedin.com/in/katrinareganingram/">Katrina Ingram</a> addresses AI power dynamics, regulatory floors and ethical ceilings, inevitability narratives, self-limiting predictions, and public AI education.   </p><p>Katrina traces her career from communications to her current pursuits in applied AI ethics. Showcasing her way with words, Katrina dissects popular AI narratives. While contemplating AI FOMO, she cautions against an engineering mentality and champions the power to say ‘no.’ Katrina contrasts buying groceries with AI solutions and describes regulations as the floor and ethics as the ceiling for responsible AI. Katrina then considers the sublimation of AI ethics into AI safety and risk management, whether Sci-Fi has led us astray and who decides what. We also discuss the law of diminishing returns, the inevitability narrative around AI, and how predictions based on the past can narrow future possibilities. Katrina commiserates with consumers but cautions against throwing privacy to the wind. Finally, she highlights the gap in funding for public education and literacy.  </p><p><a href="https://www.linkedin.com/in/katrinareganingram/">Katrina Ingram</a> is the Founder &amp; CEO of <a href="https://www.ethicallyalignedai.com/">Ethically Aligned AI</a>, a Canadian consultancy enabling organizations to practically apply ethics in their AI pursuits. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep46/transcript">here</a>. </p>]]>
      </description>
      <content:encoded>
<![CDATA[<p><a href="https://www.linkedin.com/in/katrinareganingram/">Katrina Ingram</a> addresses AI power dynamics, regulatory floors and ethical ceilings, inevitability narratives, self-limiting predictions, and public AI education.   </p><p>Katrina traces her career from communications to her current pursuits in applied AI ethics. Showcasing her way with words, Katrina dissects popular AI narratives. While contemplating AI FOMO, she cautions against an engineering mentality and champions the power to say ‘no.’ Katrina contrasts buying groceries with AI solutions and describes regulations as the floor and ethics as the ceiling for responsible AI. Katrina then considers the sublimation of AI ethics into AI safety and risk management, whether Sci-Fi has led us astray and who decides what. We also discuss the law of diminishing returns, the inevitability narrative around AI, and how predictions based on the past can narrow future possibilities. Katrina commiserates with consumers but cautions against throwing privacy to the wind. Finally, she highlights the gap in funding for public education and literacy.  </p><p><a href="https://www.linkedin.com/in/katrinareganingram/">Katrina Ingram</a> is the Founder &amp; CEO of <a href="https://www.ethicallyalignedai.com/">Ethically Aligned AI</a>, a Canadian consultancy enabling organizations to practically apply ethics in their AI pursuits. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep46/transcript">here</a>. </p>]]>
      </content:encoded>
      <pubDate>Wed, 21 Feb 2024 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/3b4b4632/fab426c1.mp3" length="37527122" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/EJbpAxGiNuqprA_ku-y_Y2Kvo2-BzU0kava9qbVKT00/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE3MTM4OTgv/MTcwODQzMDczMy1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2343</itunes:duration>
      <itunes:summary>
<![CDATA[<p><a href="https://www.linkedin.com/in/katrinareganingram/">Katrina Ingram</a> addresses AI power dynamics, regulatory floors and ethical ceilings, inevitability narratives, self-limiting predictions, and public AI education.   </p><p>Katrina traces her career from communications to her current pursuits in applied AI ethics. Showcasing her way with words, Katrina dissects popular AI narratives. While contemplating AI FOMO, she cautions against an engineering mentality and champions the power to say ‘no.’ Katrina contrasts buying groceries with AI solutions and describes regulations as the floor and ethics as the ceiling for responsible AI. Katrina then considers the sublimation of AI ethics into AI safety and risk management, whether Sci-Fi has led us astray and who decides what. We also discuss the law of diminishing returns, the inevitability narrative around AI, and how predictions based on the past can narrow future possibilities. Katrina commiserates with consumers but cautions against throwing privacy to the wind. Finally, she highlights the gap in funding for public education and literacy.  </p><p><a href="https://www.linkedin.com/in/katrinareganingram/">Katrina Ingram</a> is the Founder &amp; CEO of <a href="https://www.ethicallyalignedai.com/">Ethically Aligned AI</a>, a Canadian consultancy enabling organizations to practically apply ethics in their AI pursuits. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep46/transcript">here</a>. </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/katrina-ingram" img="https://img.transistorcdn.com/-LpSHbfl0lzUOfdyHVSPMjNeHDSjqs8XkbVGxKZD3d0/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMjY1ZDBjMzAt/MzUyNC00YTFmLWI5/Y2YtNzFkZWE5OTgw/NDdiLzE3MDg0MzA4/NDItaW1hZ2UuanBn.jpg">Katrina Ingram</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/3b4b4632/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Public Interest, Politics and Privacy with Paulo Carvão</title>
      <itunes:episode>45</itunes:episode>
      <podcast:episode>45</podcast:episode>
      <itunes:title>Public Interest, Politics and Privacy with Paulo Carvão</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f8c46562-e6f2-4568-af5c-f45ec0bb930e</guid>
      <link>https://share.transistor.fm/s/fe377220</link>
      <description>
<![CDATA[<p><a href="https://www.linkedin.com/in/paulocarvao/">Paulo Carvão</a> discusses AI’s impact on the public interest, emerging regulatory schemes, progress over perfection, and education as the lynchpin for ethical tech.           </p><p>In this thoughtful discussion, Paulo outlines the cultural, ideological and business factors underpinning the current data economy. An economy in which the manipulation of personal data into private corporate assets is foundational. Opting for optimism over cynicism, Paulo advocates for a first principles approach to ethical development of AI and emerging tech. He argues that regulation creates a positive tension that enables innovation. Paulo examines the emerging regulatory regimes of the EU, the US and China. Preferencing progress over perfection, he describes why regulating technology for technology’s sake is fraught. Acknowledging the challenge facing existing school systems, Paulo articulates the foundational elements required of a ‘bilingual’ education to enable future generations to “do the right things.”  </p><p><a href="https://www.linkedin.com/in/paulocarvao/">Paulo Carvão</a> is a Senior Fellow at the Harvard Advanced Leadership Initiative, a global tech executive and investor. Follow his writings and subscribe to his newsletter on the <a href="https://carvao.substack.com/">Tech and Democracy</a> substack.  </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep45/transcript">here</a>. </p>]]>
      </description>
      <content:encoded>
<![CDATA[<p><a href="https://www.linkedin.com/in/paulocarvao/">Paulo Carvão</a> discusses AI’s impact on the public interest, emerging regulatory schemes, progress over perfection, and education as the lynchpin for ethical tech.           </p><p>In this thoughtful discussion, Paulo outlines the cultural, ideological and business factors underpinning the current data economy. An economy in which the manipulation of personal data into private corporate assets is foundational. Opting for optimism over cynicism, Paulo advocates for a first principles approach to ethical development of AI and emerging tech. He argues that regulation creates a positive tension that enables innovation. Paulo examines the emerging regulatory regimes of the EU, the US and China. Preferencing progress over perfection, he describes why regulating technology for technology’s sake is fraught. Acknowledging the challenge facing existing school systems, Paulo articulates the foundational elements required of a ‘bilingual’ education to enable future generations to “do the right things.”  </p><p><a href="https://www.linkedin.com/in/paulocarvao/">Paulo Carvão</a> is a Senior Fellow at the Harvard Advanced Leadership Initiative, a global tech executive and investor. Follow his writings and subscribe to his newsletter on the <a href="https://carvao.substack.com/">Tech and Democracy</a> substack.  </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep45/transcript">here</a>. </p>]]>
      </content:encoded>
      <pubDate>Wed, 07 Feb 2024 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/fe377220/496594e2.mp3" length="43746838" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/_0xV88T1UYbI3boI8DvPwnqcWUxV8KyUnzilpH4arL4/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE3MTM4OTcv/MTcwNzI1MTY4Ny1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2732</itunes:duration>
      <itunes:summary>
<![CDATA[<p><a href="https://www.linkedin.com/in/paulocarvao/">Paulo Carvão</a> discusses AI’s impact on the public interest, emerging regulatory schemes, progress over perfection, and education as the lynchpin for ethical tech.           </p><p>In this thoughtful discussion, Paulo outlines the cultural, ideological and business factors underpinning the current data economy. An economy in which the manipulation of personal data into private corporate assets is foundational. Opting for optimism over cynicism, Paulo advocates for a first principles approach to ethical development of AI and emerging tech. He argues that regulation creates a positive tension that enables innovation. Paulo examines the emerging regulatory regimes of the EU, the US and China. Preferencing progress over perfection, he describes why regulating technology for technology’s sake is fraught. Acknowledging the challenge facing existing school systems, Paulo articulates the foundational elements required of a ‘bilingual’ education to enable future generations to “do the right things.”  </p><p><a href="https://www.linkedin.com/in/paulocarvao/">Paulo Carvão</a> is a Senior Fellow at the Harvard Advanced Leadership Initiative, a global tech executive and investor. Follow his writings and subscribe to his newsletter on the <a href="https://carvao.substack.com/">Tech and Democracy</a> substack.  </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep45/transcript">here</a>. </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/paulo-carvao" img="https://img.transistorcdn.com/vMSbo_ou7hq1RgmffKb3xbKa24ODR6FEsyTCYgv7oAg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vYjA4YjZjMzAt/OTM2OC00ZWE1LWIw/YzItMDYyZDdhZGVi/OGI1LzE3MDY5NzY1/MjQtaW1hZ2UuanBn.jpg">Paulo Carvão</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/fe377220/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>AI at Work w/ Christina Colclough</title>
      <itunes:episode>44</itunes:episode>
      <podcast:episode>44</podcast:episode>
      <itunes:title>AI at Work w/ Christina Colclough</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">77e00c3b-edc6-426b-b73c-eb79164ba159</guid>
      <link>https://share.transistor.fm/s/5c812b5c</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/christinajcolclough/">Dr. Christina Jayne Colclough</a> reflects on AI Regulations at Work.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/christinajcolclough/">Dr. Christina Jayne Colclough</a> reflects on AI Regulations at Work.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p>]]>
      </content:encoded>
      <pubDate>Fri, 22 Dec 2023 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/5c812b5c/9b03cd3e.mp3" length="14106390" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/QJWPMVRvDoHXjFnlFKVm7Qzmcjtwswmk7VhsTKK-aGc/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2MjkyNjMv/MTcwMjkzNDg1MS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>879</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/christinajcolclough/">Dr. Christina Jayne Colclough</a> reflects on AI Regulations at Work.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/dr-christina-jayne-colclough" img="https://img.transistorcdn.com/5f3QWSTvJmHlxOqTEtuudsbLUtcWgLkjXEfMyxD53ow/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODJkYjI1ODUt/M2E1ZS00YmExLWE3/YmYtNzUyMDM0Yjlk/ZWYzLzE2ODE3NDIw/NjEtaW1hZ2UuanBn.jpg">Dr. Christina Jayne Colclough</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/5c812b5c/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Putting Inclusion To Work w/ Giselle Mota</title>
      <itunes:episode>43</itunes:episode>
      <podcast:episode>43</podcast:episode>
      <itunes:title>Putting Inclusion To Work w/ Giselle Mota</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">5fbb17dc-021a-4e7b-a9ab-71d77701b614</guid>
      <link>https://share.transistor.fm/s/73450c6b</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/gmota/">Giselle Mota</a> reflects on Inclusion at Work in the age of AI.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/gmota/">Giselle Mota</a> reflects on Inclusion at Work in the age of AI.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p>]]>
      </content:encoded>
      <pubDate>Thu, 21 Dec 2023 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/73450c6b/882b60ad.mp3" length="11459881" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/BSp8oA0nNA0LKmMv-iB-clmUhCqGTIZlzbOyqkNbdao/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2MjkyNjIv/MTcwMjkzNDc4My1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>714</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/gmota/">Giselle Mota</a> reflects on Inclusion at Work in the age of AI.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/giselle-mota" img="https://img.transistorcdn.com/V0REStWt4EWPcxAgFpKymLQsD3JZsf4psGmfgr8Jnd0/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vNWVhZTQ4MGUt/ODNkNS00ZmE2LTk5/MTMtMTIxOGZmODk4/NTg0LzE2NzMzNzk5/MDQtaW1hZ2UuanBn.jpg">Giselle Mota</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/73450c6b/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>GAI in the Enterprise w/ Ganes Kesari</title>
      <itunes:episode>42</itunes:episode>
      <podcast:episode>42</podcast:episode>
      <itunes:title>GAI in the Enterprise w/ Ganes Kesari</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">e8410299-28b6-424e-84e7-540578750ebe</guid>
      <link>https://share.transistor.fm/s/ee83ff06</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/gkesari/">Ganes Kesari</a> reflects on generative AI (GAI) in the Enterprise.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/gkesari/">Ganes Kesari</a> reflects on generative AI (GAI) in the Enterprise.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p>]]>
      </content:encoded>
      <pubDate>Wed, 20 Dec 2023 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/ee83ff06/bb0590d1.mp3" length="9645516" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/IX4n8mc9lEjxJWc3QvWVuCfH_4Cw_iasb0MoTEgU6ow/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2MjkyNjEv/MTcwMzAxNjYxNy1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>600</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/gkesari/">Ganes Kesari</a> reflects on generative AI (GAI) in the Enterprise.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://gkesari.com/bio/" img="https://img.transistorcdn.com/ywqNE1EpVaASY4RFVRTW67oD5tV6cknwe7CHpedNcQ4/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vZjcxNmJmMDMt/MmNjOC00ZjEwLWFj/NTUtNTA0NzkyZjJl/M2E3LzE2ODMwMzcw/OTQtaW1hZ2UuanBn.jpg">Ganes Kesari</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/ee83ff06/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Digital Ethics and Regulation w/ Chris McClean</title>
      <itunes:episode>41</itunes:episode>
      <podcast:episode>41</podcast:episode>
      <itunes:title>Digital Ethics and Regulation w/ Chris McClean</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b6db1483-4dc3-4af4-a8a4-e7653b639c99</guid>
      <link>https://share.transistor.fm/s/a732db2d</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/chris-mcclean/">Chris McClean</a> reflects on Digital Ethics and Regulation in AI today.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/chris-mcclean/">Chris McClean</a> reflects on Digital Ethics and Regulation in AI today.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p>]]>
      </content:encoded>
      <pubDate>Tue, 19 Dec 2023 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/a732db2d/d5d31eaa.mp3" length="12202181" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/C7ZhMPtvzNmP0bG5pA0uYniU3dUJcWXf8k9MWJoDQYM/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2MjkyNjAv/MTcwMjU2OTcxOC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>760</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/chris-mcclean/">Chris McClean</a> reflects on Digital Ethics and Regulation in AI today.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/chris-mcclean" img="https://img.transistorcdn.com/hGFY24UxlkfaLGpUuxli0hQlUn1HeXHj-gdM16B2DWg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vOTA0NjYzNDgt/YzdkMC00YzIyLTli/MmMtYzZjNDM5NjY3/YmMzLzE2NzY1NjM1/MDQtaW1hZ2UuanBn.jpg">Chris McClean</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/a732db2d/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Making Model Decisions w/ Dr. Erica Thompson</title>
      <itunes:episode>40</itunes:episode>
      <podcast:episode>40</podcast:episode>
      <itunes:title>Making Model Decisions w/ Dr. Erica Thompson</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">49cecc97-0d92-4c9d-afba-8c44fb7be78d</guid>
      <link>https://share.transistor.fm/s/6941df91</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/erica-thompson-6aaa6b45/">Dr. Erica Thompson</a> reflects on Making Model Decisions about and with AI.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p><p>To learn more, check out Erica’s book <a href="https://www.amazon.com/Escape-from-Model-Land/dp/1529364884/">Escape from Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/erica-thompson-6aaa6b45/">Dr. Erica Thompson</a> reflects on Making Model Decisions about and with AI.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p><p>To learn more, check out Erica’s book <a href="https://www.amazon.com/Escape-from-Model-Land/dp/1529364884/">Escape from Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 18 Dec 2023 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/6941df91/ef3a769e.mp3" length="7762199" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/ul6jx25v0LYwfrEkA0Qi5-LBhZ4VSjCSkiwarbbNxgI/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2MjkyNTkv/MTcwMjU2OTY2OC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>483</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/erica-thompson-6aaa6b45/">Dr. Erica Thompson</a> reflects on Making Model Decisions about and with AI.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p><p>To learn more, check out Erica’s book <a href="https://www.amazon.com/Escape-from-Model-Land/dp/1529364884/">Escape from Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It</a></p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/dr-erica-thompson" img="https://img.transistorcdn.com/98JKWnR4GyHa_D_YsGcS9KiTnpUL0Ud1rYjRIix5Vdg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMTI4NTY5NzIt/Y2FjZC00YjY4LWJh/NGQtMDNkNmU1ODg0/MGM4LzE2NzMzNzgx/MzEtaW1hZ2UuanBn.jpg">Dr. Erica Thompson</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/6941df91/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Upskilling Human Decision Making w/ Roger Spitz</title>
      <itunes:episode>39</itunes:episode>
      <podcast:episode>39</podcast:episode>
      <itunes:title>Upskilling Human Decision Making w/ Roger Spitz</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b92fc504-d071-4dc7-a683-fcadb6144ded</guid>
      <link>https://share.transistor.fm/s/894300b2</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/rogerspitz/">Roger Spitz</a> reflects on Upskilling Human Decision Making in the age of AI.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p><p>To learn more, check out Roger’s book series <a href="https://www.amazon.com/gp/product/B0BN4HHPJQ">The Definitive Guide to Thriving on Disruption</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/rogerspitz/">Roger Spitz</a> reflects on Upskilling Human Decision Making in the age of AI.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p><p>To learn more, check out Roger’s book series <a href="https://www.amazon.com/gp/product/B0BN4HHPJQ">The Definitive Guide to Thriving on Disruption</a></p>]]>
      </content:encoded>
      <pubDate>Sun, 17 Dec 2023 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/894300b2/ac9ca0df.mp3" length="10846323" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/9X4-P_K0vAGks05IB9j3QxOGBBNogp18hIAAKyM7q8U/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2MjkyNTgv/MTcwMjU2OTY3Ny1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>676</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/rogerspitz/">Roger Spitz</a> reflects on Upskilling Human Decision Making in the age of AI.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p><p>To learn more, check out Roger’s book series <a href="https://www.amazon.com/gp/product/B0BN4HHPJQ">The Definitive Guide to Thriving on Disruption</a></p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/roger-spitz" img="https://img.transistorcdn.com/awxpCqhzAWdyD0PzUEWre_v-s2I_CO7vP14kA8w1I_k/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vNTY3N2FhM2Et/YjA3NC00ODM2LTlj/ZGItZjMxNDE2Mjgx/OTc5LzE2NzMzNzg0/NDYtaW1hZ2UuanBn.jpg">Roger Spitz</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/894300b2/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Systems-Thinking in AI w/ Sheryl Cababa</title>
      <itunes:episode>38</itunes:episode>
      <podcast:episode>38</podcast:episode>
      <itunes:title>Systems-Thinking in AI w/ Sheryl Cababa</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f90665b4-4f81-46ad-a3d3-04415498eae8</guid>
      <link>https://share.transistor.fm/s/e56d8f08</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/sherylcababa/">Sheryl Cababa</a> reflects on Systems Thinking in AI design.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next. </p><p>To learn more, check out Sheryl’s book <a href="https://www.amazon.com/Closing-Loop-Systems-Thinking-Designers/dp/1959029886">Closing the Loop: Systems Thinking for Designers</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/sherylcababa/">Sheryl Cababa</a> reflects on Systems Thinking in AI design.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next. </p><p>To learn more, check out Sheryl’s book <a href="https://www.amazon.com/Closing-Loop-Systems-Thinking-Designers/dp/1959029886">Closing the Loop: Systems Thinking for Designers</a></p>]]>
      </content:encoded>
      <pubDate>Sat, 16 Dec 2023 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/e56d8f08/c6b355ec.mp3" length="12619298" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/NHt1Qd5c_oONLFK__a4rlWiBIs4UW0V31XLH4SFayqQ/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2MjkyNTcv/MTcwMjU2OTY4OS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>786</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/sherylcababa/">Sheryl Cababa</a> reflects on Systems Thinking in AI design.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next. </p><p>To learn more, check out Sheryl’s book <a href="https://www.amazon.com/Closing-Loop-Systems-Thinking-Designers/dp/1959029886">Closing the Loop: Systems Thinking for Designers</a></p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/sheryl-cababa" img="https://img.transistorcdn.com/Ihz9D7m65B8z3gE280bq5uNT8D586p7SHL3y_IUuvFU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vYzJhMDBiYzUt/OGExMC00MjljLTg0/OTUtMzAwZjU0YzJm/ZThlLzE2NzMzNzc4/ODMtaW1hZ2UuanBn.jpg">Sheryl Cababa</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/e56d8f08/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>GAI Detection and Protection w/ Ilke Demir</title>
      <itunes:episode>37</itunes:episode>
      <podcast:episode>37</podcast:episode>
      <itunes:title>GAI Detection and Protection w/ Ilke Demir</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4127d5cd-f984-4160-8271-8621564995dc</guid>
      <link>https://share.transistor.fm/s/d35a97e1</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/ilkedemir/">Ilke Demir</a> reflects on Generative AI (GAI) Detection and Protection.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/ilkedemir/">Ilke Demir</a> reflects on Generative AI (GAI) Detection and Protection.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p>]]>
      </content:encoded>
      <pubDate>Fri, 15 Dec 2023 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/d35a97e1/a0ee247e.mp3" length="9924300" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/YAOJAzWrIc7_FgKOGKwH-RpQSlRInN61OrCiyKW9CyU/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2MjkyNTYv/MTcwMjU2OTcwMi1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>618</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/ilkedemir/">Ilke Demir</a> reflects on Generative AI (GAI) Detection and Protection.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://speakers.acm.org/speakers/demir_14351" img="https://img.transistorcdn.com/7v_s2X8XEZ65fFntMFilRVI1ZkP7YjdGhqH2JFivrAA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODQ1ZWRhOWQt/MjE1NC00YjhiLTg2/MzQtNzA4MDQyZDIy/ODBmLzE2Nzg5Njgz/NDAtaW1hZ2UuanBn.jpg">Ilke Demir</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/d35a97e1/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>LLMs and Beyond w/ Mark Bishop</title>
      <itunes:episode>36</itunes:episode>
      <podcast:episode>36</podcast:episode>
      <itunes:title>LLMs and Beyond w/ Mark Bishop</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a6c5d22a-a0f9-4437-9767-060ccaf538cc</guid>
      <link>https://share.transistor.fm/s/2c0a41d7</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/profjmarkbishop/">Professor J Mark Bishop</a> reflects on large language models (LLMs) and beyond.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/profjmarkbishop/">Professor J Mark Bishop</a> reflects on large language models (LLMs) and beyond.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p>]]>
      </content:encoded>
      <pubDate>Thu, 14 Dec 2023 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/2c0a41d7/d2af2624.mp3" length="9504657" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/AwcHjiLZKIJ2C50foQ_ZKzJqC3WWKTzoGKDb1Ca2rhM/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2MjkyNTQv/MTcwMjE2NzQzNi1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>592</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/profjmarkbishop/">Professor J Mark Bishop</a> reflects on large language models (LLMs) and beyond.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/professor-j-mark-bishop" img="https://img.transistorcdn.com/SXQABzYRVdCpo5ygS3lZKdJxzI_ffSqh2shFCuMXNJ8/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vOGJjMGExY2Et/NWVkNC00ZjMwLWJm/M2EtZTkxYzk0YzIy/Nzk5LzE2NzgxMTY3/NjktaW1hZ2UuanBn.jpg">Professor J Mark Bishop</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/2c0a41d7/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Environmental &amp; Social Sustainability w/ Henrik Skaug Saetra</title>
      <itunes:episode>35</itunes:episode>
      <podcast:episode>35</podcast:episode>
      <itunes:title>Environmental &amp; Social Sustainability w/ Henrik Skaug Saetra</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">5460099b-bce2-4b30-9fa5-1053316e0b7b</guid>
      <link>https://share.transistor.fm/s/a09f5948</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/henriksaetra/">Henrik Skaug Sætra</a> reflects on Environmental and Social Sustainability with AI.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next. To learn more, check out Henrik’s latest book: <a href="https://www.amazon.com/Technology-Sustainable-Development-Henrik-Skaug/dp/1032350563/">Technology and Sustainable Development: The Promise and Pitfalls of Techno-Solutionism</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/henriksaetra/">Henrik Skaug Sætra</a> reflects on Environmental and Social Sustainability with AI.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next. To learn more, check out Henrik’s latest book: <a href="https://www.amazon.com/Technology-Sustainable-Development-Henrik-Skaug/dp/1032350563/">Technology and Sustainable Development: The Promise and Pitfalls of Techno-Solutionism</a></p>]]>
      </content:encoded>
      <pubDate>Wed, 13 Dec 2023 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/a09f5948/9b1b56e6.mp3" length="11098157" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/0nuFhLrHfWd6Puyz5GzxzeGwr9vhQxcCZyr3hYHa3WA/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2MjkyNTEv/MTcwMjE2NzQyMS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>691</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/henriksaetra/">Henrik Skaug Sætra</a> reflects on Environmental and Social Sustainability with AI.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next. To learn more, check out Henrik’s latest book: <a href="https://www.amazon.com/Technology-Sustainable-Development-Henrik-Skaug/dp/1032350563/">Technology and Sustainable Development: The Promise and Pitfalls of Techno-Solutionism</a></p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/henrik-skaug-saetra" img="https://img.transistorcdn.com/hY8GiGYlMIs3oiAcDwdUFUj2JgrvWdAzZLa0sjah4u4/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vOTQ1ZGY5MGUt/YjMyNy00NjZkLThk/OGQtZTNiYmRkMTU1/NzNiLzE2NzU2OTM4/MjYtaW1hZ2UuanBn.jpg">Henrik Skaug Sætra</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/a09f5948/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Policymaking and Accessibility in AI w/ Yonah Welker</title>
      <itunes:episode>34</itunes:episode>
      <podcast:episode>34</podcast:episode>
      <itunes:title>Policymaking and Accessibility in AI w/ Yonah Welker</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f9ab006d-eded-42e9-ba21-0c84a9246792</guid>
      <link>https://share.transistor.fm/s/61bf83fa</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/welker/">Yonah Welker</a> reflects on Policymaking, Inclusion and Accessibility in AI today.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/welker/">Yonah Welker</a> reflects on Policymaking, Inclusion and Accessibility in AI today.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p>]]>
      </content:encoded>
      <pubDate>Tue, 12 Dec 2023 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/61bf83fa/ea717969.mp3" length="10692516" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/U7dJx8TlGJ_VOPp7lYzMmv3UqNzXx1PQZjOLZfVD7Zs/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8wN2Ji/ODU0MWM4NGE3MmU2/MzFmNTk5MDU3ZDRi/MDVkNC5qcGc.jpg"/>
      <itunes:duration>666</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/welker/">Yonah Welker</a> reflects on Policymaking, Inclusion and Accessibility in AI today.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/yonah-welker" img="https://img.transistorcdn.com/o8jD2rdEGk46X0622pzwxupPpCx5B1cvM6wpRYURGUw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vYmQwNzIyMDQt/NWQzZS00OGUxLWEw/NjUtZDQ4YTk2MzMz/MjMwLzE2NzMzNzk4/NDYtaW1hZ2UuanBn.jpg">Yonah Welker</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/61bf83fa/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Human-AI Interaction w/ Marisa Tschopp</title>
      <itunes:episode>33</itunes:episode>
      <podcast:episode>33</podcast:episode>
      <itunes:title>Human-AI Interaction w/ Marisa Tschopp</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">55b723d2-2605-46e2-beec-38449eae063b</guid>
      <link>https://share.transistor.fm/s/7fdc5b5d</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/marisa-tschopp-0233a026/">Marisa Tschopp</a> reflects on Human-AI interaction.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/marisa-tschopp-0233a026/">Marisa Tschopp</a> reflects on Human-AI interaction.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p>]]>
      </content:encoded>
      <pubDate>Mon, 11 Dec 2023 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/7fdc5b5d/a2a1b2dd.mp3" length="12289527" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/CdPXoquvUFm2nCh7KY85i9-aBvLWqpcT0CsGXKAndLQ/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2MjkyNDgv/MTcwMjE2NzQwNC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>766</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/marisa-tschopp-0233a026/">Marisa Tschopp</a> reflects on Human-AI interaction.</p><p>In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://www.scip.ch/en/?team.mats" img="https://img.transistorcdn.com/8EezJ9QOT9aFXeQd0vPx2duaCA3m2lJf1rW6qQkUikU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDFlZDgxMzgt/NmIxMy00MmJiLThi/ZmQtZWRlZmFmZmYx/NTM2LzE2NzMzNzgy/OTYtaW1hZ2UuanBn.jpg">Marisa Tschopp</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/7fdc5b5d/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Regulatory Progress &amp; Pitfalls w/ Patrick Hall</title>
      <itunes:episode>32</itunes:episode>
      <podcast:episode>32</podcast:episode>
      <itunes:title>Regulatory Progress &amp; Pitfalls w/ Patrick Hall</itunes:title>
      <itunes:episodeType>bonus</itunes:episodeType>
      <guid isPermaLink="false">c6188f2a-01cc-4496-9545-885b6a5e1e6e</guid>
      <link>https://share.transistor.fm/s/a73ad287</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/jpatrickhall/">Patrick Hall</a> drops in to provide a current take on risk, reward and regulation in AI today.</p><p>In this bonus episode, Patrick reflects on the evolving state of play in AI regulations, consumer awareness and education.  </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/jpatrickhall/">Patrick Hall</a> drops in to provide a current take on risk, reward and regulation in AI today.</p><p>In this bonus episode, Patrick reflects on the evolving state of play in AI regulations, consumer awareness and education.  </p>]]>
      </content:encoded>
      <pubDate>Wed, 06 Dec 2023 05:30:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/a73ad287/f8d73f37.mp3" length="18739756" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/nBrOudG8YynnSwQoi_L_Bk4z63_PR26QSSI_7KWLjI4/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2MjE4ODMv/MTcwMTcyMjE5Mi1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1168</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/jpatrickhall/">Patrick Hall</a> drops in to provide a current take on risk, reward and regulation in AI today.</p><p>In this bonus episode, Patrick reflects on the evolving state of play in AI regulations, consumer awareness and education.  </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/patrick-hall" img="https://img.transistorcdn.com/VhuTvkmAtrtYLG9wiM4AvCRDJcbXp15T_fQlEZCeGxk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vZmJmYTNjOTYt/NjhmOC00MDk0LWE0/NDItMzMzZmNjNDkz/YmZjLzE2NzMzNzk3/MTUtaW1hZ2UuanBn.jpg">Patrick Hall</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/a73ad287/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>AI Stories at Work with Ganes Kesari</title>
      <itunes:episode>31</itunes:episode>
      <podcast:episode>31</podcast:episode>
      <itunes:title>AI Stories at Work with Ganes Kesari</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b18ffa01-f2c3-43de-a6bd-0e6ba775c164</guid>
      <link>https://share.transistor.fm/s/59de6aac</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/gkesari/">Ganes Kesari</a> confronts AI hype and calls for balance, reskilling, data literacy, decision intelligence and data storytelling to adopt AI productively.</p><p><br>Ganes reveals the reality of AI and analytics adoption in the enterprise today. Highlighting extreme divides in understanding and expectations, Ganes provides a grounded point of view on delivering sustained business value. </p><p><br>Cautioning against a technocentric approach, Ganes discusses the role of data literacy and data translators in enabling AI adoption. Turning to common barriers to change, Kimberly and Ganes discuss growing resistance from technologists, not just end users. Ganes muses about the impact of AI on creative tasks and his own experiences with generative AI. Ganes also underscores the need to address workforce reskilling yet remains optimistic about the future of human endeavor. While discussing the need for improved decision-making, Ganes identifies decision intelligence as a critical new business competency. Finally, Ganes strongly advocates for taking a business-first approach and using data storytelling as part of the responsible AI and analytics toolkit. </p><p><br><a href="https://www.linkedin.com/in/gkesari/">Ganes Kesari</a> is the co-founder and Chief Decision Scientist for Gramener and Innovation Titan. </p><p><br>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep31/transcript">here</a>. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/gkesari/">Ganes Kesari</a> confronts AI hype and calls for balance, reskilling, data literacy, decision intelligence and data storytelling to adopt AI productively.</p><p><br>Ganes reveals the reality of AI and analytics adoption in the enterprise today. Highlighting extreme divides in understanding and expectations, Ganes provides a grounded point of view on delivering sustained business value. </p><p><br>Cautioning against a technocentric approach, Ganes discusses the role of data literacy and data translators in enabling AI adoption. Turning to common barriers to change, Kimberly and Ganes discuss growing resistance from technologists, not just end users. Ganes muses about the impact of AI on creative tasks and his own experiences with generative AI. Ganes also underscores the need to address workforce reskilling yet remains optimistic about the future of human endeavor. While discussing the need for improved decision-making, Ganes identifies decision intelligence as a critical new business competency. Finally, Ganes strongly advocates for taking a business-first approach and using data storytelling as part of the responsible AI and analytics toolkit. </p><p><br><a href="https://www.linkedin.com/in/gkesari/">Ganes Kesari</a> is the co-founder and Chief Decision Scientist for Gramener and Innovation Titan. </p><p><br>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep31/transcript">here</a>. </p>]]>
      </content:encoded>
      <pubDate>Wed, 03 May 2023 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/59de6aac/3215453d.mp3" length="40487360" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/0SJe1AutHKoObPjOD92Ybh1ZEFp4SkxqRnXg862ZrqE/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEzMDQ1ODEv/MTY4MzAzNjkzNy1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2528</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/gkesari/">Ganes Kesari</a> confronts AI hype and calls for balance, reskilling, data literacy, decision intelligence and data storytelling to adopt AI productively.</p><p><br>Ganes reveals the reality of AI and analytics adoption in the enterprise today. Highlighting extreme divides in understanding and expectations, Ganes provides a grounded point of view on delivering sustained business value. </p><p><br>Cautioning against a technocentric approach, Ganes discusses the role of data literacy and data translators in enabling AI adoption. Turning to common barriers to change, Kimberly and Ganes discuss growing resistance from technologists, not just end users. Ganes muses about the impact of AI on creative tasks and his own experiences with generative AI. Ganes also underscores the need to address workforce reskilling yet remains optimistic about the future of human endeavor. While discussing the need for improved decision-making, Ganes identifies decision intelligence as a critical new business competency. Finally, Ganes strongly advocates for taking a business-first approach and using data storytelling as part of the responsible AI and analytics toolkit. </p><p><br><a href="https://www.linkedin.com/in/gkesari/">Ganes Kesari</a> is the co-founder and Chief Decision Scientist for Gramener and Innovation Titan. </p><p><br>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep31/transcript">here</a>. </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://gkesari.com/bio/" img="https://img.transistorcdn.com/ywqNE1EpVaASY4RFVRTW67oD5tV6cknwe7CHpedNcQ4/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vZjcxNmJmMDMt/MmNjOC00ZjEwLWFj/NTUtNTA0NzkyZjJl/M2E3LzE2ODMwMzcw/OTQtaW1hZ2UuanBn.jpg">Ganes Kesari</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/59de6aac/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Keeping Work Human with Dr. Christina Colclough</title>
      <itunes:episode>30</itunes:episode>
      <podcast:episode>30</podcast:episode>
      <itunes:title>Keeping Work Human with Dr. Christina Colclough</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a7a624b1-aff7-4906-a2e3-75a0276368fd</guid>
      <link>https://share.transistor.fm/s/27c8b458</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/christinajcolclough/">Dr. Christina Colclough</a> addresses tech determinism, the value of human labor, managerial fuzz, collective will, digital rights, and participatory AI deployment.</p><p>Christina traces the path of digital transformation and the self-sustaining narrative of tech determinism. As well as how the perceptions of the public, the C-Suite and workers (aka wage earners) diverge. Thereby highlighting the urgent need for robust public dialogue, education and collective action.</p><p>Championing constructive debate, Christina decries ‘for-it-or-against-it’ views on AI and embraces the Luddite label. Kimberly and Christina discuss the value of human work, we vs. they work cultures, the divisiveness of digital platforms, and sustainable governance. Christina questions why emerging AI regulations give workers short shrift and whether regulation is being privatized. She underscores the dangers of stupid algorithms and the quantification of humans. But notes that knowledge is key to tapping into AI’s benefits while avoiding harm. Christina ends with a persuasive call for responsible regulation, radical transparency and widespread communication to combat collective ignorance.</p><p><a href="https://www.linkedin.com/in/christinajcolclough/">Dr. Christina Jayne Colclough</a> is the founder of The Why Not Lab where she fiercely advocates for worker rights and dignity for all in the digital age.</p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep30/transcript">here</a>. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/christinajcolclough/">Dr. Christina Colclough</a> addresses tech determinism, the value of human labor, managerial fuzz, collective will, digital rights, and participatory AI deployment.</p><p>Christina traces the path of digital transformation and the self-sustaining narrative of tech determinism. As well as how the perceptions of the public, the C-Suite and workers (aka wage earners) diverge. Thereby highlighting the urgent need for robust public dialogue, education and collective action.</p><p>Championing constructive debate, Christina decries ‘for-it-or-against-it’ views on AI and embraces the Luddite label. Kimberly and Christina discuss the value of human work, we vs. they work cultures, the divisiveness of digital platforms, and sustainable governance. Christina questions why emerging AI regulations give workers short shrift and whether regulation is being privatized. She underscores the dangers of stupid algorithms and the quantification of humans. But notes that knowledge is key to tapping into AI’s benefits while avoiding harm. Christina ends with a persuasive call for responsible regulation, radical transparency and widespread communication to combat collective ignorance.</p><p><a href="https://www.linkedin.com/in/christinajcolclough/">Dr. Christina Jayne Colclough</a> is the founder of The Why Not Lab where she fiercely advocates for worker rights and dignity for all in the digital age.</p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep30/transcript">here</a>. </p>]]>
      </content:encoded>
      <pubDate>Wed, 19 Apr 2023 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/27c8b458/122c130f.mp3" length="45050205" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/oFRWz3XPgvnCucuaLH83ogGcmBA5PCzpdHzLsVX5-EY/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEyODI5NzUv/MTY4MTc0MjI4MS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2812</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/christinajcolclough/">Dr. Christina Colclough</a> addresses tech determinism, the value of human labor, managerial fuzz, collective will, digital rights, and participatory AI deployment.</p><p>Christina traces the path of digital transformation and the self-sustaining narrative of tech determinism. As well as how the perceptions of the public, the C-Suite and workers (aka wage earners) diverge. Thereby highlighting the urgent need for robust public dialogue, education and collective action.</p><p>Championing constructive debate, Christina decries ‘for-it-or-against-it’ views on AI and embraces the Luddite label. Kimberly and Christina discuss the value of human work, we vs. they work cultures, the divisiveness of digital platforms, and sustainable governance. Christina questions why emerging AI regulations give workers short shrift and whether regulation is being privatized. She underscores the dangers of stupid algorithms and the quantification of humans. But notes that knowledge is key to tapping into AI’s benefits while avoiding harm. Christina ends with a persuasive call for responsible regulation, radical transparency and widespread communication to combat collective ignorance.</p><p><a href="https://www.linkedin.com/in/christinajcolclough/">Dr. Christina Jayne Colclough</a> is the founder of The Why Not Lab where she fiercely advocates for worker rights and dignity for all in the digital age.</p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep30/transcript">here</a>. </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/dr-christina-jayne-colclough" img="https://img.transistorcdn.com/5f3QWSTvJmHlxOqTEtuudsbLUtcWgLkjXEfMyxD53ow/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODJkYjI1ODUt/M2E1ZS00YmExLWE3/YmYtNzUyMDM0Yjlk/ZWYzLzE2ODE3NDIw/NjEtaW1hZ2UuanBn.jpg">Dr. Christina Jayne Colclough</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/27c8b458/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Practical Ethics with Reid Blackman</title>
      <itunes:episode>29</itunes:episode>
      <podcast:episode>29</podcast:episode>
      <itunes:title>Practical Ethics with Reid Blackman</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c4d9e4bc-3bc3-452b-a52a-807d7ca95159</guid>
      <link>https://share.transistor.fm/s/c83e5481</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/reid-blackman/">Reid Blackman</a> confronts whack-a-mole approaches to AI ethics, ethical ‘do goodery,’ squishy values, moral nuance, advocacy vs. activism and overfitting for AI.</p><p>Reid distinguishes AI for ‘not bad’ from AI ‘for good’ and corporate social responsibility. He describes how the language of risk creates a bridge between ethics and business. Debunking the notion of ethicists as moral priests, Reid provides practical steps for making ethics palatable and effective.</p><p>Reid and Kimberly discuss developing organizational muscle to reckon with moral nuance. Reid emphasizes that disagreement and uncertainty aren’t unique to ethics. Nor do squishy value statements make ethics squishy. Reid identifies a cocktail of motivations driving organizations to engage, or not, in AI ethics. We also discuss the tendency for self-regulation to cede to market forces and the government’s role in ensuring access to basic human goods. Cautioning against overfitting an ethics program to AI alone, Reid illustrates the benefits of distinguishing digital ethics from ethics writ large. Last but not least, Reid considers how organizations may stitch together responses to the evolving regulatory patchwork.</p><p><a href="https://www.linkedin.com/in/reid-blackman/">Reid Blackman</a> is the author of “Ethical Machines” and the CEO of Virtue Consultants.</p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep29/transcript">here</a>. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/reid-blackman/">Reid Blackman</a> confronts whack-a-mole approaches to AI ethics, ethical ‘do goodery,’ squishy values, moral nuance, advocacy vs. activism and overfitting for AI.</p><p>Reid distinguishes AI for ‘not bad’ from AI ‘for good’ and corporate social responsibility. He describes how the language of risk creates a bridge between ethics and business. Debunking the notion of ethicists as moral priests, Reid provides practical steps for making ethics palatable and effective.</p><p>Reid and Kimberly discuss developing organizational muscle to reckon with moral nuance. Reid emphasizes that disagreement and uncertainty aren’t unique to ethics. Nor do squishy value statements make ethics squishy. Reid identifies a cocktail of motivations driving organizations to engage, or not, in AI ethics. We also discuss the tendency for self-regulation to cede to market forces and the government’s role in ensuring access to basic human goods. Cautioning against overfitting an ethics program to AI alone, Reid illustrates the benefits of distinguishing digital ethics from ethics writ large. Last but not least, Reid considers how organizations may stitch together responses to the evolving regulatory patchwork.</p><p><a href="https://www.linkedin.com/in/reid-blackman/">Reid Blackman</a> is the author of “Ethical Machines” and the CEO of Virtue Consultants.</p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep29/transcript">here</a>. </p>]]>
      </content:encoded>
      <pubDate>Wed, 05 Apr 2023 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/c83e5481/b7807492.mp3" length="44309724" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/f_j5tGxMdlaesNe7qgja6l0d4-8QDqfCcYohtQnJWbs/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEyNTcwNTkv/MTY4MDExNTMxNS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2766</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/reid-blackman/">Reid Blackman</a> confronts whack-a-mole approaches to AI ethics, ethical ‘do goodery,’ squishy values, moral nuance, advocacy vs. activism and overfitting for AI.</p><p>Reid distinguishes AI for ‘not bad’ from AI ‘for good’ and corporate social responsibility. He describes how the language of risk creates a bridge between ethics and business. Debunking the notion of ethicists as moral priests, Reid provides practical steps for making ethics palatable and effective.</p><p>Reid and Kimberly discuss developing organizational muscle to reckon with moral nuance. Reid emphasizes that disagreement and uncertainty aren’t unique to ethics. Nor do squishy value statements make ethics squishy. Reid identifies a cocktail of motivations driving organizations to engage, or not, in AI ethics. We also discuss the tendency for self-regulation to cede to market forces and the government’s role in ensuring access to basic human goods. Cautioning against overfitting an ethics program to AI alone, Reid illustrates the benefits of distinguishing digital ethics from ethics writ large. Last but not least, Reid considers how organizations may stitch together responses to the evolving regulatory patchwork.</p><p><a href="https://www.linkedin.com/in/reid-blackman/">Reid Blackman</a> is the author of “Ethical Machines” and the CEO of Virtue Consultants.</p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep29/transcript">here</a>. </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/reid-blackman" img="https://img.transistorcdn.com/uey3w1qRsnlurkFXP0Zbw0YWraVOXbljHjssD7s9lUA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vYmJlNTU2Yjgt/ODNjNC00ODIwLWI3/YTAtYjdiNGYwNzJh/OWRhLzE2ODAxMTUy/MjEtaW1hZ2UuanBn.jpg">Reid Blackman</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/c83e5481/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Generative AI: Unreal Realities with Ilke Demir</title>
      <itunes:episode>28</itunes:episode>
      <podcast:episode>28</podcast:episode>
      <itunes:title>Generative AI: Unreal Realities with Ilke Demir</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ff70df0d-090b-4185-9a1b-eeaeec5a2ca7</guid>
      <link>https://share.transistor.fm/s/3be50639</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/ilkedemir/">Ilke Demir</a> depicts the state of generative AI, deepfakes for good, the emotional shelf life of synthesized media, and methods to identify AI-generated content.</p><p>Ilke provides a primer on traditional generative models and generative AI. Outlining the fast-evolving capabilities of generative AI, she also notes their current lack of controls and transparency. Ilke then clarifies the term deepfake and highlights applications of ‘deepfakes for good.’</p><p>Ilke and Kimberly discuss whether the explosion of generated imagery creates an un-reality that sets ‘perfectly imperfect’ humans up for failure. An effervescent optimist, Ilke makes a compelling case that the true value of photos and art comes from our experiences and memories. She then provides a fascinating tour of emerging techniques to detect and indelibly identify generated media. Last but not least, Ilke affirms the need for greater public literacy and accountability by design.</p><p><a href="https://www.linkedin.com/in/ilkedemir/">Ilke Demir</a> is a Sr. Research Scientist at Intel. Her research team focuses on generative models for digitizing the real world, deepfake detection and generation techniques.</p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep28/transcript">here</a>. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/ilkedemir/">Ilke Demir</a> depicts the state of generative AI, deepfakes for good, the emotional shelf life of synthesized media, and methods to identify AI-generated content.</p><p>Ilke provides a primer on traditional generative models and generative AI. Outlining the fast-evolving capabilities of generative AI, she also notes their current lack of controls and transparency. Ilke then clarifies the term deepfake and highlights applications of ‘deepfakes for good.’</p><p>Ilke and Kimberly discuss whether the explosion of generated imagery creates an un-reality that sets ‘perfectly imperfect’ humans up for failure. An effervescent optimist, Ilke makes a compelling case that the true value of photos and art comes from our experiences and memories. She then provides a fascinating tour of emerging techniques to detect and indelibly identify generated media. Last but not least, Ilke affirms the need for greater public literacy and accountability by design.</p><p><a href="https://www.linkedin.com/in/ilkedemir/">Ilke Demir</a> is a Sr. Research Scientist at Intel. Her research team focuses on generative models for digitizing the real world, deepfake detection and generation techniques.</p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep28/transcript">here</a>. </p>]]>
      </content:encoded>
      <pubDate>Wed, 22 Mar 2023 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/3be50639/77274d52.mp3" length="48096942" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/6tQpO-WPVDZDjT3E3EDEgSs2TizTuyorOfTG742zUOQ/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEyNTY4MjUv/MTY3OTQyMDM2NS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>3003</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/ilkedemir/">Ilke Demir</a> depicts the state of generative AI, deepfakes for good, the emotional shelf life of synthesized media, and methods to identify AI-generated content.</p><p>Ilke provides a primer on traditional generative models and generative AI. Outlining the fast-evolving capabilities of generative AI, she also notes their current lack of controls and transparency. Ilke then clarifies the term deepfake and highlights applications of ‘deepfakes for good.’</p><p>Ilke and Kimberly discuss whether the explosion of generated imagery creates an un-reality that sets ‘perfectly imperfect’ humans up for failure. An effervescent optimist, Ilke makes a compelling case that the true value of photos and art comes from our experiences and memories. She then provides a fascinating tour of emerging techniques to detect and indelibly identify generated media. Last but not least, Ilke affirms the need for greater public literacy and accountability by design.</p><p><a href="https://www.linkedin.com/in/ilkedemir/">Ilke Demir</a> is a Sr. Research Scientist at Intel. Her research team focuses on generative models for digitizing the real world, deepfake detection and generation techniques.</p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/ep28/transcript">here</a>. </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://speakers.acm.org/speakers/demir_14351" img="https://img.transistorcdn.com/7v_s2X8XEZ65fFntMFilRVI1ZkP7YjdGhqH2JFivrAA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODQ1ZWRhOWQt/MjE1NC00YjhiLTg2/MzQtNzA4MDQyZDIy/ODBmLzE2Nzg5Njgz/NDAtaW1hZ2UuanBn.jpg">Ilke Demir</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/3be50639/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Plain Talk About Talking AI with J Mark Bishop</title>
      <itunes:episode>27</itunes:episode>
      <podcast:episode>27</podcast:episode>
      <itunes:title>Plain Talk About Talking AI with J Mark Bishop</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">878d98b2-69e6-4da7-a16a-00dd562e0104</guid>
      <link>https://share.transistor.fm/s/43e481d9</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/profjmarkbishop/">Professor J Mark Bishop</a> reflects on the trickiness of language, how LLMs work, why ChatGPT can’t understand, the nature of AI and emerging theories of mind.</p><p>Mark explains what large language models (LLMs) do and provides a quasi-technical overview of how they work. He also exposes the complications inherent in comprehending language. Mark calls for more philosophical analysis of how systems such as GPT-3 and ChatGPT replicate human knowledge. Yet, understand nothing. Noting the astonishing outputs resulting from more or less auto-completing large blocks of text, Mark cautions against being taken in by LLMs’ disarming façade.</p><p>Mark then explains the basis of the Chinese Room thought experiment and the hotly debated conclusion that computation does not lead to semantic understanding. Kimberly and Mark discuss the nature of learning through the eyes of a child and whether computational systems can ever be conscious. Mark describes the phenomenal experience of understanding (aka what it feels like). And how non-computational theories of mind may influence AI development. Finally, Mark reflects on whether AI will be good for the few or the many.</p><p><a href="https://www.linkedin.com/in/profjmarkbishop/">Professor J Mark Bishop</a> is the Professor of Cognitive Computing (Emeritus) at Goldsmiths, University of London and Scientific Advisor to FACT360.</p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/e27/transcript">here</a>. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/profjmarkbishop/">Professor J Mark Bishop</a> reflects on the trickiness of language, how LLMs work, why ChatGPT can’t understand, the nature of AI and emerging theories of mind.</p><p>Mark explains what large language models (LLMs) do and provides a quasi-technical overview of how they work. He also exposes the complications inherent in comprehending language. Mark calls for more philosophical analysis of how systems such as GPT-3 and ChatGPT replicate human knowledge. Yet, understand nothing. Noting the astonishing outputs resulting from more or less auto-completing large blocks of text, Mark cautions against being taken in by LLMs’ disarming façade.</p><p>Mark then explains the basis of the Chinese Room thought experiment and the hotly debated conclusion that computation does not lead to semantic understanding. Kimberly and Mark discuss the nature of learning through the eyes of a child and whether computational systems can ever be conscious. Mark describes the phenomenal experience of understanding (aka what it feels like). And how non-computational theories of mind may influence AI development. Finally, Mark reflects on whether AI will be good for the few or the many.</p><p><a href="https://www.linkedin.com/in/profjmarkbishop/">Professor J Mark Bishop</a> is the Professor of Cognitive Computing (Emeritus) at Goldsmiths, University of London and Scientific Advisor to FACT360.</p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/e27/transcript">here</a>. </p>]]>
      </content:encoded>
      <pubDate>Wed, 08 Mar 2023 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/43e481d9/84cff8ac.mp3" length="63914349" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/36tSxc8-OsjM6zwhGmtX9hrJoczXZH28XmDUmrR7UsY/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEyMjEwMzEv/MTY3ODExNjg0OS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>3992</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/profjmarkbishop/">Professor J Mark Bishop</a> reflects on the trickiness of language, how LLMs work, why ChatGPT can’t understand, the nature of AI and emerging theories of mind.</p><p>Mark explains what large language models (LLMs) do and provides a quasi-technical overview of how they work. He also exposes the complications inherent in comprehending language. Mark calls for more philosophical analysis of how systems such as GPT-3 and ChatGPT replicate human knowledge yet understand nothing. Noting the astonishing outputs resulting from more or less auto-completing large blocks of text, Mark cautions against being taken in by an LLM’s disarming façade.</p><p>Mark then explains the basis of the Chinese Room thought experiment and the hotly debated conclusion that computation does not lead to semantic understanding. Kimberly and Mark discuss the nature of learning through the eyes of a child and whether computational systems can ever be conscious. Mark describes the phenomenal experience of understanding (aka what it feels like). And how non-computational theories of mind may influence AI development. Finally, Mark reflects on whether AI will be good for the few or the many.</p><p><a href="https://www.linkedin.com/in/profjmarkbishop/">Professor J Mark Bishop</a> is Professor of Cognitive Computing (Emeritus) at Goldsmiths, University of London and Scientific Advisor to FACT360.</p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/e27/transcript">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/professor-j-mark-bishop" img="https://img.transistorcdn.com/SXQABzYRVdCpo5ygS3lZKdJxzI_ffSqh2shFCuMXNJ8/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vOGJjMGExY2Et/NWVkNC00ZjMwLWJm/M2EtZTkxYzk0YzIy/Nzk5LzE2NzgxMTY3/NjktaW1hZ2UuanBn.jpg">Professor J Mark Bishop</podcast:person>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/43e481d9/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>In AI We Trust with Chris McClean</title>
      <itunes:episode>26</itunes:episode>
      <podcast:episode>26</podcast:episode>
      <itunes:title>In AI We Trust with Chris McClean</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">bbd272f0-725c-4621-b748-f5730168dbd4</guid>
      <link>https://share.transistor.fm/s/aff04091</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/chris-mcclean/">Chris McClean</a> reflects on ethics vs. risk, ethically positive outcomes, the nature of trust, looking beyond ourselves, privacy at work and in the metaverse.</p><p>Chris outlines the key differences between digital ethics and risk management. He emphasizes the discovery of positive outcomes as well as harms and where a data-driven approach can fall short. From there, Chris outlines a comprehensive digital ethics framework and why starting with impact is key. He then describes a pragmatic approach for making ethics accessible without sacrificing rigor.</p><p>Kimberly and Chris discuss the definition of trust, the myriad reasons we might trust someone or something, and why trust isn’t set-it-and-forget-it. From your smart doorbell to self-driving cars and social services, Chris argues persuasively for looking beyond ‘how does this affect me.’ Highlighting Eunice Kyereme’s work on digital makers and takers, Chris describes the role we each play – however unwittingly – in creating the digital ecosystem. We then discuss surveillance vs. monitoring in the workplace and the potential for great good and abuse inherent in the Metaverse. Finally, Chris stresses that ethically positive outcomes go beyond ‘tech for good’ and that ethics is good business.</p><p><a href="https://www.linkedin.com/in/chris-mcclean/">Chris McClean</a> is the Global Head of Digital Ethics at Avanade and a PhD candidate in Applied Ethics at the University of Leeds. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/e26/transcript">here</a>. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/chris-mcclean/">Chris McClean</a> reflects on ethics vs. risk, ethically positive outcomes, the nature of trust, looking beyond ourselves, privacy at work and in the metaverse.</p><p>Chris outlines the key differences between digital ethics and risk management. He emphasizes the discovery of positive outcomes as well as harms and where a data-driven approach can fall short. From there, Chris outlines a comprehensive digital ethics framework and why starting with impact is key. He then describes a pragmatic approach for making ethics accessible without sacrificing rigor.</p><p>Kimberly and Chris discuss the definition of trust, the myriad reasons we might trust someone or something, and why trust isn’t set-it-and-forget-it. From your smart doorbell to self-driving cars and social services, Chris argues persuasively for looking beyond ‘how does this affect me.’ Highlighting Eunice Kyereme’s work on digital makers and takers, Chris describes the role we each play – however unwittingly – in creating the digital ecosystem. We then discuss surveillance vs. monitoring in the workplace and the potential for great good and abuse inherent in the Metaverse. Finally, Chris stresses that ethically positive outcomes go beyond ‘tech for good’ and that ethics is good business.</p><p><a href="https://www.linkedin.com/in/chris-mcclean/">Chris McClean</a> is the Global Head of Digital Ethics at Avanade and a PhD candidate in Applied Ethics at the University of Leeds. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/e26/transcript">here</a>. </p>]]>
      </content:encoded>
      <pubDate>Wed, 22 Feb 2023 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/aff04091/9f74164f.mp3" length="42126517" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/OFKgOVrhByoNCOdXkHzeEhqn248AypeEDV9UFn-agoI/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEyMDg4Mzgv/MTY3NjU2Mzc0Ni1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2630</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/chris-mcclean/">Chris McClean</a> reflects on ethics vs. risk, ethically positive outcomes, the nature of trust, looking beyond ourselves, privacy at work and in the metaverse.</p><p>Chris outlines the key differences between digital ethics and risk management. He emphasizes the discovery of positive outcomes as well as harms and where a data-driven approach can fall short. From there, Chris outlines a comprehensive digital ethics framework and why starting with impact is key. He then describes a pragmatic approach for making ethics accessible without sacrificing rigor.</p><p>Kimberly and Chris discuss the definition of trust, the myriad reasons we might trust someone or something, and why trust isn’t set-it-and-forget-it. From your smart doorbell to self-driving cars and social services, Chris argues persuasively for looking beyond ‘how does this affect me.’ Highlighting Eunice Kyereme’s work on digital makers and takers, Chris describes the role we each play – however unwittingly – in creating the digital ecosystem. We then discuss surveillance vs. monitoring in the workplace and the potential for great good and abuse inherent in the Metaverse. Finally, Chris stresses that ethically positive outcomes go beyond ‘tech for good’ and that ethics is good business.</p><p><a href="https://www.linkedin.com/in/chris-mcclean/">Chris McClean</a> is the Global Head of Digital Ethics at Avanade and a PhD candidate in Applied Ethics at the University of Leeds. </p><p>A transcript of this episode is <a href="https://pondering-ai.transistor.fm/episodes/e26/transcript">here</a>. </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/chris-mcclean" img="https://img.transistorcdn.com/hGFY24UxlkfaLGpUuxli0hQlUn1HeXHj-gdM16B2DWg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vOTA0NjYzNDgt/YzdkMC00YzIyLTli/MmMtYzZjNDM5NjY3/YmMzLzE2NzY1NjM1/MDQtaW1hZ2UuanBn.jpg">Chris McClean</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/aff04091/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>AI for Sustainable Development with Henrik Skaug Sætra</title>
      <itunes:episode>25</itunes:episode>
      <podcast:episode>25</podcast:episode>
      <itunes:title>AI for Sustainable Development with Henrik Skaug Sætra</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c1059309-685c-41cf-a224-e274d7f867db</guid>
      <link>https://share.transistor.fm/s/2ab22f68</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/henriksaetra/">Henrik Skaug Sætra</a> contends humans aren’t mere machines, assesses AI through a sustainable development lens and weighs the effect of political imbalances and ESG.</p><p>Henrik embraces human complexity. He advises against applying AI to naturally messy problems or to influence populations least able to resist. Henrik outlines how the UN Sustainable Development Goals (SDGs) can identify beneficial and marketable avenues for AI. He also describes the SDGs’ usefulness in ethical impact assessment. Championing affordable and equitable access to technology, Henrik shows how disparate impacts occur between individuals, groups and society. Along the way, Kimberly and Henrik discuss political imbalances, the technocratic nature of emerging regulations and why we shouldn’t expect corporations to be broadly ethical of their own accord. Outlining his AI ESG protocol, Henrik surmises that qualitative rigor can address the gaps left by quantitative analysis alone. Finally, Henrik encourages the proactive use of SDGs and ESG to drive innovation and opportunity.</p><p>Henrik is Head of the Digital Society and an Associate Professor at Østfold University College. He is a political theorist focusing on the political, ethical, and social implications of technology.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep25/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/henriksaetra/">Henrik Skaug Sætra</a> contends humans aren’t mere machines, assesses AI through a sustainable development lens and weighs the effect of political imbalances and ESG.</p><p>Henrik embraces human complexity. He advises against applying AI to naturally messy problems or to influence populations least able to resist. Henrik outlines how the UN Sustainable Development Goals (SDGs) can identify beneficial and marketable avenues for AI. He also describes the SDGs’ usefulness in ethical impact assessment. Championing affordable and equitable access to technology, Henrik shows how disparate impacts occur between individuals, groups and society. Along the way, Kimberly and Henrik discuss political imbalances, the technocratic nature of emerging regulations and why we shouldn’t expect corporations to be broadly ethical of their own accord. Outlining his AI ESG protocol, Henrik surmises that qualitative rigor can address the gaps left by quantitative analysis alone. Finally, Henrik encourages the proactive use of SDGs and ESG to drive innovation and opportunity.</p><p>Henrik is Head of the Digital Society and an Associate Professor at Østfold University College. He is a political theorist focusing on the political, ethical, and social implications of technology.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep25/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 08 Feb 2023 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/2ab22f68/6092c3f0.mp3" length="38817452" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/gFTjAjqbLPGuX6mmrHPYjk-PltQ9Rzl87-830eK4KFg/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzExOTA1MDAv/MTY3NTY5Mzg5OS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2423</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/henriksaetra/">Henrik Skaug Sætra</a> contends humans aren’t mere machines, assesses AI through a sustainable development lens and weighs the effect of political imbalances and ESG.</p><p>Henrik embraces human complexity. He advises against applying AI to naturally messy problems or to influence populations least able to resist. Henrik outlines how the UN Sustainable Development Goals (SDGs) can identify beneficial and marketable avenues for AI. He also describes the SDGs’ usefulness in ethical impact assessment. Championing affordable and equitable access to technology, Henrik shows how disparate impacts occur between individuals, groups and society. Along the way, Kimberly and Henrik discuss political imbalances, the technocratic nature of emerging regulations and why we shouldn’t expect corporations to be broadly ethical of their own accord. Outlining his AI ESG protocol, Henrik surmises that qualitative rigor can address the gaps left by quantitative analysis alone. Finally, Henrik encourages the proactive use of SDGs and ESG to drive innovation and opportunity.</p><p>Henrik is Head of the Digital Society and an Associate Professor at Østfold University College. He is a political theorist focusing on the political, ethical, and social implications of technology.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep25/transcript">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/henrik-skaug-saetra" img="https://img.transistorcdn.com/hY8GiGYlMIs3oiAcDwdUFUj2JgrvWdAzZLa0sjah4u4/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vOTQ1ZGY5MGUt/YjMyNy00NjZkLThk/OGQtZTNiYmRkMTU1/NzNiLzE2NzU2OTM4/MjYtaW1hZ2UuanBn.jpg">Henrik Skaug Sætra</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/2ab22f68/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>The Philosophy of AI with Dr. Mark Coeckelbergh</title>
      <itunes:episode>24</itunes:episode>
      <podcast:episode>24</podcast:episode>
      <itunes:title>The Philosophy of AI with Dr. Mark Coeckelbergh</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a322872e-926d-4bb4-9389-bfd84c4bc517</guid>
      <link>https://share.transistor.fm/s/6b027081</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/markcoeckelbergh/">Dr. Mark Coeckelbergh</a> is a Professor of Philosophy of Media and Technology, a member of the High-Level Expert Group on Artificial Intelligence (EC) and the Austrian Council on Robotics and AI.</p><p>In this insightful discussion, Mark explains why AI systems are not merely tools or strictly rational endeavors. He describes the challenges created when AI systems imitate human capabilities and how human sciences help address the messy realities of AI. Mark also demonstrates how political philosophy makes conversations about multidimensional topics such as bias, fairness and freedom more productive. Kimberly and Mark discuss the difficulty with global governance, the role of scientific expertise and technology in society, and the need for political imagination to govern emerging technologies such as AI. Along the way, Mark illustrates the debate about how AI systems could vs. should be used through the lens of gun control and climate change. Finally, Mark sounds a cautionary note about the potential for AI to undermine our fragile democratic institutions.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep24/transcript">here</a>. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/markcoeckelbergh/">Dr. Mark Coeckelbergh</a> is a Professor of Philosophy of Media and Technology, a member of the High-Level Expert Group on Artificial Intelligence (EC) and the Austrian Council on Robotics and AI.</p><p>In this insightful discussion, Mark explains why AI systems are not merely tools or strictly rational endeavors. He describes the challenges created when AI systems imitate human capabilities and how human sciences help address the messy realities of AI. Mark also demonstrates how political philosophy makes conversations about multidimensional topics such as bias, fairness and freedom more productive. Kimberly and Mark discuss the difficulty with global governance, the role of scientific expertise and technology in society, and the need for political imagination to govern emerging technologies such as AI. Along the way, Mark illustrates the debate about how AI systems could vs. should be used through the lens of gun control and climate change. Finally, Mark sounds a cautionary note about the potential for AI to undermine our fragile democratic institutions.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep24/transcript">here</a>. </p>]]>
      </content:encoded>
      <pubDate>Wed, 03 Aug 2022 05:30:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/6b027081/c8aae6e3.mp3" length="37717662" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/88_fHamz9cpxOtqWeW6Pwq181biL9ePr_WwS7YuYxR0/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzk0MDQ1Ny8x/NjU5NDQ3OTEyLWFy/dHdvcmsuanBn.jpg"/>
      <itunes:duration>2353</itunes:duration>
      <itunes:summary>Dr. Mark Coeckelbergh contemplates the messy reality and political nature of AI, the interplay of technology with society, and the impact of AI on democracy.</itunes:summary>
      <itunes:subtitle>Dr. Mark Coeckelbergh contemplates the messy reality and political nature of AI, the interplay of technology with society, and the impact of AI on democracy.</itunes:subtitle>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/mark-coeckelbergh" img="https://img.transistorcdn.com/i_81xwq3BoD3SZDtLepQAiIc1SumvklsXHdUioKMRzE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vYWRhMDMwMzEt/MTM4OS00Y2EyLTg2/ZjctYTEwNjExNDdm/ZTdhLzE2NzMzNzk3/NTUtaW1hZ2UuanBn.jpg">Mark Coeckelbergh</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/6b027081/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Keeping Science in Data Science with Patrick Hall</title>
      <itunes:episode>23</itunes:episode>
      <podcast:episode>23</podcast:episode>
      <itunes:title>Keeping Science in Data Science with Patrick Hall</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">aa1e76b1-062c-41f0-83b1-e4dcdaefbf44</guid>
      <link>https://share.transistor.fm/s/9015fb03</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/jpatrickhall/">Patrick Hall</a> is the Principal Scientist at bnh.ai.</p><p>Patrick artfully illustrates how data science has become divorced from scientific rigor. At least, that is, in popular conceptions of the practice. Kimberly and Patrick discuss the pernicious influence of the McNamara Fallacy, applying the scientific method to algorithmic development and keeping an open mind without sacrificing concept validity. Patrick addresses the recent hubbub around AI sentience, cautions against using AI in social contexts and identifies the problems AI algorithms are best suited to solve. Noting AI is no different than any other mission-critical software, he outlines the investment and oversight required for AI programs to deliver value. Patrick promotes managing AI systems like products and makes the case for why performance in the lab should not be the first priority.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep23/transcript">here</a>. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/jpatrickhall/">Patrick Hall</a> is the Principal Scientist at bnh.ai.</p><p>Patrick artfully illustrates how data science has become divorced from scientific rigor. At least, that is, in popular conceptions of the practice. Kimberly and Patrick discuss the pernicious influence of the McNamara Fallacy, applying the scientific method to algorithmic development and keeping an open mind without sacrificing concept validity. Patrick addresses the recent hubbub around AI sentience, cautions against using AI in social contexts and identifies the problems AI algorithms are best suited to solve. Noting AI is no different than any other mission-critical software, he outlines the investment and oversight required for AI programs to deliver value. Patrick promotes managing AI systems like products and makes the case for why performance in the lab should not be the first priority.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep23/transcript">here</a>. </p>]]>
      </content:encoded>
      <pubDate>Wed, 20 Jul 2022 05:30:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/9015fb03/73ff37b6.mp3" length="37355250" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/MqhSIzQBvAS5YTEvFZcP__Vfc3dLYSH4VerRv0h3PV4/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzk0MDQ1Mi8x/NjU4Mjg4MDg3LWFy/dHdvcmsuanBn.jpg"/>
      <itunes:duration>2330</itunes:duration>
      <itunes:summary>Patrick Hall challenges data science norms, warns against magical thinking and malleable hypotheses, reflects on human-AI teaming and delivering value with AI.</itunes:summary>
      <itunes:subtitle>Patrick Hall challenges data science norms, warns against magical thinking and malleable hypotheses, reflects on human-AI teaming and delivering value with AI.</itunes:subtitle>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/patrick-hall" img="https://img.transistorcdn.com/VhuTvkmAtrtYLG9wiM4AvCRDJcbXp15T_fQlEZCeGxk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vZmJmYTNjOTYt/NjhmOC00MDk0LWE0/NDItMzMzZmNjNDkz/YmZjLzE2NzMzNzk3/MTUtaW1hZ2UuanBn.jpg">Patrick Hall</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/9015fb03/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Synthesizing the Future with Fernando Lucini </title>
      <itunes:episode>22</itunes:episode>
      <podcast:episode>22</podcast:episode>
      <itunes:title>Synthesizing the Future with Fernando Lucini </itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">bb332c0a-d18b-4304-82a1-e197613c73f0</guid>
      <link>https://share.transistor.fm/s/e56b250f</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/fernandolucini/?originalSubdomain=uk">Fernando Lucini</a> is the Global Data Science &amp; ML Engineering Lead (aka Chief Data Scientist) at Accenture.</p><p>Fernando Lucini outlines common uses for AI-generated synthetic data. He emphasizes that synthetic data is a facsimile – close, but not <em>quite</em> real – and debunks the notion it is inherently private. Kimberly and Fernando discuss the potential pitfalls in synthetic data sets, the emergent need for standard controls, and why ensuring quality – much less fairness – is not simple. Fernando assesses the current state of the synthetic data market and the work still to be done to enable broad-scale adoption. Tipping his hat to fabulous achievements such as GPT-3 and DALL-E, Fernando identifies multiple ways synthetic data can be used for good works and creative endeavors.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep22/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/fernandolucini/?originalSubdomain=uk">Fernando Lucini</a> is the Global Data Science &amp; ML Engineering Lead (aka Chief Data Scientist) at Accenture.</p><p>Fernando Lucini outlines common uses for AI-generated synthetic data. He emphasizes that synthetic data is a facsimile – close, but not <em>quite</em> real – and debunks the notion it is inherently private. Kimberly and Fernando discuss the potential pitfalls in synthetic data sets, the emergent need for standard controls, and why ensuring quality – much less fairness – is not simple. Fernando assesses the current state of the synthetic data market and the work still to be done to enable broad-scale adoption. Tipping his hat to fabulous achievements such as GPT-3 and DALL-E, Fernando identifies multiple ways synthetic data can be used for good works and creative endeavors.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep22/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 06 Jul 2022 06:30:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/e56b250f/07c247d6.mp3" length="40652868" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/CZrrEl9ucAmY0-x02snB7nb6FZWG3-8-VW8UzX90uCc/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzkzMzMxOS8x/NjU3MDQ3NjA4LWFy/dHdvcmsuanBn.jpg"/>
      <itunes:duration>2537</itunes:duration>
      <itunes:summary>Fernando Lucini explains the potential applications, pitfalls, and work still to be done to make synthetic data ubiquitous.</itunes:summary>
      <itunes:subtitle>Fernando Lucini explains the potential applications, pitfalls, and work still to be done to make synthetic data ubiquitous.</itunes:subtitle>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/fernando-lucini" img="https://img.transistorcdn.com/Ae4Pn8SzGbh64wkA1k5361gCVrEDR_MQy0DyKijuFGk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vOTVmYTdlYTYt/NDI1NS00Y2M4LTlh/YTItZTRhNDZlYTAy/MjhhLzE2NzMzNzk2/NzgtaW1hZ2UuanBn.jpg">Fernando Lucini</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/e56b250f/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>The Future of Human Decision Making with Roger Spitz </title>
      <itunes:episode>21</itunes:episode>
      <podcast:episode>21</podcast:episode>
      <itunes:title>The Future of Human Decision Making with Roger Spitz </itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7ba16412-e73f-480a-95a9-60a9f2766e42</guid>
      <link>https://share.transistor.fm/s/25196096</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/rogerspitz">Roger Spitz</a> is the CEO of Techistential and Chairman of the Disruptive Futures Institute.</p><p>In this thought-provoking discussion, Roger discusses why neither humans nor AI systems are great at decision making in complex environments. But why humans <em>should</em> be. Roger unveils the insidious influence of AI systems on human decisions and why uncertainty is a pre-requisite for human choice, freedom, and agency. Kimberly and Roger discuss the implications of complexity, the rising cost of poor assumptions, and the dangerous allure of delegating too many decisions to AI-enabled machines. Outlining the AAA (antifragile, anticipatory, agile) model for decision-making in the face of deep uncertainty, Roger differentiates foresight from strategic planning and anticipatory agility from ‘move fast and break things.’ Last but not least, Roger argues that current educational incentives run counter to nurturing the mindset and skills needed to thrive in our increasingly complex, emergent world.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep21/transcript">here</a>. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/rogerspitz">Roger Spitz</a> is the CEO of Techistential and Chairman of the Disruptive Futures Institute.</p><p>In this thought-provoking discussion, Roger discusses why neither humans nor AI systems are great at decision making in complex environments. But why humans <em>should</em> be. Roger unveils the insidious influence of AI systems on human decisions and why uncertainty is a pre-requisite for human choice, freedom, and agency. Kimberly and Roger discuss the implications of complexity, the rising cost of poor assumptions, and the dangerous allure of delegating too many decisions to AI-enabled machines. Outlining the AAA (antifragile, anticipatory, agile) model for decision-making in the face of deep uncertainty, Roger differentiates foresight from strategic planning and anticipatory agility from ‘move fast and break things.’ Last but not least, Roger argues that current educational incentives run counter to nurturing the mindset and skills needed to thrive in our increasingly complex, emergent world.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep21/transcript">here</a>. </p>]]>
      </content:encoded>
      <pubDate>Wed, 22 Jun 2022 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/25196096/4f95051a.mp3" length="44071863" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/fB2NNU8jGDUjMm-6l4TFeuDdzm2k4KclYJA7aUIsEeY/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzkwMTUwOC8x/NjU1OTIwMjQ3LWFy/dHdvcmsuanBn.jpg"/>
      <itunes:duration>2750</itunes:duration>
      <itunes:summary>Roger Spitz reconciles strategy with philosophy, contemplates the influence of AI systems and the skills required to make decisions in the face of uncertainty.</itunes:summary>
      <itunes:subtitle>Roger Spitz reconciles strategy with philosophy, contemplates the influence of AI systems and the skills required to make decisions in the face of uncertainty.</itunes:subtitle>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/roger-spitz" img="https://img.transistorcdn.com/awxpCqhzAWdyD0PzUEWre_v-s2I_CO7vP14kA8w1I_k/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vNTY3N2FhM2Et/YjA3NC00ODM2LTlj/ZGItZjMxNDE2Mjgx/OTc5LzE2NzMzNzg0/NDYtaW1hZ2UuanBn.jpg">Roger Spitz</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/25196096/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Risk vs. Rights in AI with Dorothea Baur</title>
      <itunes:episode>20</itunes:episode>
      <podcast:episode>20</podcast:episode>
      <itunes:title>Risk vs. Rights in AI with Dorothea Baur</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">20764715-125e-4ad2-904f-ce898494bd15</guid>
      <link>https://share.transistor.fm/s/85787ca8</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/dorotheabaur/">Dr. Dorothea Baur</a> is an ethicist and independent consultant on the topics of ethics, responsibility and sustainability in tech and finance.</p><p>Dorothea debunks common ethical misconceptions and explores the novel issues that arise when applying ethics to technology. Kimberly and Dorothea discuss the risks posed by risk management-based approaches to tech ethics. As well as the “unholy collision” between the pursuit of scale and universal generalization. Dorothea reluctantly gives a nod to Milton Friedman when linking ethics to material business outcomes. Along the way, Dorothea illustrates how stakeholder engagement is evolving and the power of the employee. Noting that algorithms do not have agency and will never be ethical, Dorothea persuasively articulates our moral responsibility to retain responsibility for our AI creations.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep20/transcript">here</a>. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/dorotheabaur/">Dr. Dorothea Baur</a> is an ethicist and independent consultant on the topics of ethics, responsibility and sustainability in tech and finance.</p><p>Dorothea debunks common ethical misconceptions and explores the novel issues that arise when applying ethics to technology. Kimberly and Dorothea discuss the risks posed by risk management-based approaches to tech ethics. As well as the “unholy collision” between the pursuit of scale and universal generalization. Dorothea reluctantly gives a nod to Milton Friedman when linking ethics to material business outcomes. Along the way, Dorothea illustrates how stakeholder engagement is evolving and the power of the employee. Noting that algorithms do not have agency and will never be ethical, Dorothea persuasively articulates our moral responsibility to retain responsibility for our AI creations.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep20/transcript">here</a>. </p>]]>
      </content:encoded>
      <pubDate>Wed, 08 Jun 2022 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/85787ca8/a804d868.mp3" length="35108627" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/8dHZhCBHwQaoh0f4kD8VbZd3dfCgJ7n1L8pOy-aoDKM/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzg5NTUwOC8x/NjU0NjA5Nzc2LWFy/dHdvcmsuanBn.jpg"/>
      <itunes:duration>2194</itunes:duration>
      <itunes:summary>Dr. Dorothea Baur addresses ethical myths, unique issues posed by AI, universal rights, stakeholder advocacy and taking responsibility for our tech creations.</itunes:summary>
      <itunes:subtitle>Dr. Dorothea Baur addresses ethical myths, unique issues posed by AI, universal rights, stakeholder advocacy and taking responsibility for our tech creations.</itunes:subtitle>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/dr-dorothea-baur" img="https://img.transistorcdn.com/DUCrYP88jV7uEPv6867NYO6ZW5tIMEo2yzF5psKnIss/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vYzc0MzRmNzgt/NDEzNi00ODAyLWFh/ZDgtMmYxODEyNjA5/YmMyLzE2NzMzNzgz/OTYtaW1hZ2UuanBn.jpg">Dr. Dorothea Baur</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/85787ca8/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>In AI We Trust with Marisa Tschopp</title>
      <itunes:episode>19</itunes:episode>
      <podcast:episode>19</podcast:episode>
      <itunes:title>In AI We Trust with Marisa Tschopp</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4b90a99b-43fd-478b-988a-1c79f055aee5</guid>
      <link>https://share.transistor.fm/s/0b86bf31</link>
      <description>
        <![CDATA[<p><a href="https://www.scip.ch/en/?team.mats">Marisa Tschopp</a> is a Human-AI interaction researcher at scip AG and Co-Chair of the IEEE Agency and Trust in AI Systems Committee.</p><p>Marisa answers the question ‘what is trust?’ and compares trust between humans to trust in a machine. Differentiating trust from trustworthiness, Marisa emphasizes the importance of considering the context and motivation behind AI systems. Kimberly and Marisa discuss the pros and cons of endowing AI systems with human characteristics (aka anthropomorphizing) and why ‘do you trust AI?’ is the wrong question. Debunking the concept of ‘The AI’, Marisa outlines practices for calibrating trust in AI systems. A self-described skeptical optimist, Marisa also shares her research into how people perceive their relationships with AI-enabled machines and how these patterns may change over time.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep19/transcript">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.scip.ch/en/?team.mats">Marisa Tschopp</a> is a Human-AI interaction researcher at scip AG and Co-Chair of the IEEE Agency and Trust in AI Systems Committee.</p><p>Marisa answers the question ‘what is trust?’ and compares trust between humans to trust in a machine. Differentiating trust from trustworthiness, Marisa emphasizes the importance of considering the context and motivation behind AI systems. Kimberly and Marisa discuss the pros and cons of endowing AI systems with human characteristics (aka anthropomorphizing) and why ‘do you trust AI?’ is the wrong question. Debunking the concept of ‘The AI’, Marisa outlines practices for calibrating trust in AI systems. A self-described skeptical optimist, Marisa also shares her research into how people perceive their relationships with AI-enabled machines and how these patterns may change over time.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep19/transcript">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 25 May 2022 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/0b86bf31/c3fa644c.mp3" length="37731410" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/4Qc3avQ3PVmIiAMzIzby4j3aVkLu5ycn_jotQkkS63E/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzg4NTY4Mi8x/NjUzNDEwOTY3LWFy/dHdvcmsuanBn.jpg"/>
      <itunes:duration>2354</itunes:duration>
      <itunes:summary>Marisa Tschopp contemplates trusting a human versus a machine, the risks in humanizing AI, and how we characterize our relationships with AI-enabled conversational systems.</itunes:summary>
      <itunes:subtitle>Marisa Tschopp contemplates trusting a human versus a machine, the risks in humanizing AI, and how we characterize our relationships with AI-enabled conversational systems.</itunes:subtitle>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://www.scip.ch/en/?team.mats" img="https://img.transistorcdn.com/8EezJ9QOT9aFXeQd0vPx2duaCA3m2lJf1rW6qQkUikU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDFlZDgxMzgt/NmIxMy00MmJiLThi/ZmQtZWRlZmFmZmYx/NTM2LzE2NzMzNzgy/OTYtaW1hZ2UuanBn.jpg">Marisa Tschopp</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/0b86bf31/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>AI’s World View with Dr. Erica Thompson </title>
      <itunes:episode>18</itunes:episode>
      <podcast:episode>18</podcast:episode>
      <itunes:title>AI’s World View with Dr. Erica Thompson </itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">78988084-e532-41b6-a56b-c7c1b1a7555d</guid>
      <link>https://share.transistor.fm/s/ae9845c3</link>
      <description>
        <![CDATA[<p><a href="https://www.lse.ac.uk/CATS/People/Erica-Thompson">Dr Erica Thompson</a> is a Senior Policy Fellow in Ethics of Modelling and Simulation at the LSE Data Science Institute.</p><p>Using the trusty-ish weather forecast as a starting point, Erica highlights the gaps to be minded when applying models in real life. Kimberly and Erica discuss the role of expert judgement and intuition, the orthodoxy of data-driven cultures, models as engines not cameras, and why exposing uncertainty improves decision-making. Erica illustrates why it is so easy to become overconfident in models. She shows how value judgements are embedded in every step of model development (and hidden in math), why chameleons and accountability don’t mix, and considerations for using model outputs to think or decide effectively. Looking forward, Erica foresees a future in which values rather than data drive decision-making.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep18/transcript">here</a>. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.lse.ac.uk/CATS/People/Erica-Thompson">Dr Erica Thompson</a> is a Senior Policy Fellow in Ethics of Modelling and Simulation at the LSE Data Science Institute.</p><p>Using the trusty-ish weather forecast as a starting point, Erica highlights the gaps to be minded when applying models in real life. Kimberly and Erica discuss the role of expert judgement and intuition, the orthodoxy of data-driven cultures, models as engines not cameras, and why exposing uncertainty improves decision-making. Erica illustrates why it is so easy to become overconfident in models. She shows how value judgements are embedded in every step of model development (and hidden in math), why chameleons and accountability don’t mix, and considerations for using model outputs to think or decide effectively. Looking forward, Erica foresees a future in which values rather than data drive decision-making.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep18/transcript">here</a>. </p>]]>
      </content:encoded>
      <pubDate>Wed, 11 May 2022 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/ae9845c3/f27338de.mp3" length="39388560" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/1AUF0NY9UOi8aOqyYYHqJPczPVKxWkHMTlRXydXUFe8/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzg4NTY4My8x/NjUyMjA4NzI3LWFy/dHdvcmsuanBn.jpg"/>
      <itunes:duration>2458</itunes:duration>
      <itunes:summary>Dr Erica Thompson exposes the seductive allure of model land: a place where life is simply predictable and all your assumptions are true.</itunes:summary>
      <itunes:subtitle>Dr Erica Thompson exposes the seductive allure of model land: a place where life is simply predictable and all your assumptions are true.</itunes:subtitle>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/dr-erica-thompson" img="https://img.transistorcdn.com/98JKWnR4GyHa_D_YsGcS9KiTnpUL0Ud1rYjRIix5Vdg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMTI4NTY5NzIt/Y2FjZC00YjY4LWJh/NGQtMDNkNmU1ODg0/MGM4LzE2NzMzNzgx/MzEtaW1hZ2UuanBn.jpg">Dr Erica Thompson</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/ae9845c3/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Designing for Human Experience with Sheryl Cababa </title>
      <itunes:episode>17</itunes:episode>
      <podcast:episode>17</podcast:episode>
      <itunes:title>Designing for Human Experience with Sheryl Cababa </itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f523a621-3571-489b-ad1f-1c55dd26e8a8</guid>
      <link>https://share.transistor.fm/s/32baffc0</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/sherylcababa/">Sheryl Cababa</a> is the Chief Design Officer at Substantial where she conducts research, develops design strategies and advocates for human-centric outcomes.</p><p>From the infinite scroll to Twitter edits, Sheryl illustrates how current design practices unwittingly undermine human agency. Often while delivering <em>exactly</em> what a user wants. She refutes the need to categorically eliminate the term ‘users’ while showing how a singular user focus has led us astray. Sheryl then outlines how systems thinking can reorient existing design practices toward human-centric outcomes. Along the way, Kimberly and Sheryl discuss the limits of empathy, the evolving ethos of unintended consequences and embracing nuance. While acknowledging the challenges ahead, Sheryl remains optimistic about our ability to design for human well-being not just expediency or profit.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep17/transcript">here</a>. </p><p>Our next episode explores the limits of model land with <a href="https://www.lse.ac.uk/CATS/People/Erica-Thompson">Dr Erica Thompson</a>. Subscribe now so you don’t miss it.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/sherylcababa/">Sheryl Cababa</a> is the Chief Design Officer at Substantial where she conducts research, develops design strategies and advocates for human-centric outcomes.</p><p>From the infinite scroll to Twitter edits, Sheryl illustrates how current design practices unwittingly undermine human agency. Often while delivering <em>exactly</em> what a user wants. She refutes the need to categorically eliminate the term ‘users’ while showing how a singular user focus has led us astray. Sheryl then outlines how systems thinking can reorient existing design practices toward human-centric outcomes. Along the way, Kimberly and Sheryl discuss the limits of empathy, the evolving ethos of unintended consequences and embracing nuance. While acknowledging the challenges ahead, Sheryl remains optimistic about our ability to design for human well-being not just expediency or profit.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep17/transcript">here</a>. </p><p>Our next episode explores the limits of model land with <a href="https://www.lse.ac.uk/CATS/People/Erica-Thompson">Dr Erica Thompson</a>. Subscribe now so you don’t miss it.</p>]]>
      </content:encoded>
      <pubDate>Wed, 27 Apr 2022 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/32baffc0/71686cc8.mp3" length="38466571" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/OchaJdHM2A9Q6DCsE64cE1ZT0Y-tyENQBtOXE0T1gM0/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzg2ODY3NS8x/NjUyMjA4NjkwLWFy/dHdvcmsuanBn.jpg"/>
      <itunes:duration>2400</itunes:duration>
      <itunes:summary>Sheryl Cababa discusses human centric design (HCD), why UX may be too successful and how systems thinking addresses human factors oft overlooked in design thinking.</itunes:summary>
      <itunes:subtitle>Sheryl Cababa discusses human centric design (HCD), why UX may be too successful and how systems thinking addresses human factors oft overlooked in design thinking.</itunes:subtitle>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/sheryl-cababa" img="https://img.transistorcdn.com/Ihz9D7m65B8z3gE280bq5uNT8D586p7SHL3y_IUuvFU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vYzJhMDBiYzUt/OGExMC00MjljLTg0/OTUtMzAwZjU0YzJm/ZThlLzE2NzMzNzc4/ODMtaW1hZ2UuanBn.jpg">Sheryl Cababa</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/32baffc0/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Humanity at Scale with Kate O’Neill</title>
      <itunes:episode>16</itunes:episode>
      <podcast:episode>16</podcast:episode>
      <itunes:title>Humanity at Scale with Kate O’Neill</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">24157a36-3c4d-4d6b-a481-fe6e43ed80bd</guid>
      <link>https://share.transistor.fm/s/3381d64f</link>
      <description>
        <![CDATA[<p><a href="https://www.koinsights.com/about/about-kate-oneill/">Kate O’Neill</a> is an executive strategist, the Founder and CEO of KO Insights, and an author dedicated to improving the human experience at scale.  </p><p>In this paradigm-shifting discussion, Kate traces her roots from a childhood thinking heady thoughts about language and meaning to her current mission as ‘The Tech Humanist’. Following this thread, Kate illustrates why meaning is the core of what makes us human. She urges us to champion meaningful innovation and reject the notion that we are victims of a predetermined future.</p><p>Challenging simplistic analysis, Kate advocates for applying multiple lenses to every situation: the individual and the collective, uses and abuses, insight and foresight, wild success and abject failure. Kimberly and Kate acknowledge but emphatically disavow current norms that reject nuanced discourse or conflate it with ‘both-side-ism’. Emphasizing that everything is connected, Kate shows how to close the gap between human-centricity and business goals. She provides a concrete example of how innovation and impact depend on identifying what is going to matter, not just what matters now. Ending on a strategically optimistic note, Kate urges us to anchor on human values and relationships, habituate to change and actively architect our best human experience – now and in the future.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep16/transcript">here</a>.</p><p>Thank you for joining us for Season 2 of Pondering AI. Join us next season as we ponder the ways in which AI continues to elevate and challenge our humanity. Subscribe to Pondering AI now so you don’t miss it.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.koinsights.com/about/about-kate-oneill/">Kate O’Neill</a> is an executive strategist, the Founder and CEO of KO Insights, and an author dedicated to improving the human experience at scale.  </p><p>In this paradigm-shifting discussion, Kate traces her roots from a childhood thinking heady thoughts about language and meaning to her current mission as ‘The Tech Humanist’. Following this thread, Kate illustrates why meaning is the core of what makes us human. She urges us to champion meaningful innovation and reject the notion that we are victims of a predetermined future.</p><p>Challenging simplistic analysis, Kate advocates for applying multiple lenses to every situation: the individual and the collective, uses and abuses, insight and foresight, wild success and abject failure. Kimberly and Kate acknowledge but emphatically disavow current norms that reject nuanced discourse or conflate it with ‘both-side-ism’. Emphasizing that everything is connected, Kate shows how to close the gap between human-centricity and business goals. She provides a concrete example of how innovation and impact depend on identifying what is going to matter, not just what matters now. Ending on a strategically optimistic note, Kate urges us to anchor on human values and relationships, habituate to change and actively architect our best human experience – now and in the future.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep16/transcript">here</a>.</p><p>Thank you for joining us for Season 2 of Pondering AI. Join us next season as we ponder the ways in which AI continues to elevate and challenge our humanity. Subscribe to Pondering AI now so you don’t miss it.</p>]]>
      </content:encoded>
      <pubDate>Wed, 15 Dec 2021 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/3381d64f/004d5614.mp3" length="43078344" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:duration>2688</itunes:duration>
      <itunes:summary>Kate O’Neill champions strategic optimism, embraces nuance, rejects false dichotomies, calls for mental clarity and agility, empowers with empathy and anchors human-centric innovation to meaning.  </itunes:summary>
      <itunes:subtitle>Kate O’Neill champions strategic optimism, embraces nuance, rejects false dichotomies, calls for mental clarity and agility, empowers with empathy and anchors human-centric innovation to meaning.  </itunes:subtitle>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/kate-o-neill" img="https://img.transistorcdn.com/BVC9Il47Y-Wz1p3sR12n9n_V29jHxoUq0GlOMXjwW6k/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMTZkZjVhMTIt/ZjlhYi00NTc3LTkz/NjItYjgzZTRjYTg5/MmNiLzE2NzMzNzk2/MjgtaW1hZ2UuanBn.jpg">Kate O’Neill</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/3381d64f/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Automation, Agency and the Future of Work with Giselle Mota</title>
      <itunes:episode>15</itunes:episode>
      <podcast:episode>15</podcast:episode>
      <itunes:title>Automation, Agency and the Future of Work with Giselle Mota</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f22ca079-fb25-4995-b840-16b6c659f73f</guid>
      <link>https://share.transistor.fm/s/6bf9df5d</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/gmota">Giselle Mota</a> is a Principal Consultant for the Future of Work at ADP where she advises organizations on human agency, diversity and learning in the age of AI.  </p><p>In this energetic discussion, Giselle shares how navigating dyslexia spawned a passion for technology and enabling learning at work. Giselle stresses that human agency and automation are only mutually exclusive when AI is employed with the wrong end in mind. Prioritizing human experience over ‘doing more with less’, Giselle explores the impact – good and bad – of AI systems on humans at work today.</p><p>While ruminating on the future happening now, Giselle puts the onus on organizations to ensure no employee is left behind. From the warehouse floor to HR, the importance of diverse perspectives, rigorous due diligence and critical thinking when deploying AI systems is underscored. Along the way, Kimberly and Giselle dissect what AI algorithms can and cannot reasonably predict. Giselle then defines the leadership mindsets and talent needed to bring AI to work appropriately. With infectious optimism, she imposes a reality check on our innate desire to “just do cool things”. Finally, in a rousing call to action, Giselle makes a compelling argument for robust accountability and making ethics endemic to every human endeavor, including AI.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep15/transcript">here</a>.</p><p>Our final episode of Season 2 features <a href="https://www.koinsights.com/about/about-kate-oneill/">Kate O’Neill</a>. A tech humanist and author of ‘A Future so Bright’, Kate will discuss how we can architect the future of AI with strategic optimism. Subscribe to Pondering AI now so you don’t miss it.  </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/gmota">Giselle Mota</a> is a Principal Consultant for the Future of Work at ADP, where she advises organizations on human agency, diversity and learning in the age of AI.</p><p>In this energetic discussion, Giselle shares how navigating dyslexia spawned a passion for technology and enabling learning at work. Giselle stresses that human agency and automation are only mutually exclusive when AI is employed with the wrong end in mind. Prioritizing human experience over ‘doing more with less’, Giselle explores the impact – good and bad – of AI systems on humans at work today.</p><p>While ruminating on the future happening now, Giselle puts the onus on organizations to ensure no employee is left behind. From the warehouse floor to HR, the importance of diverse perspectives, rigorous due diligence and critical thinking when deploying AI systems is underscored. Along the way, Kimberly and Giselle dissect what AI algorithms can and cannot reasonably predict. Giselle then defines the leadership mindsets and talent needed to bring AI to work appropriately. With infectious optimism, she imposes a reality check on our innate desire to “just do cool things”. Finally, in a rousing call to action, Giselle makes a compelling argument for robust accountability and for making ethics endemic to every human endeavor, including AI.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep15/transcript">here</a>.</p><p>Our final episode of Season 2 features <a href="https://www.koinsights.com/about/about-kate-oneill/">Kate O’Neill</a>, a tech humanist and author of ‘A Future So Bright’. Kate will discuss how we can architect the future of AI with strategic optimism. Subscribe to Pondering AI now so you don’t miss it.</p>]]>
      </content:encoded>
      <pubDate>Wed, 01 Dec 2021 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/6bf9df5d/ac89253c.mp3" length="40892852" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:duration>2552</itunes:duration>
      <itunes:summary>Giselle Mota advocates for human enablement, tech accountability, courting disruption, admitting mistakes, more thinking by more people, solving real problems and self-help for AI. </itunes:summary>
      <itunes:subtitle>Giselle Mota advocates for human enablement, tech accountability, courting disruption, admitting mistakes, more thinking by more people, solving real problems and self-help for AI. </itunes:subtitle>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/giselle-mota" img="https://img.transistorcdn.com/V0REStWt4EWPcxAgFpKymLQsD3JZsf4psGmfgr8Jnd0/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vNWVhZTQ4MGUt/ODNkNS00ZmE2LTk5/MTMtMTIxOGZmODk4/NTg0LzE2NzMzNzk5/MDQtaW1hZ2UuanBn.jpg">Giselle Mota</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/6bf9df5d/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Growing Up with AI with Baroness Beeban Kidron </title>
      <itunes:episode>14</itunes:episode>
      <podcast:episode>14</podcast:episode>
      <itunes:title>Growing Up with AI with Baroness Beeban Kidron </itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2d9a71ee-cd12-40d5-9dfd-75ce9aa8d654</guid>
      <link>https://share.transistor.fm/s/872ad2a0</link>
      <description>
        <![CDATA[<p>Baroness Beeban Kidron is an award-winning filmmaker, a Crossbench Peer in the UK House of Lords and the Founder and Chair of the <a href="https://5rightsfoundation.com/">5Rights Foundation</a>.</p><p>In this eye-opening discussion, Beeban vividly describes how the seed for 5Rights was planted while getting up close and personal with teenagers navigating the physical and digital realms ‘In Real Life’. Beeban sounds a resounding alarm about why treating all humans as equal on the internet is regressive, and how existing business models have created a perfect societal storm, especially for children.</p><p>Intertwining the voices of these underserved and underrepresented stakeholders with some shocking facts, Beeban illustrates the true impact of the current digital experiment on young people. In that vein, Kimberly and Beeban examine behaviors we implicitly condone and, in fact, promote in the digital realm that would never pass muster in so-called real life. Speaking to the brilliantly terrifying <a href="https://twisted-toys.com/">Twisted Toys</a> campaign, Beeban shows how storytelling can make these critical yet oft sensitive topics accessible. Finally, Beeban speaks about critical breakthroughs such as the <a href="https://ico.org.uk/for-organisations/guide-to-data-protection/ico-codes-of-practice/age-appropriate-design-a-code-of-practice-for-online-services/">Age-Appropriate Design Code</a>, positive action being taken by digital platforms in response and the long road still ahead.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep14/transcript">here</a>.</p><p>Our next episode features <a href="https://www.linkedin.com/in/gmota">Giselle Mota</a>. Giselle is a Principal Consultant for the Future of Work at ADP, where she advises organizations on human agency, diversity and learning in the age of AI. Subscribe to Pondering AI now so you don’t miss it.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Baroness Beeban Kidron is an award-winning filmmaker, a Crossbench Peer in the UK House of Lords and the Founder and Chair of the <a href="https://5rightsfoundation.com/">5Rights Foundation</a>.</p><p>In this eye-opening discussion, Beeban vividly describes how the seed for 5Rights was planted while getting up close and personal with teenagers navigating the physical and digital realms ‘In Real Life’. Beeban sounds a resounding alarm about why treating all humans as equal on the internet is regressive, and how existing business models have created a perfect societal storm, especially for children.</p><p>Intertwining the voices of these underserved and underrepresented stakeholders with some shocking facts, Beeban illustrates the true impact of the current digital experiment on young people. In that vein, Kimberly and Beeban examine behaviors we implicitly condone and, in fact, promote in the digital realm that would never pass muster in so-called real life. Speaking to the brilliantly terrifying <a href="https://twisted-toys.com/">Twisted Toys</a> campaign, Beeban shows how storytelling can make these critical yet oft sensitive topics accessible. Finally, Beeban speaks about critical breakthroughs such as the <a href="https://ico.org.uk/for-organisations/guide-to-data-protection/ico-codes-of-practice/age-appropriate-design-a-code-of-practice-for-online-services/">Age-Appropriate Design Code</a>, positive action being taken by digital platforms in response and the long road still ahead.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep14/transcript">here</a>.</p><p>Our next episode features <a href="https://www.linkedin.com/in/gmota">Giselle Mota</a>. Giselle is a Principal Consultant for the Future of Work at ADP, where she advises organizations on human agency, diversity and learning in the age of AI. Subscribe to Pondering AI now so you don’t miss it.</p>]]>
      </content:encoded>
      <pubDate>Wed, 17 Nov 2021 05:00:00 -0500</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/872ad2a0/961d3778.mp3" length="42624000" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:duration>2660</itunes:duration>
      <itunes:summary>Baroness Beeban Kidron shows tech isn’t exempt from society’s rules, children aren’t de-facto adults and that data protection, child-centered design and digital rights enable the digital world children deserve. </itunes:summary>
      <itunes:subtitle>Baroness Beeban Kidron shows tech isn’t exempt from society’s rules, children aren’t de-facto adults and that data protection, child-centered design and digital rights enable the digital world children deserve. </itunes:subtitle>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/baroness-kidron" img="https://img.transistorcdn.com/HzSTv-_XpoQS5m3B4csOIS0qNIHLY9bP2gCb6QEjUw0/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vNTMwMjMyNmIt/MGM4Yi00NTUwLWEw/MmItOTM1ZWFiYjIw/Y2YwLzE2NzMzNzk4/ODgtaW1hZ2UuanBn.jpg">Baroness Kidron</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/872ad2a0/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Is AI-Driven Sustainability Sustainable with Vincent de Montalivet</title>
      <itunes:episode>13</itunes:episode>
      <podcast:episode>13</podcast:episode>
      <itunes:title>Is AI-Driven Sustainability Sustainable with Vincent de Montalivet</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d46c4130-909a-48a8-a055-9c1a553d6924</guid>
      <link>https://share.transistor.fm/s/ecdc7c5a</link>
      <description>
        <![CDATA[<p><a href="https://fr.linkedin.com/in/vincentdemontalivet">Vincent de Montalivet</a> is the Global AI Sustainability Leader at Capgemini where he develops strategies to use AI to combat climate change and drive corporate net-zero initiatives.</p><p>In this forthright discussion, Vincent charts his path from supply chain engineering to his current position at the crossroads of data, IT and sustainability. Vincent stresses this is the ‘decade of action’ and highlights cutting-edge AI applications enabling the turn from simulation to accountability in real time. Addressing fears about AI, Vincent shows how it enables rather than replaces human expertise.</p><p>In that vein, Kimberly and Vincent have a frank discussion about whether AI for environmental good balances AI’s own appetite for energy. Vincent examines different aspects of the argument and shares recent research, facts and figures to shed light on the debate. He describes why AI is not a silver bullet, why AI is not always required and emerging research into making AI itself green. Vincent then provides a 3-step roadmap for corporate sustainability initiatives. Discussing emerging innovations, Vincent pragmatically points out that we are only addressing 3% of the green use cases that can be addressed with AI today. He rightfully suggests focusing there.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep13/transcript">here</a>.</p><p>Our next episode features Baroness Beeban Kidron. She is the Founder and Chair of the <a href="https://5rightsfoundation.com/">5Rights Foundation</a>, which is leading the fight to protect children’s rights and well-being in the digital realm. Subscribe to Pondering AI now so you don’t miss it.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://fr.linkedin.com/in/vincentdemontalivet">Vincent de Montalivet</a> is the Global AI Sustainability Leader at Capgemini where he develops strategies to use AI to combat climate change and drive corporate net-zero initiatives.</p><p>In this forthright discussion, Vincent charts his path from supply chain engineering to his current position at the crossroads of data, IT and sustainability. Vincent stresses this is the ‘decade of action’ and highlights cutting-edge AI applications enabling the turn from simulation to accountability in real time. Addressing fears about AI, Vincent shows how it enables rather than replaces human expertise.</p><p>In that vein, Kimberly and Vincent have a frank discussion about whether AI for environmental good balances AI’s own appetite for energy. Vincent examines different aspects of the argument and shares recent research, facts and figures to shed light on the debate. He describes why AI is not a silver bullet, why AI is not always required and emerging research into making AI itself green. Vincent then provides a 3-step roadmap for corporate sustainability initiatives. Discussing emerging innovations, Vincent pragmatically points out that we are only addressing 3% of the green use cases that can be addressed with AI today. He rightfully suggests focusing there.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep13/transcript">here</a>.</p><p>Our next episode features Baroness Beeban Kidron. She is the Founder and Chair of the <a href="https://5rightsfoundation.com/">5Rights Foundation</a>, which is leading the fight to protect children’s rights and well-being in the digital realm. Subscribe to Pondering AI now so you don’t miss it.</p>]]>
      </content:encoded>
      <pubDate>Wed, 03 Nov 2021 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/ecdc7c5a/87a8cc3f.mp3" length="31662691" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:duration>1975</itunes:duration>
      <itunes:summary>Vincent de Montalivet discusses using AI to help the planet, enabling sustainable business models, creating green value chains, the debate over green AI, progress to date and the work still to come. </itunes:summary>
      <itunes:subtitle>Vincent de Montalivet discusses using AI to help the planet, enabling sustainable business models, creating green value chains, the debate over green AI, progress to date and the work still to come. </itunes:subtitle>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/vincent-de-montalivet" img="https://img.transistorcdn.com/v5p1FCBCzyuNbaTKNO6JoxgZKudTZJEHtu6-TUGnu2g/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vYWFiMDZiMDEt/ZjFlOS00ODVhLThi/YWEtOWViMjBkNWQx/YjM0LzE2NzMzNzk4/MTQtaW1hZ2UuanBn.jpg">Vincent de Montalivet</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/ecdc7c5a/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>The Case for Humanizing Technology with David Ryan Polgar </title>
      <itunes:episode>12</itunes:episode>
      <podcast:episode>12</podcast:episode>
      <itunes:title>The Case for Humanizing Technology with David Ryan Polgar </itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">90e095ba-85fc-471f-a83c-6b66295defa7</guid>
      <link>https://share.transistor.fm/s/0838432b</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/davidryanpolgar">David Ryan Polgar</a> is the Founder of All Tech is Human. He is a leading tech ethicist, an advocate for human-centric technology, and an advisor on improving social media and crafting a better digital future.</p><p>In this timely discussion, David traces his not-so-unlikely path from practicing law to being a standard bearer for the responsible technology movement. He artfully illustrates the many ways technology is altering the human experience and makes the case for “no application without representation”.</p><p>Arguing that many of AI’s misguided foibles stem from a lack of imagination, David shows how all paths to responsible AI start with diversity. Kimberly and David debunk the myth of the ethical superhero but agree there may be a need for ethical unicorns. David expounds on the need for expansive education, why non-traditional career paths will become traditional and the benefits of thinking differently. Acknowledging the complex, nuanced problems ahead, David advocates for space to air constructive, critical, and, yes, contrarian points of view. While disavowing 80s sitcoms, David celebrates youth intuition, bemoans the blame game, prioritizes progress over problem statements, and leans into our inevitable mistakes. Finally, David invokes a future in which responsible tech is so in vogue it becomes altogether unremarkable.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep12/transcript">here</a>.</p><p>Our next episode features <a href="https://fr.linkedin.com/in/vincentdemontalivet">Vincent de Montalivet</a>, leader of Capgemini’s global AI Sustainability program. Vincent will help us explore the yin and yang of AI’s relationship with the environment. Subscribe now to Pondering AI so you don’t miss it.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/davidryanpolgar">David Ryan Polgar</a> is the Founder of All Tech is Human. He is a leading tech ethicist, an advocate for human-centric technology, and an advisor on improving social media and crafting a better digital future.</p><p>In this timely discussion, David traces his not-so-unlikely path from practicing law to being a standard bearer for the responsible technology movement. He artfully illustrates the many ways technology is altering the human experience and makes the case for “no application without representation”.</p><p>Arguing that many of AI’s misguided foibles stem from a lack of imagination, David shows how all paths to responsible AI start with diversity. Kimberly and David debunk the myth of the ethical superhero but agree there may be a need for ethical unicorns. David expounds on the need for expansive education, why non-traditional career paths will become traditional and the benefits of thinking differently. Acknowledging the complex, nuanced problems ahead, David advocates for space to air constructive, critical, and, yes, contrarian points of view. While disavowing 80s sitcoms, David celebrates youth intuition, bemoans the blame game, prioritizes progress over problem statements, and leans into our inevitable mistakes. Finally, David invokes a future in which responsible tech is so in vogue it becomes altogether unremarkable.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep12/transcript">here</a>.</p><p>Our next episode features <a href="https://fr.linkedin.com/in/vincentdemontalivet">Vincent de Montalivet</a>, leader of Capgemini’s global AI Sustainability program. Vincent will help us explore the yin and yang of AI’s relationship with the environment. Subscribe now to Pondering AI so you don’t miss it.</p>]]>
      </content:encoded>
      <pubDate>Wed, 20 Oct 2021 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/0838432b/40196e10.mp3" length="46054268" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:duration>2874</itunes:duration>
      <itunes:summary>David Ryan Polgar makes the case for value-driven technology, an educational renaissance, passionate disagreements, mindfully architecting our future, confident humility and progress over perfection.  </itunes:summary>
      <itunes:subtitle>David Ryan Polgar makes the case for value-driven technology, an educational renaissance, passionate disagreements, mindfully architecting our future, confident humility and progress over perfection.  </itunes:subtitle>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/david-ryan-polger" img="https://img.transistorcdn.com/Nn8PT9KAJ1ZaB5oB0NzbgDteKYI9zP3g1t4OpMX48JU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMzE0Zjc5MDUt/YjM1ZS00NTkxLTkz/MjEtOWU3MzUwZGEw/ZjhlLzE2NzMzNzk3/NzYtaW1hZ2UuanBn.jpg">David Ryan Polgar</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/0838432b/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Your (Personal) Digital Twin with Dr. Valérie Morignat PhD</title>
      <itunes:episode>11</itunes:episode>
      <podcast:episode>11</podcast:episode>
      <itunes:title>Your (Personal) Digital Twin with Dr. Valérie Morignat PhD</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">40f32780-3c3b-4a42-912d-5c3fc47b00dd</guid>
      <link>https://share.transistor.fm/s/b6f6ba16</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/valeriemorignat">Dr. Valérie Morignat PhD</a> is the CEO of Intelligent Story and a leading advisor on the creative economy. She is a true polymath working at the intersection of art, culture, and technology.</p><p>In this perceptive discussion, Valérie illustrates how cultural legacies inform technology and innovation today. Tracing a path from storytelling in caves to modern Sci-Fi, she proves that everything new takes (a lot of) time. Far from theoretical, Valérie shows how this philosophical understanding helps business innovators navigate the current AI landscape.</p><p>Discussing the evolution of VR/AR, Valérie highlights the existential quandary created by our increasingly fragmented digital identities. Kimberly and Valérie discuss the pillars of responsible innovation and the amplification challenges AI creates. Valérie shares the power of AI to teach us about ourselves and increase human learning, creativity, and autonomy. Assuming, of course, we don’t encode ancient, spurious classification schemes or aggravate negative behaviors. She also describes our quest for authenticity and flipping the script to search for the real in the virtual.</p><p>Finally, Valérie sketches a roadmap for success including executive education and incremental adoption to create trust and change our embedded mental models.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep11/transcript">here</a>.</p><p>Our next episode features <a href="https://www.linkedin.com/in/davidryanpolgar">David Ryan Polgar</a>, founder of All Tech is Human. David is a leading tech ethicist and responsible technology advocate who is well-known for his work on improving social media. Subscribe now so you don’t miss it.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/valeriemorignat">Dr. Valérie Morignat PhD</a> is the CEO of Intelligent Story and a leading advisor on the creative economy. She is a true polymath working at the intersection of art, culture, and technology.</p><p>In this perceptive discussion, Valérie illustrates how cultural legacies inform technology and innovation today. Tracing a path from storytelling in caves to modern Sci-Fi, she proves that everything new takes (a lot of) time. Far from theoretical, Valérie shows how this philosophical understanding helps business innovators navigate the current AI landscape.</p><p>Discussing the evolution of VR/AR, Valérie highlights the existential quandary created by our increasingly fragmented digital identities. Kimberly and Valérie discuss the pillars of responsible innovation and the amplification challenges AI creates. Valérie shares the power of AI to teach us about ourselves and increase human learning, creativity, and autonomy. Assuming, of course, we don’t encode ancient, spurious classification schemes or aggravate negative behaviors. She also describes our quest for authenticity and flipping the script to search for the real in the virtual.</p><p>Finally, Valérie sketches a roadmap for success including executive education and incremental adoption to create trust and change our embedded mental models.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep11/transcript">here</a>.</p><p>Our next episode features <a href="https://www.linkedin.com/in/davidryanpolgar">David Ryan Polgar</a>, founder of All Tech is Human. David is a leading tech ethicist and responsible technology advocate who is well-known for his work on improving social media. Subscribe now so you don’t miss it.</p>]]>
      </content:encoded>
      <pubDate>Wed, 06 Oct 2021 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/b6f6ba16/143fe9c5.mp3" length="44245758" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:duration>2761</itunes:duration>
      <itunes:summary>Dr. Valérie Morignat PhD ponders the outsized influence of ancient cultures on technology today, AI’s penchant for amplification, how to avoid opening Pandora’s box and why hybridization is the future.</itunes:summary>
      <itunes:subtitle>Dr. Valérie Morignat PhD ponders the outsized influence of ancient cultures on technology today, AI’s penchant for amplification, how to avoid opening Pandora’s box and why hybridization is the future.</itunes:subtitle>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/dr-valerie-morignat-phd" img="https://img.transistorcdn.com/DxpMGyF9tGPvfeX3kjzjXOFiP5fsdOMk6_FVO5QF_nU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDk0YzkzYzct/NGM4Yi00MmYwLWJk/MDEtMWM0ODk1ZDcw/ODIyLzE2NzMzNzk5/MjItaW1hZ2UuanBn.jpg">Dr. Valérie Morignat PhD</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/b6f6ba16/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>The Path to Zero Exclusion with Yonah Welker </title>
      <itunes:episode>10</itunes:episode>
      <podcast:episode>10</podcast:episode>
      <itunes:title>The Path to Zero Exclusion with Yonah Welker </itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">20b604e1-14f9-4c46-99dc-4fbcd31edb66</guid>
      <link>https://share.transistor.fm/s/b9bfbd7d</link>
      <description>
        <![CDATA[<p><a href="https://ch.linkedin.com/in/welker">Yonah Welker</a> is a technology innovator, influencer, and advocate for diversity and zero exclusion in AI. They are at the forefront of policies and applications for adaptive, assistive, and social AI.</p><p>In this illuminating discussion, Yonah traces their personal journey from isolation to advocacy through technology. They are passionate about the future of AI-enabled education, healthcare, and civics. Yet they caution that our current approach to inclusion is not, in fact, inclusive. While evaluating mechanisms for accountability, Yonah shares lessons learned from the European Commission’s diverse approach to technology evaluation.</p><p>Yonah has an expansive view of how AI can “change everything” for those who experience life differently – whether they are autistic, neurodiverse, disabled or dyslexic. Kimberly and Yonah discuss how AI is expanding the borders of the classroom and workplace today. And how these solutions can inadvertently reinforce existing barriers if not mindfully applied. This leads naturally to the need for broad community collaboration and human involvement beyond traditional corporate boundaries.</p><p>Yonah highlights our responsibilities as digital citizens and the critical debate over digital ownership. Finally, Yonah emphasizes that we are all, at our core, activists who can influence the trajectory of AI.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep10/transcript">here</a>.</p><p>Our next episode features <a href="https://www.linkedin.com/in/valeriemorignat">Dr. Valérie Morignat PhD</a>. Valérie is the CEO of Intelligent Story and a leading advisor on the creative economy who works at the intersection of art and AI. Subscribe now so you don’t miss it.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://ch.linkedin.com/in/welker">Yonah Welker</a> is a technology innovator, influencer, and advocate for diversity and zero exclusion in AI. They are at the forefront of policies and applications for adaptive, assistive, and social AI.</p><p>In this illuminating discussion, Yonah traces their personal journey from isolation to advocacy through technology. They are passionate about the future of AI-enabled education, healthcare, and civics. Yet they caution that our current approach to inclusion is not, in fact, inclusive. While evaluating mechanisms for accountability, Yonah shares lessons learned from the European Commission’s diverse approach to technology evaluation.</p><p>Yonah has an expansive view of how AI can “change everything” for those who experience life differently – whether they are autistic, neurodiverse, disabled or dyslexic. Kimberly and Yonah discuss how AI is expanding the borders of the classroom and workplace today. And how these solutions can inadvertently reinforce existing barriers if not mindfully applied. This leads naturally to the need for broad community collaboration and human involvement beyond traditional corporate boundaries.</p><p>Yonah highlights our responsibilities as digital citizens and the critical debate over digital ownership. Finally, Yonah emphasizes that we are all, at our core, activists who can influence the trajectory of AI.</p><p>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep10/transcript">here</a>.</p><p>Our next episode features <a href="https://www.linkedin.com/in/valeriemorignat">Dr. Valérie Morignat PhD</a>. Valérie is the CEO of Intelligent Story and a leading advisor on the creative economy who works at the intersection of art and AI. Subscribe now so you don’t miss it.</p>]]>
      </content:encoded>
      <pubDate>Wed, 22 Sep 2021 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/b9bfbd7d/02ad313c.mp3" length="39394102" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:duration>2458</itunes:duration>
      <itunes:summary>Yonah Welker shares their unique path to technology, exposes the limits of inclusion, shows why digital safety and comfort aren’t synonymous, challenges us to collaborate broadly and embrace our role as digital citizens. </itunes:summary>
      <itunes:subtitle>Yonah Welker shares their unique path to technology, exposes the limits of inclusion, shows why digital safety and comfort aren’t synonymous, challenges us to collaborate broadly and embrace our role as digital citizens. </itunes:subtitle>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/yonah-welker" img="https://img.transistorcdn.com/o8jD2rdEGk46X0622pzwxupPpCx5B1cvM6wpRYURGUw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vYmQwNzIyMDQt/NWQzZS00OGUxLWEw/NjUtZDQ4YTk2MzMz/MjMwLzE2NzMzNzk4/NDYtaW1hZ2UuanBn.jpg">Yonah Welker</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/b9bfbd7d/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>The Privacy Paradox with Dr. Eric Perakslis, PhD</title>
      <itunes:episode>9</itunes:episode>
      <podcast:episode>9</podcast:episode>
      <itunes:title>The Privacy Paradox with Dr. Eric Perakslis, PhD</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">edc0b1bb-efc3-45ba-83c7-7ae67ebb0e3c</guid>
      <link>https://share.transistor.fm/s/52e53ca2</link>
      <description>
        <![CDATA[<p><a href="https://dcri.org/eric-perakslis/">Dr. Eric Perakslis</a>, PhD is the Chief Science and Digital Officer at the Duke Clinical Research Institute.  </p><p>In this incisive discussion, Eric exposes the curious nature of healthcare data. He proposes treating data like a digital specimen: one that requires clear consent and protection against misuse. Expanding our view beyond the doctor’s office, Eric shows why adverse effects from data misuse can be much harder to cure than a rash. As well as our innate human tendency to focus on technology’s potential while overlooking patient vulnerabilities. </p><p>While discussing current data protections, Eric lays the foundation for a shift from privacy toward non-discrimination. Along the way, Kimberly and Eric discuss the many ways anonymous data can compromise patient privacy and the research it underpins. In doing so, a critical loophole in existing institutional review boards (IRB) and regulatory safeguards is exposed. An avid data advocate, Eric adroitly argues that proper patient and data protection will accelerate innovation and life-saving research. Finally, Eric makes a case for doing the hard things first and why the greatest research opportunities are rooted in equity.  </p><p><br>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep9/transcript">here</a>. </p><p><br>Our next episode features <a href="https://ch.linkedin.com/in/welker">Yonah Welker</a>. They are a ‘tech explorer’ and leading voice regarding the need for diversity and zero exclusion in AI as well as the role of social AI. Subscribe now so you don’t miss it.  </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://dcri.org/eric-perakslis/">Dr. Eric Perakslis</a>, PhD is the Chief Science and Digital Officer at the Duke Clinical Research Institute.  </p><p>In this incisive discussion, Eric exposes the curious nature of healthcare data. He proposes treating data like a digital specimen: one that requires clear consent and protection against misuse. Expanding our view beyond the doctor’s office, Eric shows why adverse effects from data misuse can be much harder to cure than a rash. As well as our innate human tendency to focus on technology’s potential while overlooking patient vulnerabilities. </p><p>While discussing current data protections, Eric lays the foundation for a shift from privacy toward non-discrimination. Along the way, Kimberly and Eric discuss the many ways anonymous data can compromise patient privacy and the research it underpins. In doing so, a critical loophole in existing institutional review boards (IRB) and regulatory safeguards is exposed. An avid data advocate, Eric adroitly argues that proper patient and data protection will accelerate innovation and life-saving research. Finally, Eric makes a case for doing the hard things first and why the greatest research opportunities are rooted in equity.  </p><p><br>A transcript of this episode can be found <a href="https://pondering-ai.transistor.fm/episodes/ep9/transcript">here</a>. </p><p><br>Our next episode features <a href="https://ch.linkedin.com/in/welker">Yonah Welker</a>. They are a ‘tech explorer’ and leading voice regarding the need for diversity and zero exclusion in AI as well as the role of social AI. Subscribe now so you don’t miss it.  </p>]]>
      </content:encoded>
      <pubDate>Wed, 08 Sep 2021 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/52e53ca2/41d5d828.mp3" length="30451831" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:duration>1899</itunes:duration>
      <itunes:summary>Dr. Eric Perakslis ponders digital data specimens, the limits of anonymous data, the ongoing debate over eggs, the value of non-discrimination over privacy, and why equity is the next medical frontier. </itunes:summary>
      <itunes:subtitle>Dr. Eric Perakslis ponders digital data specimens, the limits of anonymous data, the ongoing debate over eggs, the value of non-discrimination over privacy, and why equity is the next medical frontier. </itunes:subtitle>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/eric-perakslis" img="https://img.transistorcdn.com/nGr4VuSjq6LU18a-vQw860bVBsx5OpSNZXNE4pP96C0/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vZGUzMTZlZmMt/YjVjNy00YmZkLWJh/MTEtODVlODM3Njhj/MjU4LzE2NzMzNzk3/OTItaW1hZ2UuanBn.jpg">Eric Perakslis</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/52e53ca2/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>AI Principles in Practice with Ansgar Koene</title>
      <itunes:episode>8</itunes:episode>
      <podcast:episode>8</podcast:episode>
      <itunes:title>AI Principles in Practice with Ansgar Koene</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a48bc376-fa3a-4d80-be78-3adda42f84c4</guid>
      <link>https://share.transistor.fm/s/e86355bb</link>
      <description>
        <![CDATA[<p><a href="https://uk.linkedin.com/in/akoene">Dr. Ansgar Koene</a> is the Global AI Ethics and Regulatory Leader at Ernst &amp; Young (EY), a Sr. Research Fellow at the University of Nottingham and chair of the IEEE P7003 Standard for Algorithm Bias Considerations working group.  </p><p>In this visionary discussion, Ansgar traces his path from robotics and computational social science to the ethics of data sharing and AI. Drawing from his wide-ranging research, Ansgar illustrates the need for true stakeholder representation; what diversity looks like in practice; and why context, critical thinking and common sense are required in AI. </p><p>Describing some of the more subtle yet most impactful dilemmas in AI, Ansgar highlights the natural tension between developing foresight to avoid harms whilst reacting to harms that have already occurred. Ansgar and Kimberly discuss emerging regulations and the link between power and accountability in AI. Ansgar advocates for broad AI literacy but cautions against setting citizens and users up with unrealistic expectations. Lastly, Ansgar muses about the future and why the biggest challenges created by AI might not be obvious today. </p><p>A full transcript of this episode can be found <a href="https://www.sas.com/content/dam/SAS/documents/event-collateral/2021/en/podcast-transcripts/ai-podcast-s1e8.pdf">here</a>.</p><p>Thank you for joining us for Season 1 of Pondering AI. Join us next season as we ponder the ways in which AI continues to elevate and challenge our humanity. Subscribe to Pondering AI now so you don’t miss it.  </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://uk.linkedin.com/in/akoene">Dr. Ansgar Koene</a> is the Global AI Ethics and Regulatory Leader at Ernst &amp; Young (EY), a Sr. Research Fellow at the University of Nottingham and chair of the IEEE P7003 Standard for Algorithm Bias Considerations working group.  </p><p>In this visionary discussion, Ansgar traces his path from robotics and computational social science to the ethics of data sharing and AI. Drawing from his wide-ranging research, Ansgar illustrates the need for true stakeholder representation; what diversity looks like in practice; and why context, critical thinking and common sense are required in AI. </p><p>Describing some of the more subtle yet most impactful dilemmas in AI, Ansgar highlights the natural tension between developing foresight to avoid harms whilst reacting to harms that have already occurred. Ansgar and Kimberly discuss emerging regulations and the link between power and accountability in AI. Ansgar advocates for broad AI literacy but cautions against setting citizens and users up with unrealistic expectations. Lastly, Ansgar muses about the future and why the biggest challenges created by AI might not be obvious today. </p><p>A full transcript of this episode can be found <a href="https://www.sas.com/content/dam/SAS/documents/event-collateral/2021/en/podcast-transcripts/ai-podcast-s1e8.pdf">here</a>.</p><p>Thank you for joining us for Season 1 of Pondering AI. Join us next season as we ponder the ways in which AI continues to elevate and challenge our humanity. Subscribe to Pondering AI now so you don’t miss it.  </p>]]>
      </content:encoded>
      <pubDate>Wed, 07 Jul 2021 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/e86355bb/a7b15664.mp3" length="38202933" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:duration>2355</itunes:duration>
      <itunes:summary>Ansgar Koene links AI ethics to his early work in robotics, discusses the interplay between online and offline behaviors and makes the case for foresight, adult accountability, and regulation in AI. </itunes:summary>
      <itunes:subtitle>Ansgar Koene links AI ethics to his early work in robotics, discusses the interplay between online and offline behaviors and makes the case for foresight, adult accountability, and regulation in AI. </itunes:subtitle>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/ansgar-koene" img="https://img.transistorcdn.com/W2mvy04PdF8xdruq8jw4XJ783WcQrSr_oe6RiGh07Ls/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vYjc4YmFkM2Ut/YzY0Zi00ZDk3LTgy/NDItYTM3ZGQzMDkz/MTI2LzE2NzMzODAx/MTItaW1hZ2UuanBn.jpg">Ansgar Koene</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/e86355bb/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>AI: Competitor or Collaborator with Lama Nachman </title>
      <itunes:episode>7</itunes:episode>
      <podcast:episode>7</podcast:episode>
      <itunes:title>AI: Competitor or Collaborator with Lama Nachman </itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">3359efe9-4ffc-4b37-927f-b7a613dc9efa</guid>
      <link>https://share.transistor.fm/s/951b7935</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/lama-nachman-3107263/">Lama Nachman</a> is an Intel Fellow and the director of Intel’s Human &amp; AI Systems Research Lab. She also led Intel’s Responsible AI program. Lama’s team researches how AI can be applied to deliver contextually appropriate experiences that increase accessibility and amplify human potential.  </p><p><br>In this inspirational discussion, Lama exposes the need for equity in AI, demonstrates the difficulty in empowering authentic human interaction, and shows why ‘Wizard of Oz’ approaches as well as a willingness to go back to the drawing board are critical. </p><p><br>Through the lens of her work in areas ranging from early childhood education to manufacturing and assistive technologies, Lama deftly illustrates the ethical dilemmas that arise with any AI application - no matter how well-meaning. Kimberly and Lama discuss why perfectionism is the enemy of progress and the need to design for uncertainty in AI. Speaking to her quest to give people suffering from ALS back their voice, Lama stresses how designing for authenticity over expediency is critical to unlock the human experience.  </p><p><br>While pondering the many ethical conundrums that keep her up at night, Lama shows how an expansive, multi-disciplinary approach is critical to mitigate harm. And why cooperation between humans and AI maximizes the potential of both.  </p><p><br>A full transcript of this episode can be found <a href="https://www.sas.com/content/dam/SAS/documents/event-collateral/2021/en/podcast-transcripts/ai-podcast-s1e7.pdf">here</a>. </p><p><br>Our final episode this season features <a href="https://uk.linkedin.com/in/akoene">Dr. Ansgar Koene</a>. Ansgar is the Global AI Ethics and Regulatory Leader at EY and a Sr. Research Fellow who specializes in social media, data ethics and AI regulation. Subscribe now to Pondering AI so you don’t miss him. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/lama-nachman-3107263/">Lama Nachman</a> is an Intel Fellow and the director of Intel’s Human &amp; AI Systems Research Lab. She also led Intel’s Responsible AI program. Lama’s team researches how AI can be applied to deliver contextually appropriate experiences that increase accessibility and amplify human potential.  </p><p><br>In this inspirational discussion, Lama exposes the need for equity in AI, demonstrates the difficulty in empowering authentic human interaction, and shows why ‘Wizard of Oz’ approaches as well as a willingness to go back to the drawing board are critical. </p><p><br>Through the lens of her work in areas ranging from early childhood education to manufacturing and assistive technologies, Lama deftly illustrates the ethical dilemmas that arise with any AI application - no matter how well-meaning. Kimberly and Lama discuss why perfectionism is the enemy of progress and the need to design for uncertainty in AI. Speaking to her quest to give people suffering from ALS back their voice, Lama stresses how designing for authenticity over expediency is critical to unlock the human experience.  </p><p><br>While pondering the many ethical conundrums that keep her up at night, Lama shows how an expansive, multi-disciplinary approach is critical to mitigate harm. And why cooperation between humans and AI maximizes the potential of both.  </p><p><br>A full transcript of this episode can be found <a href="https://www.sas.com/content/dam/SAS/documents/event-collateral/2021/en/podcast-transcripts/ai-podcast-s1e7.pdf">here</a>. </p><p><br>Our final episode this season features <a href="https://uk.linkedin.com/in/akoene">Dr. Ansgar Koene</a>. Ansgar is the Global AI Ethics and Regulatory Leader at EY and a Sr. Research Fellow who specializes in social media, data ethics and AI regulation. Subscribe now to Pondering AI so you don’t miss him. </p>]]>
      </content:encoded>
      <pubDate>Wed, 23 Jun 2021 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/951b7935/d70e0742.mp3" length="37329610" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:duration>2328</itunes:duration>
      <itunes:summary>Lama Nachman discusses frustration as a motivator, designing for authenticity, embracing uncertainty, clarity of purpose and why nothing is obvious in AI - even when giving people back their voice. </itunes:summary>
      <itunes:subtitle>Lama Nachman discusses frustration as a motivator, designing for authenticity, embracing uncertainty, clarity of purpose and why nothing is obvious in AI - even when giving people back their voice. </itunes:subtitle>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/lama-nachman" img="https://img.transistorcdn.com/O5WOogKz7Rmq1Z5ZkhoOnquyj1lU4H9VKOk4T0vb40s/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vOTQ2NjllYTUt/YjBlYi00YmRkLThk/MjQtMjg0ZGQ5OWEw/MWY4LzE2NzMzODAw/OTYtaW1hZ2UuanBn.jpg">Lama Nachman</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/951b7935/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Beyond Bias in AI with Shalini Kantayya</title>
      <itunes:episode>6</itunes:episode>
      <podcast:episode>6</podcast:episode>
      <itunes:title>Beyond Bias in AI with Shalini Kantayya</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">46420052-33b7-4984-b2f7-44d5b14d896c</guid>
      <link>https://share.transistor.fm/s/2a326219</link>
      <description>
        <![CDATA[<p><a href="https://www.shalinikantayya.net/">Shalini Kantayya</a> is a storyteller, social activist, and filmmaker who explores challenging social topics with empathy and humor. Shalini’s film <a href="https://www.netflix.com/title/81328723">Coded Bias</a> debunks the myth that AI algorithms are objective by nature. </p><p>In this thought-provoking discussion, Shalini illustrates why film is a powerful medium for social change (hint: it’s about empathy), shares her belief that humans – not machines – must reinvent the future, and shows how inclusion and a focus on the human experience are critical to get AI right.  </p><p><br>Shalini artfully traces the inspiration for Coded Bias and the danger in ceding human autonomy to <em>any</em> unintelligent system. Kimberly and Shalini discuss why good intent and a sole focus on fairness and bias are not enough when considering AI’s future. Highlighting the work of researchers such as <a href="https://www.linkedin.com/in/timnit-gebru-7b3b407/">Dr. Timnit Gebru</a> and <a href="https://www.linkedin.com/in/buolamwini">Joy Buolamwini</a>, Shalini makes the case for inclusion in AI and shares a proven recipe for moving the dial on ethical AI. Finally, Shalini speaks to the need for empathy in all things – including toward our innate human propensity for bias. And how storytelling keeps the human experience front-and-center, allowing us to cross boundaries and open hearts and minds to a different point of view.   </p><p><br>A full transcript of this episode can be found <a href="https://www.sas.com/content/dam/SAS/documents/event-collateral/2021/en/podcast-transcripts/ai-podcast-s1e6.pdf">here</a>. </p><p><br>Our next episode features <a href="https://www.linkedin.com/in/lama-nachman-3107263/">Lama Nachman</a>. Lama leads Intel’s Human &amp; AI Systems Research Lab where she directs some of the most impactful work - such as giving people back their voice - in applied AI today. 
Subscribe now to Pondering AI so you don’t miss her. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.shalinikantayya.net/">Shalini Kantayya</a> is a storyteller, social activist, and filmmaker who explores challenging social topics with empathy and humor. Shalini’s film <a href="https://www.netflix.com/title/81328723">Coded Bias</a> debunks the myth that AI algorithms are objective by nature. </p><p>In this thought-provoking discussion, Shalini illustrates why film is a powerful medium for social change (hint: it’s about empathy), shares her belief that humans – not machines – must reinvent the future, and shows how inclusion and a focus on the human experience are critical to get AI right.  </p><p><br>Shalini artfully traces the inspiration for Coded Bias and the danger in ceding human autonomy to <em>any</em> unintelligent system. Kimberly and Shalini discuss why good intent and a sole focus on fairness and bias are not enough when considering AI’s future. Highlighting the work of researchers such as <a href="https://www.linkedin.com/in/timnit-gebru-7b3b407/">Dr. Timnit Gebru</a> and <a href="https://www.linkedin.com/in/buolamwini">Joy Buolamwini</a>, Shalini makes the case for inclusion in AI and shares a proven recipe for moving the dial on ethical AI. Finally, Shalini speaks to the need for empathy in all things – including toward our innate human propensity for bias. And how storytelling keeps the human experience front-and-center, allowing us to cross boundaries and open hearts and minds to a different point of view.   </p><p><br>A full transcript of this episode can be found <a href="https://www.sas.com/content/dam/SAS/documents/event-collateral/2021/en/podcast-transcripts/ai-podcast-s1e6.pdf">here</a>. </p><p><br>Our next episode features <a href="https://www.linkedin.com/in/lama-nachman-3107263/">Lama Nachman</a>. Lama leads Intel’s Human &amp; AI Systems Research Lab where she directs some of the most impactful work - such as giving people back their voice - in applied AI today. 
Subscribe now to Pondering AI so you don’t miss her. </p>]]>
      </content:encoded>
      <pubDate>Wed, 09 Jun 2021 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/2a326219/1adf0ca9.mp3" length="31048219" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:duration>1936</itunes:duration>
      <itunes:summary>Shalini Kantayya shares her journey to AI advocacy, documents the invisible hand of AI today, how storytelling and empathy enable difficult conversations, and why everyday people are key to transformative social change.</itunes:summary>
      <itunes:subtitle>Shalini Kantayya shares her journey to AI advocacy, documents the invisible hand of AI today, how storytelling and empathy enable difficult conversations, and why everyday people are key to transformative social change.</itunes:subtitle>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://www.shalinikantayya.net/" img="https://img.transistorcdn.com/2aOkTikZPwXByiTG_4ht4PzDE_3GuCHobTdGKiovxVA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMjRiMzZjNjkt/MzlhOS00NDEwLTli/M2MtMjNmMTkwNzk0/ZmEzLzE2NzMzODAw/NzUtaW1hZ2UuanBn.jpg">Shalini Kantayya</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/2a326219/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>AI Education for All with Teemu Roos</title>
      <itunes:episode>5</itunes:episode>
      <podcast:episode>5</podcast:episode>
      <itunes:title>AI Education for All with Teemu Roos</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b8159cb1-9c5b-4f8e-87ec-b42e1c7524be</guid>
      <link>https://share.transistor.fm/s/d3bdd730</link>
      <description>
        <![CDATA[<p><a href="https://fi.linkedin.com/in/teemu-roos">Teemu Roos</a> is the lead instructor of the Elements of AI online course which has a pivotal role in Finland's unique, inclusive AI strategy. Teemu is also a Professor of Computer Science at the <a href="https://www.cs.helsinki.fi/u/ttonteri/">University of Helsinki</a> and leader of the AI Education programme at the <a href="https://fcai.fi">Finnish Center for AI</a>.</p><p>In this encouraging discussion, Teemu shares how an insatiable appetite for discovery led to a career as an ML researcher and educator. His excitement about projects ranging from astrophysics to neonatal brain development highlights AI’s endless potential and the importance of imagination and curiosity.  </p><p>Teemu deftly explains why homogeneity makes doing good AI hard. He enthusiastically demonstrates how collaboration between data scientists, experts and laypersons exposes otherwise hidden opportunities. Kimberly and Teemu discuss the need for broad citizen engagement in AI and why the target audience for Elements of AI is “everyone who <em>isn’t</em> interested in AI”. And why we must focus on ethics and privacy now. With humor and optimism, Teemu helps us envision a future where everyone is informed, passionate and actively engaged in AI. </p><p>A full transcript of this episode can be found <a href="https://www.sas.com/content/dam/SAS/documents/event-collateral/2021/en/podcast-transcripts/ai-podcast-s1e5.pdf">here</a>.</p><p>Our next episode features <a href="https://www.shalinikantayya.net/">Shalini Kantayya</a>. Shalini is a filmmaker, activist, and self-proclaimed sci-fi fanatic. Her documentary <em>Coded Bias</em> exposes the biases and inequalities that can lurk within AI algorithms. Subscribe to Pondering AI now so you don’t miss her. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://fi.linkedin.com/in/teemu-roos">Teemu Roos</a> is the lead instructor of the Elements of AI online course which has a pivotal role in Finland's unique, inclusive AI strategy. Teemu is also a Professor of Computer Science at the <a href="https://www.cs.helsinki.fi/u/ttonteri/">University of Helsinki</a> and leader of the AI Education programme at the <a href="https://fcai.fi">Finnish Center for AI</a>.</p><p>In this encouraging discussion, Teemu shares how an insatiable appetite for discovery led to a career as an ML researcher and educator. His excitement about projects ranging from astrophysics to neonatal brain development highlights AI’s endless potential and the importance of imagination and curiosity.  </p><p>Teemu deftly explains why homogeneity makes doing good AI hard. He enthusiastically demonstrates how collaboration between data scientists, experts and laypersons exposes otherwise hidden opportunities. Kimberly and Teemu discuss the need for broad citizen engagement in AI and why the target audience for Elements of AI is “everyone who <em>isn’t</em> interested in AI”. And why we must focus on ethics and privacy now. With humor and optimism, Teemu helps us envision a future where everyone is informed, passionate and actively engaged in AI. </p><p>A full transcript of this episode can be found <a href="https://www.sas.com/content/dam/SAS/documents/event-collateral/2021/en/podcast-transcripts/ai-podcast-s1e5.pdf">here</a>.</p><p>Our next episode features <a href="https://www.shalinikantayya.net/">Shalini Kantayya</a>. Shalini is a filmmaker, activist, and self-proclaimed sci-fi fanatic. Her documentary <em>Coded Bias</em> exposes the biases and inequalities that can lurk within AI algorithms. Subscribe to Pondering AI now so you don’t miss her. </p>]]>
      </content:encoded>
      <pubDate>Wed, 26 May 2021 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/d3bdd730/03ffb481.mp3" length="28249111" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:duration>1761</itunes:duration>
      <itunes:summary>Teemu Roos exposes AI’s unlimited potential, the opportunities that come from collaborating with experts and laypersons alike, the need for pervasive literacy and his mission to engage everyone (yes, everyone) in AI.</itunes:summary>
      <itunes:subtitle>Teemu Roos exposes AI’s unlimited potential, the opportunities that come from collaborating with experts and laypersons alike, the need for pervasive literacy and his mission to engage everyone (yes, everyone) in AI.</itunes:subtitle>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/teemu-roos" img="https://img.transistorcdn.com/rDpNAJXy2Uw9xWpjh7av29gYV22nENu4cVcHXiiRSFw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMjg4OTc0NTEt/ODYxNS00Yzc5LWJk/MjEtMTA0MmJmN2Q1/YjEyLzE2NzMzODAw/NTAtaW1hZ2UuanBn.jpg">Teemu Roos</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/d3bdd730/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>An Outlook on AI Ethics with Beena Ammanath</title>
      <itunes:episode>4</itunes:episode>
      <podcast:episode>4</podcast:episode>
      <itunes:title>An Outlook on AI Ethics with Beena Ammanath</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">29b16233-18df-446d-8faf-af18f6c10183</guid>
      <link>https://share.transistor.fm/s/0f9266cf</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/bammanath">Beena Ammanath</a> is the Executive Director of <a href="https://www2.deloitte.com/us/en/pages/deloitte-analytics/articles/advancing-human-ai-collaboration.html">Deloitte’s AI Institute</a> and leads their Trustworthy AI practice. She is a seasoned executive with global cross-industry experience and has been a board member and advisor to numerous tech startups. Beena is also the founder of the non-profit <a href="https://www.linkedin.com/company/humansforai">Humans For AI</a>.</p><p>In this insightful discussion, Beena traces AI ethics from click-bait to operational reality. She explores the interplay between R&amp;D, value creation and ethics and why expecting – and adapting to – the unexpected is key to trustworthy AI. </p><p>Using practical examples, Beena illustrates why AI ethics go beyond fairness and bias and why principles do, in fact, matter. Kimberly and Beena discuss how AI challenges traditional views of privacy and how companies can make ethics real. Beena provides guidance on leveraging <a href="https://www2.deloitte.com/us/en/pages/deloitte-analytics/solutions/ethics-of-ai-framework.html">ethical frameworks</a> and why ethical evaluations are not one-size-fits-all or once-and-done. Finally, Beena shares her hope that lessons learned from AI will inform adoption of technologies such as AR/VR and quantum computing. </p><p>A full transcript of this episode can be found <a href="https://www.sas.com/content/dam/SAS/documents/event-collateral/2021/en/podcast-transcripts/ai-podcast-s1e4.pdf">here</a>. </p><p>Our next episode features <a href="https://fi.linkedin.com/in/teemu-roos">Teemu Roos</a>. Teemu is the lead instructor of the Elements of AI online course, which has a pivotal role in Finland's unique, inclusive AI strategy and has drawn over 650,000 participants to date. Teemu is also a Professor of Computer Science at the <a href="https://www.cs.helsinki.fi/u/ttonteri/">University of Helsinki</a> and leader of the AI Education programme at the Finnish Center for AI. His research focuses on future applications of machine learning. Subscribe to Pondering AI now so you don’t miss him. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/bammanath">Beena Ammanath</a> is the Executive Director of <a href="https://www2.deloitte.com/us/en/pages/deloitte-analytics/articles/advancing-human-ai-collaboration.html">Deloitte’s AI Institute</a> and leads their Trustworthy AI practice. She is a seasoned executive with global cross-industry experience and has been a board member and advisor to numerous tech startups. Beena is also the founder of the non-profit <a href="https://www.linkedin.com/company/humansforai">Humans For AI</a>.</p><p>In this insightful discussion, Beena traces AI ethics from click-bait to operational reality. She explores the interplay between R&amp;D, value creation and ethics and why expecting – and adapting to – the unexpected is key to trustworthy AI. </p><p>Using practical examples, Beena illustrates why AI ethics go beyond fairness and bias and why principles do, in fact, matter. Kimberly and Beena discuss how AI challenges traditional views of privacy and how companies can make ethics real. Beena provides guidance on leveraging <a href="https://www2.deloitte.com/us/en/pages/deloitte-analytics/solutions/ethics-of-ai-framework.html">ethical frameworks</a> and why ethical evaluations are not one-size-fits-all or once-and-done. Finally, Beena shares her hope that lessons learned from AI will inform adoption of technologies such as AR/VR and quantum computing. </p><p>A full transcript of this episode can be found <a href="https://www.sas.com/content/dam/SAS/documents/event-collateral/2021/en/podcast-transcripts/ai-podcast-s1e4.pdf">here</a>. </p><p>Our next episode features <a href="https://fi.linkedin.com/in/teemu-roos">Teemu Roos</a>. Teemu is the lead instructor of the Elements of AI online course, which has a pivotal role in Finland's unique, inclusive AI strategy and has drawn over 650,000 participants to date. Teemu is also a Professor of Computer Science at the <a href="https://www.cs.helsinki.fi/u/ttonteri/">University of Helsinki</a> and leader of the AI Education programme at the Finnish Center for AI. His research focuses on future applications of machine learning. Subscribe to Pondering AI now so you don’t miss him. </p>]]>
      </content:encoded>
      <pubDate>Wed, 12 May 2021 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/0f9266cf/67a46aeb.mp3" length="35518693" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:duration>2215</itunes:duration>
      <itunes:summary>Beena Ammanath draws on her extensive experience to expand our view of ethics beyond fairness and bias, highlights the need for adaptability, explains why principles matter and what is required to put ethics to work.</itunes:summary>
      <itunes:subtitle>Beena Ammanath draws on her extensive experience to expand our view of ethics beyond fairness and bias, highlights the need for adaptability, explains why principles matter and what is required to put ethics to work.</itunes:subtitle>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/beena-ammanath" img="https://img.transistorcdn.com/XgcNbs5b2zo7PJzfPRbFm248AZRzZl_Ek4Gq_eGrHvI/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vZmY5OTIxODUt/Mjg1OS00ZWRmLThi/M2ItOTI4YzQ4ODZm/ZWI1LzE2NzMzODAw/MzItaW1hZ2UuanBn.jpg">Beena Ammanath</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/0f9266cf/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Humanity in AI with Renée Cummings</title>
      <itunes:episode>3</itunes:episode>
      <podcast:episode>3</podcast:episode>
      <itunes:title>Humanity in AI with Renée Cummings</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ae4ead5d-b6b7-405b-8479-ef9784a2e73c</guid>
      <link>https://share.transistor.fm/s/ca21e5f9</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/ren%C3%A9ecummings">Renée Cummings</a> is a criminologist, criminal psychologist, AI ethics evangelist and data activist in residence at the University of Virginia. </p><p>In this compelling discussion, Renée shares her journey from journalism to the judiciary and into AI. She articulates the power of perspective, why intersectionality and imagination are key to AI’s future, and the extraordinary good we can accomplish with AI in all domains – including policing. If, that is, we vigilantly guard against creating a future modeled only on the past. </p><p>Renée is comfortable being uncomfortable and believes this is vital when developing AI systems. Kimberly and Renée discuss the need for balance in solving the thorniest AI dilemmas. Technology or thinking? Risk- or rights-based assessment? Debiasing data or the mind? Social sciences or STEM? Renée broadens our understanding of why diverse tactics produce better AI. And why authenticity and the courage to admit when we get it wrong (because we will) will create an AI legacy we can all be proud of. </p><p>A full transcript of this episode can be found <a href="https://www.sas.com/content/dam/SAS/documents/event-collateral/2021/en/podcast-transcripts/ai-podcast-s1e3.pdf">here</a>. </p><p>Our next episode will feature <a href="https://www.linkedin.com/in/bammanath">Beena Ammanath</a>, Executive Director of Deloitte’s Global AI Institute and founder of the non-profit Humans for AI. Subscribe to Pondering AI now so you don’t miss it. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/ren%C3%A9ecummings">Renée Cummings</a> is a criminologist, criminal psychologist, AI ethics evangelist and data activist in residence at the University of Virginia. </p><p>In this compelling discussion, Renée shares her journey from journalism to the judiciary and into AI. She articulates the power of perspective, why intersectionality and imagination are key to AI’s future, and the extraordinary good we can accomplish with AI in all domains – including policing. If, that is, we vigilantly guard against creating a future modeled only on the past. </p><p>Renée is comfortable being uncomfortable and believes this is vital when developing AI systems. Kimberly and Renée discuss the need for balance in solving the thorniest AI dilemmas. Technology or thinking? Risk- or rights-based assessment? Debiasing data or the mind? Social sciences or STEM? Renée broadens our understanding of why diverse tactics produce better AI. And why authenticity and the courage to admit when we get it wrong (because we will) will create an AI legacy we can all be proud of. </p><p>A full transcript of this episode can be found <a href="https://www.sas.com/content/dam/SAS/documents/event-collateral/2021/en/podcast-transcripts/ai-podcast-s1e3.pdf">here</a>. </p><p>Our next episode will feature <a href="https://www.linkedin.com/in/bammanath">Beena Ammanath</a>, Executive Director of Deloitte’s Global AI Institute and founder of the non-profit Humans for AI. Subscribe to Pondering AI now so you don’t miss it. </p>]]>
      </content:encoded>
      <pubDate>Wed, 28 Apr 2021 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/ca21e5f9/94f1f83e.mp3" length="32321297" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:duration>2015</itunes:duration>
      <itunes:summary>Renée Cummings traces her unconventional path to data activism, opens our minds to the power of imagination and authenticity, and makes a cogent case for why being uncomfortable is key to creating a positive AI legacy. </itunes:summary>
      <itunes:subtitle>Renée Cummings traces her unconventional path to data activism, opens our minds to the power of imagination and authenticity, and makes a cogent case for why being uncomfortable is key to creating a positive AI legacy. </itunes:subtitle>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/renee-cummings" img="https://img.transistorcdn.com/7n2xFT-bXIry2xmc6XEjCUESnaNA73GjfGyC5a6SUXE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vNDFmMGIwYjYt/ZGQzZS00MTY3LWI4/NDktN2RlOGI2ODBk/MzYyLzE2NzMzNzk5/NzgtaW1hZ2UuanBn.jpg">Renée Cummings</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/ca21e5f9/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Diversity, Equity and Inclusion in AI with Tess Posner</title>
      <itunes:episode>2</itunes:episode>
      <podcast:episode>2</podcast:episode>
      <itunes:title>Diversity, Equity and Inclusion in AI with Tess Posner</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">198b0662-f919-42ec-a267-674a2ae35b5d</guid>
      <link>https://share.transistor.fm/s/9a86bf05</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/tessposner">Tess Posner</a> is an educator, social entrepreneur, CEO of <a href="https://ai-4-all.org/">AI-4-All</a> and an avid advocate for diversity, inclusion and equity in the tech economy. </p><p>In this inspiring and insightful discussion, Tess shares her mission to make technology and education accessible to all, inspiring work being done by rising student leaders in the AI-4-All Changemaker community, some eye-opening statistics on the state of diversity in AI, <a href="http://gendershades.org/">research</a> on bias in today’s AI systems, and the importance of not letting cynicism rule the day.</p><p>Tess’s passion is infectious as she explains why AI literacy and education cultivate future leaders, not just future data scientists. Kimberly and Tess talk about the hard but necessary work of creating diverse, inclusive cultures and why the benefits go far beyond positive optics, as well as why viewing technology as a silver bullet is fraught and the importance of unlocking human potential. Finally, Tess identifies tangible actions individuals, organizations, and communities can take today to ensure everyone benefits from AI tomorrow. </p><p>A full transcript of this episode can be found <a href="https://www.sas.com/content/dam/SAS/documents/event-collateral/2021/en/podcast-transcripts/ai-podcast-s1e2.pdf">here</a>. </p><p>Our next episode features <a href="https://www.linkedin.com/in/ren%C3%A9ecummings">Renée Cummings</a>: a criminologist, criminal psychologist and AI ethics evangelist who is passionate about keeping the human experience at the center of AI. <a href="https://pondering-ai.transistor.fm/subscribe">Subscribe</a> to Pondering AI now so you don’t miss it. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/tessposner">Tess Posner</a> is an educator, social entrepreneur, CEO of <a href="https://ai-4-all.org/">AI-4-All</a> and an avid advocate for diversity, inclusion and equity in the tech economy. </p><p>In this inspiring and insightful discussion, Tess shares her mission to make technology and education accessible to all, inspiring work being done by rising student leaders in the AI-4-All Changemaker community, some eye-opening statistics on the state of diversity in AI, <a href="http://gendershades.org/">research</a> on bias in today’s AI systems, and the importance of not letting cynicism rule the day.</p><p>Tess’s passion is infectious as she explains why AI literacy and education cultivate future leaders, not just future data scientists. Kimberly and Tess talk about the hard but necessary work of creating diverse, inclusive cultures and why the benefits go far beyond positive optics, as well as why viewing technology as a silver bullet is fraught and the importance of unlocking human potential. Finally, Tess identifies tangible actions individuals, organizations, and communities can take today to ensure everyone benefits from AI tomorrow. </p><p>A full transcript of this episode can be found <a href="https://www.sas.com/content/dam/SAS/documents/event-collateral/2021/en/podcast-transcripts/ai-podcast-s1e2.pdf">here</a>. </p><p>Our next episode features <a href="https://www.linkedin.com/in/ren%C3%A9ecummings">Renée Cummings</a>: a criminologist, criminal psychologist and AI ethics evangelist who is passionate about keeping the human experience at the center of AI. <a href="https://pondering-ai.transistor.fm/subscribe">Subscribe</a> to Pondering AI now so you don’t miss it. </p>]]>
      </content:encoded>
      <pubDate>Wed, 14 Apr 2021 05:01:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/9a86bf05/4901d7ce.mp3" length="27776249" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:duration>1732</itunes:duration>
      <itunes:summary>Tess Posner shares her mission to make technology and education accessible to all, demonstrates why diversity and inclusion drive innovation, and assures us that it is not too late to ensure AI is both created by and beneficial for everyone. </itunes:summary>
      <itunes:subtitle>Tess Posner shares her mission to make technology and education accessible to all, demonstrates why diversity and inclusion drive innovation, and assures us that it is not too late to ensure AI is both created by and beneficial for everyone. </itunes:subtitle>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/tess-posner" img="https://img.transistorcdn.com/k6__pYXc5n1upGuzHEez-oZjcL8i3fSqM38yfKvSY7U/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vZWFkMGM5OTEt/MWI5Ni00NjVjLWEw/OTgtZWRjOTc0YjQ0/NTkwLzE2NzMzNzk5/NjQtaW1hZ2UuanBn.jpg">Tess Posner</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/9a86bf05/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Power and Peril of AI with Michael Kanaan</title>
      <itunes:episode>1</itunes:episode>
      <podcast:episode>1</podcast:episode>
      <itunes:title>Power and Peril of AI with Michael Kanaan</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">330c2fbc-1848-423a-a532-4c439aa8ffb6</guid>
      <link>https://share.transistor.fm/s/66b4aaf5</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/michaeljkanaan">Michael Kanaan</a> is the author of the best-selling book <a href="https://www.amazon.com/T-Minus-Humanitys-Countdown-Artificial-Intelligence/dp/1948836947/ref=sr_1_1?dchild=1&amp;keywords=T-AI&amp;qid=1617635501&amp;sr=8-1">T-Minus AI</a> and the former chairperson of AI for the U.S. Air Force, Headquarters Pentagon.</p><p>In this far-reaching discussion, Michael provides perspectives on the peril of anthropomorphizing AI and how differentiating between intelligence and consciousness creates clarity. He shares his own reckoning with humility while writing <a href="https://www.amazon.com/T-Minus-Humanitys-Countdown-Artificial-Intelligence/dp/1948836947/ref=sr_1_1?dchild=1&amp;keywords=T-AI&amp;qid=1617635501&amp;sr=8-1">T-Minus AI</a>, popular misconceptions about AI, where we can go awry in addressing – or <em>not</em> addressing – AI’s inherent dualities, pros and cons of the technology’s ready availability, and why unflinching due diligence is critical to deploying AI safely, ethically, and responsibly.</p><p>After a brief diversion into the perils of technology that is too responsive to our whims (ahem, social media), Kimberly and Michael discuss the importance of bridging the digital divide so everyone can contribute to and benefit from AI. Michael also makes the case for how AI may have the greatest impact on subject matter experts and decision makers and why explainability is overrated. And, finally, why AI’s future will be determined not by data scientists but by artists, sociologists, teachers and more.</p><p>A transcript of this episode can be found <a href="https://www.sas.com/content/dam/SAS/documents/event-collateral/2021/en/podcast-transcripts/ai-podcast-s1e1.pdf">here</a>.</p><p>Our next episode will feature <a href="https://www.linkedin.com/in/tessposner">Tess Posner</a>: an educator, social entrepreneur, and CEO of AI-4-All. <a href="https://pondering-ai.transistor.fm/subscribe">Subscribe</a> to Pondering AI now so you don’t miss it. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/michaeljkanaan">Michael Kanaan</a> is the author of the best-selling book <a href="https://www.amazon.com/T-Minus-Humanitys-Countdown-Artificial-Intelligence/dp/1948836947/ref=sr_1_1?dchild=1&amp;keywords=T-AI&amp;qid=1617635501&amp;sr=8-1">T-Minus AI</a> and the former chairperson of AI for the U.S. Air Force, Headquarters Pentagon.</p><p>In this far-reaching discussion, Michael provides perspectives on the peril of anthropomorphizing AI and how differentiating between intelligence and consciousness creates clarity. He shares his own reckoning with humility while writing <a href="https://www.amazon.com/T-Minus-Humanitys-Countdown-Artificial-Intelligence/dp/1948836947/ref=sr_1_1?dchild=1&amp;keywords=T-AI&amp;qid=1617635501&amp;sr=8-1">T-Minus AI</a>, popular misconceptions about AI, where we can go awry in addressing – or <em>not</em> addressing – AI’s inherent dualities, pros and cons of the technology’s ready availability, and why unflinching due diligence is critical to deploying AI safely, ethically, and responsibly.</p><p>After a brief diversion into the perils of technology that is too responsive to our whims (ahem, social media), Kimberly and Michael discuss the importance of bridging the digital divide so everyone can contribute to and benefit from AI. Michael also makes the case for how AI may have the greatest impact on subject matter experts and decision makers and why explainability is overrated. And, finally, why AI’s future will be determined not by data scientists but by artists, sociologists, teachers and more.</p><p>A transcript of this episode can be found <a href="https://www.sas.com/content/dam/SAS/documents/event-collateral/2021/en/podcast-transcripts/ai-podcast-s1e1.pdf">here</a>.</p><p>Our next episode will feature <a href="https://www.linkedin.com/in/tessposner">Tess Posner</a>: an educator, social entrepreneur, and CEO of AI-4-All. <a href="https://pondering-ai.transistor.fm/subscribe">Subscribe</a> to Pondering AI now so you don’t miss it. </p>]]>
      </content:encoded>
      <pubDate>Wed, 14 Apr 2021 05:00:00 -0400</pubDate>
      <author>Kimberly Nevala, Strategic Advisor - SAS</author>
      <enclosure url="https://media.transistor.fm/66b4aaf5/f7ad627b.mp3" length="43472289" type="audio/mpeg"/>
      <itunes:author>Kimberly Nevala, Strategic Advisor - SAS</itunes:author>
      <itunes:duration>2713</itunes:duration>
      <itunes:summary>Michael Kanaan ponders the danger of anthropomorphizing AI, the importance of humility, popular misconceptions about AI, the need for unflinching due diligence, and why AI’s future will be written by humans from every walk of life.  </itunes:summary>
      <itunes:subtitle>Michael Kanaan ponders the danger of anthropomorphizing AI, the importance of humility, popular misconceptions about AI, the need for unflinching due diligence, and why AI’s future will be written by humans from every walk of life.  </itunes:subtitle>
      <itunes:keywords>AI, artificial intelligence, bias, ethics, responsible AI, RAI, diversity and inclusion, DEI </itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://pondering-ai.transistor.fm/people/kimberly-nevala" img="https://img.transistorcdn.com/YhzjFdWo_9BUOwGVEjTvP6FdNOuKRpVuoKxfE36impQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODhmNTcyMWUt/NzEzZi00ZmJiLThk/ZDctMWY2OGUxZTcx/YzdhLzE2NzI3NzMw/MDktaW1hZ2UuanBn.jpg">Kimberly Nevala</podcast:person>
      <podcast:person role="Guest" href="https://pondering-ai.transistor.fm/people/michael-kanaan" img="https://img.transistorcdn.com/NGt4_gNnA0XKZ2l24Ad3i19zhv98_CQliKBIFG3ivT8/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vODM0NDdmYWYt/ZjY1YS00NDdiLTk4/OTMtNDMwOTk3MmY3/Zjk1LzE2NzMzNzk5/NDItaW1hZ2UuanBn.jpg">Michael Kanaan</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/66b4aaf5/transcript.txt" type="text/plain"/>
    </item>
  </channel>
</rss>
