<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="/stylesheet.xsl" type="text/xsl"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <atom:link rel="self" type="application/rss+xml" href="https://feeds.transistor.fm/computer-says-maybe" title="MP3 Audio"/>
    <atom:link rel="hub" href="https://pubsubhubbub.appspot.com/"/>
    <podcast:podping usesPodping="true"/>
    <title>Computer Says Maybe</title>
    <generator>Transistor (https://transistor.fm)</generator>
    <itunes:new-feed-url>https://feeds.transistor.fm/computer-says-maybe</itunes:new-feed-url>
    <description>Technology is changing fast. And it's changing our world even faster. Host Alix Dunn interviews visionaries, researchers, and technologists working in the public interest to help you keep up. Step outside the hype and explore the possibilities, problems, and politics of technology. We publish weekly. </description>
    <copyright>2024</copyright>
    <podcast:guid>0e824ddf-b5c1-506d-8483-52008b87b37b</podcast:guid>
    <podcast:locked owner="pod@saysmaybe.com">no</podcast:locked>
    <itunes:applepodcastsverify>7655ca40-f0eb-11f0-a7ee-41bc8aca8edb</itunes:applepodcastsverify>
    <podcast:trailer pubdate="Fri, 09 Jan 2026 00:05:00 -0100" url="https://media.transistor.fm/de7adda8/cf98c490.mp3" length="3259884" type="audio/mpeg">The Vaporstate: A New Mini-Series</podcast:trailer>
    <podcast:trailer pubdate="Thu, 18 Sep 2025 14:18:48 +0000" url="https://media.transistor.fm/81ca8d5b/2613a4aa.mp3" length="2171017" type="audio/mpeg">Gotcha!</podcast:trailer>
    <language>en-us</language>
    <pubDate>Fri, 03 Apr 2026 01:00:07 +0000</pubDate>
    <lastBuildDate>Fri, 03 Apr 2026 01:01:07 +0000</lastBuildDate>
    <link>https://csm.transistor.fm/</link>
    <image>
      <url>https://img.transistorcdn.com/x8fj59nTc7xQOK_yVDwKG7lYkXxLk7FhG6UlGPWGJio/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9mMmQ1/MmQ3YzNiMDQ0MjA1/ZjYyZGM0YTRlMWZi/N2MxZS5qcGVn.jpg</url>
      <title>Computer Says Maybe</title>
      <link>https://csm.transistor.fm/</link>
    </image>
    <itunes:category text="Technology"/>
    <itunes:category text="Society &amp; Culture"/>
    <itunes:type>episodic</itunes:type>
    <itunes:author>Alix Dunn</itunes:author>
    <itunes:image href="https://img.transistorcdn.com/x8fj59nTc7xQOK_yVDwKG7lYkXxLk7FhG6UlGPWGJio/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9mMmQ1/MmQ3YzNiMDQ0MjA1/ZjYyZGM0YTRlMWZi/N2MxZS5qcGVn.jpg"/>
    <itunes:summary>Technology is changing fast. And it's changing our world even faster. Host Alix Dunn interviews visionaries, researchers, and technologists working in the public interest to help you keep up. Step outside the hype and explore the possibilities, problems, and politics of technology. We publish weekly. </itunes:summary>
    <itunes:subtitle>Technology is changing fast.</itunes:subtitle>
    <itunes:keywords></itunes:keywords>
    <itunes:owner>
      <itunes:name>Alix Dunn</itunes:name>
      <itunes:email>pod@saysmaybe.com</itunes:email>
    </itunes:owner>
    <itunes:complete>No</itunes:complete>
    <itunes:explicit>No</itunes:explicit>
    <item>
      <title>How to Scare a Fascist w/ Naomi Klein</title>
      <itunes:episode>113</itunes:episode>
      <podcast:episode>113</podcast:episode>
      <itunes:title>How to Scare a Fascist w/ Naomi Klein</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">299f8fa0-2ccc-4ede-a601-b0c15ef6e149</guid>
      <link>https://share.transistor.fm/s/3cf9909f</link>
      <description>
        <![CDATA[<p>Naomi Klein has spent her career studying political movements — and she thinks progressives are doing better than we think. Because the fascists are scared.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/to-be-seen-and-not-watched-w-tawana-petty"><strong>To be Seen and not Watched w/ Tawana Petty</strong></a><strong><br></strong><br></p><p>In her forthcoming book, End Times Fascism, Klein and co-author Astra Taylor take stock of the history of fascism and the collective power that has been brought to bear to fight it. This time is different. Tech titans have accumulated tremendous power and wealth, and are firmly on the side of the fascists. And our information environment is flooded and disorienting. While that might portend a dark outcome, Klein has a different diagnosis. Fascist powers seem angrier and more aggressive than ever, but Klein thinks this is a sign that we are winning.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.theguardian.com/us-news/ng-interactive/2025/apr/13/end-times-fascism-far-right-trump-musk">The Rise of End Times Fascism by Astra Taylor &amp; Naomi Klein</a></li><li><a href="https://timothysnyder.org/on-tyranny">On Tyranny</a> by Timothy Snyder</li><li>More about Naomi &amp; Astra’s upcoming book <a href="https://naomiklein.org/end-times-fascism/">End Times Fascism and the Fight for the Living World.</a></li><li><a href="https://www.thecut.com/article/brooding-friction-maxxing-new-years-2026-resolution.html">In 2026, We Are Friction-Maxxing</a> by Kathryn Jezzer-Morton, The Cut, Jan 2026</li><li><a href="https://www.penguin.co.uk/books/451795/technofeudalism-by-varoufakis-yanis/9781529926095">Technofeudalism: What Killed Capitalism</a> by Yanis Varoufakis</li><li><a href="https://www.sfu.ca/~andrewf/CONCEPT2.html">Walter Benjamin’s Concept of History</a></li><li><a href="https://www.aljazeera.com/news/2026/3/23/un-expert-says-world-has-given-israel-licence-to-torture-palestinians">UN expert says world has given Israel ‘licence to torture Palestinians’</a> — Al Jazeera quoting Francesca Albanese, March 2026</li><li><a href="https://progressive.org/magazine/how-the-free-helicopter-rides-meme-went-viral-roberts-20230907/">How The 'Free Helicopter Rides' Meme Went Viral</a> — The Progressive Magazine, September 2023</li><li><a href="https://logicmag.io/security/safe-or-just-surveilled-tawana-petty-on-facial-recognition/">Safe or Just Surveilled?: Tawana Petty on the Fight Against Facial Recognition Surveillance</a> — Logic(s) Magazine 2020</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><strong>Computer Says Maybe is produced by </strong><a href="https://georgiaiacovou.com/"><strong>Georgia Iacovou</strong></a><strong>, Kushal Dev, Marion Wellington, </strong><a href="https://www.podcasts.london/"><strong>Sarah Myles</strong></a><strong>, Van Newman, and Zoe Trout</strong></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Naomi Klein has spent her career studying political movements — and she thinks progressives are doing better than we think. Because the fascists are scared.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/to-be-seen-and-not-watched-w-tawana-petty"><strong>To be Seen and not Watched w/ Tawana Petty</strong></a><strong><br></strong><br></p><p>In her forthcoming book, End Times Fascism, Klein and co-author Astra Taylor take stock of the history of fascism and the collective power that has been brought to bear to fight it. This time is different. Tech titans have accumulated tremendous power and wealth, and are firmly on the side of the fascists. And our information environment is flooded and disorienting. While that might portend a dark outcome, Klein has a different diagnosis. Fascist powers seem angrier and more aggressive than ever, but Klein thinks this is a sign that we are winning.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.theguardian.com/us-news/ng-interactive/2025/apr/13/end-times-fascism-far-right-trump-musk">The Rise of End Times Fascism by Astra Taylor &amp; Naomi Klein</a></li><li><a href="https://timothysnyder.org/on-tyranny">On Tyranny</a> by Timothy Snyder</li><li>More about Naomi &amp; Astra’s upcoming book <a href="https://naomiklein.org/end-times-fascism/">End Times Fascism and the Fight for the Living World.</a></li><li><a href="https://www.thecut.com/article/brooding-friction-maxxing-new-years-2026-resolution.html">In 2026, We Are Friction-Maxxing</a> by Kathryn Jezzer-Morton, The Cut, Jan 2026</li><li><a href="https://www.penguin.co.uk/books/451795/technofeudalism-by-varoufakis-yanis/9781529926095">Technofeudalism: What Killed Capitalism</a> by Yanis Varoufakis</li><li><a href="https://www.sfu.ca/~andrewf/CONCEPT2.html">Walter Benjamin’s Concept of History</a></li><li><a href="https://www.aljazeera.com/news/2026/3/23/un-expert-says-world-has-given-israel-licence-to-torture-palestinians">UN expert says world has given Israel ‘licence to torture Palestinians’</a> — Al Jazeera quoting Francesca Albanese, March 2026</li><li><a href="https://progressive.org/magazine/how-the-free-helicopter-rides-meme-went-viral-roberts-20230907/">How The 'Free Helicopter Rides' Meme Went Viral</a> — The Progressive Magazine, September 2023</li><li><a href="https://logicmag.io/security/safe-or-just-surveilled-tawana-petty-on-facial-recognition/">Safe or Just Surveilled?: Tawana Petty on the Fight Against Facial Recognition Surveillance</a> — Logic(s) Magazine 2020</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><strong>Computer Says Maybe is produced by </strong><a href="https://georgiaiacovou.com/"><strong>Georgia Iacovou</strong></a><strong>, Kushal Dev, Marion Wellington, </strong><a href="https://www.podcasts.london/"><strong>Sarah Myles</strong></a><strong>, Van Newman, and Zoe Trout</strong></p>]]>
      </content:encoded>
      <pubDate>Fri, 03 Apr 2026 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/3cf9909f/8481bdf5.mp3" length="64757690" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/O_dRj6KicIFPtuznYpdN9XaiYp9c1RQgZc4w7l5_YDs/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNzU3/MWI2ZjM2ZGJiMzBi/NTYwMTZjMjdlZGYw/MzUyNy5wbmc.jpg"/>
      <itunes:duration>2696</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Naomi Klein has spent her career studying political movements — and she thinks progressives are doing better than we think. Because the fascists are scared.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/to-be-seen-and-not-watched-w-tawana-petty"><strong>To be Seen and not Watched w/ Tawana Petty</strong></a><strong><br></strong><br></p><p>In her forthcoming book, End Times Fascism, Klein and co-author Astra Taylor take stock of the history of fascism and the collective power that has been brought to bear to fight it. This time is different. Tech titans have accumulated tremendous power and wealth, and are firmly on the side of the fascists. And our information environment is flooded and disorienting. While that might portend a dark outcome, Klein has a different diagnosis. Fascist powers seem angrier and more aggressive than ever, but Klein thinks this is a sign that we are winning.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.theguardian.com/us-news/ng-interactive/2025/apr/13/end-times-fascism-far-right-trump-musk">The Rise of End Times Fascism by Astra Taylor &amp; Naomi Klein</a></li><li><a href="https://timothysnyder.org/on-tyranny">On Tyranny</a> by Timothy Snyder</li><li>More about Naomi &amp; Astra’s upcoming book <a href="https://naomiklein.org/end-times-fascism/">End Times Fascism and the Fight for the Living World.</a></li><li><a href="https://www.thecut.com/article/brooding-friction-maxxing-new-years-2026-resolution.html">In 2026, We Are Friction-Maxxing</a> by Kathryn Jezzer-Morton, The Cut, Jan 2026</li><li><a href="https://www.penguin.co.uk/books/451795/technofeudalism-by-varoufakis-yanis/9781529926095">Technofeudalism: What Killed Capitalism</a> by Yanis Varoufakis</li><li><a href="https://www.sfu.ca/~andrewf/CONCEPT2.html">Walter Benjamin’s Concept of History</a></li><li><a href="https://www.aljazeera.com/news/2026/3/23/un-expert-says-world-has-given-israel-licence-to-torture-palestinians">UN expert says world has given Israel ‘licence to torture Palestinians’</a> — Al Jazeera quoting Francesca Albanese, March 2026</li><li><a href="https://progressive.org/magazine/how-the-free-helicopter-rides-meme-went-viral-roberts-20230907/">How The 'Free Helicopter Rides' Meme Went Viral</a> — The Progressive Magazine, September 2023</li><li><a href="https://logicmag.io/security/safe-or-just-surveilled-tawana-petty-on-facial-recognition/">Safe or Just Surveilled?: Tawana Petty on the Fight Against Facial Recognition Surveillance</a> — Logic(s) Magazine 2020</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><strong>Computer Says Maybe is produced by </strong><a href="https://georgiaiacovou.com/"><strong>Georgia Iacovou</strong></a><strong>, Kushal Dev, Marion Wellington, </strong><a href="https://www.podcasts.london/"><strong>Sarah Myles</strong></a><strong>, Van Newman, and Zoe Trout</strong></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/3cf9909f/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Fantasy Factory: One Filmmaker's Fight Against AI w/ Valerie Veatch</title>
      <itunes:episode>112</itunes:episode>
      <podcast:episode>112</podcast:episode>
      <itunes:title>Fantasy Factory: One Filmmaker's Fight Against AI w/ Valerie Veatch</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">6375a819-bd3c-4246-b7e0-4a31acfd4e8a</guid>
      <link>https://share.transistor.fm/s/4733bd4e</link>
      <description>
        <![CDATA[<p>The way artists make art matters. And some artists, like filmmaker Valerie Veatch, are exploring what role AI has in the craft of filmmaking.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/fantasy-factory-ai-supervillains-w-anat-shenker-osorio"><strong>Fantasy Factory: AI Supervillains w/ Anat Shenker-Osorio</strong></a><strong><br></strong><br></p><p>Valerie Veatch is the director of Ghost in the Machine, a new film that explores the depths of the Silicon Valley fantasies around AI, and platforms all the people who challenge these fantasies. With this film, Valerie is working to change the culture of AI: it is not inevitable, in many ways it’s not even possible, and therefore we have a right to refuse to engage with it. Valerie discusses why she made the film, what she learned, and what impact she’s hoping it will have.</p><p><strong><em>Ghost in the Machine</em></strong><strong> will be available for rentals and screenings beginning March 27, via </strong><a href="https://kinema.com/films/ghost-in-the-machine-pvxg4p"><strong>Kinema</strong></a><strong>! Pre-sales are open now (go to </strong><a href="https://kinema.com/films/ghost-in-the-machine-pvxg4p"><strong>Kinema</strong></a><strong> and select the "Watch" tab). Proceeds will go towards the production of the film. The film will also be available on PBS in fall 2026.<br></strong><br></p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.youtube.com/watch?v=i9DAv0D7tnY">Trailer for Ghost in the Machine</a></li><li><a href="https://bristoluniversitypress.co.uk/resisting-ai">Resisting AI</a> by Dan McQuillan</li><li><a href="https://dl.acm.org/doi/10.1145/3442188.3445922">On the Dangers of Stochastic Parrots</a> by Emily Bender et al.</li><li><a href="https://firstmonday.org/ojs/index.php/fm/article/view/13636/11606">The TESCREAL Bundle</a> by Timnit Gebru and Emile P. Torres</li><li><a href="https://kinema.com/">Kinema</a> — where you can watch Ghost in the Machine</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><strong>Computer Says Maybe is produced by </strong><a href="https://georgiaiacovou.com/"><strong>Georgia Iacovou</strong></a><strong>, Kushal Dev, Marion Wellington, </strong><a href="https://www.podcasts.london/"><strong>Sarah Myles</strong></a><strong>, Van Newman, and Zoe Trout</strong></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>The way artists make art matters. And some artists, like filmmaker Valerie Veatch, are exploring what role AI has in the craft of filmmaking.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/fantasy-factory-ai-supervillains-w-anat-shenker-osorio"><strong>Fantasy Factory: AI Supervillains w/ Anat Shenker-Osorio</strong></a><strong><br></strong><br></p><p>Valerie Veatch is the director of Ghost in the Machine, a new film that explores the depths of the Silicon Valley fantasies around AI, and platforms all the people who challenge these fantasies. With this film, Valerie is working to change the culture of AI: it is not inevitable, in many ways it’s not even possible, and therefore we have a right to refuse to engage with it. Valerie discusses why she made the film, what she learned, and what impact she’s hoping it will have.</p><p><strong><em>Ghost in the Machine</em></strong><strong> will be available for rentals and screenings beginning March 27, via </strong><a href="https://kinema.com/films/ghost-in-the-machine-pvxg4p"><strong>Kinema</strong></a><strong>! Pre-sales are open now (go to </strong><a href="https://kinema.com/films/ghost-in-the-machine-pvxg4p"><strong>Kinema</strong></a><strong> and select the "Watch" tab). Proceeds will go towards the production of the film. The film will also be available on PBS in fall 2026.<br></strong><br></p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.youtube.com/watch?v=i9DAv0D7tnY">Trailer for Ghost in the Machine</a></li><li><a href="https://bristoluniversitypress.co.uk/resisting-ai">Resisting AI</a> by Dan McQuillan</li><li><a href="https://dl.acm.org/doi/10.1145/3442188.3445922">On the Dangers of Stochastic Parrots</a> by Emily Bender et al.</li><li><a href="https://firstmonday.org/ojs/index.php/fm/article/view/13636/11606">The TESCREAL Bundle</a> by Timnit Gebru and Emile P. Torres</li><li><a href="https://kinema.com/">Kinema</a> — where you can watch Ghost in the Machine</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><strong>Computer Says Maybe is produced by </strong><a href="https://georgiaiacovou.com/"><strong>Georgia Iacovou</strong></a><strong>, Kushal Dev, Marion Wellington, </strong><a href="https://www.podcasts.london/"><strong>Sarah Myles</strong></a><strong>, Van Newman, and Zoe Trout</strong></p>]]>
      </content:encoded>
      <pubDate>Fri, 27 Mar 2026 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/4733bd4e/787436ef.mp3" length="71676176" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/OUpAl5KiPIeWrkOaJYwadsP_Ah2nTJ4CzOX18QhTtxw/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTI4/YTQwZDM5N2ViYTAy/NTBhZTZkNDI5MDNk/OWFiNi5wbmc.jpg"/>
      <itunes:duration>2984</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>The way artists make art matters. And some artists, like filmmaker Valerie Veatch, are exploring what role AI has in the craft of filmmaking.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/fantasy-factory-ai-supervillains-w-anat-shenker-osorio"><strong>Fantasy Factory: AI Supervillains w/ Anat Shenker-Osorio</strong></a><strong><br></strong><br></p><p>Valerie Veatch is the director of Ghost in the Machine, a new film that explores the depths of the Silicon Valley fantasies around AI, and platforms all the people who challenge these fantasies. With this film, Valerie is working to change the culture of AI: it is not inevitable, in many ways it’s not even possible, and therefore we have a right to refuse to engage with it. Valerie discusses why she made the film, what she learned, and what impact she’s hoping it will have.</p><p><strong><em>Ghost in the Machine</em></strong><strong> will be available for rentals and screenings beginning March 27, via </strong><a href="https://kinema.com/films/ghost-in-the-machine-pvxg4p"><strong>Kinema</strong></a><strong>! Pre-sales are open now (go to </strong><a href="https://kinema.com/films/ghost-in-the-machine-pvxg4p"><strong>Kinema</strong></a><strong> and select the "Watch" tab). Proceeds will go towards the production of the film. The film will also be available on PBS in fall 2026.<br></strong><br></p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.youtube.com/watch?v=i9DAv0D7tnY">Trailer for Ghost in the Machine</a></li><li><a href="https://bristoluniversitypress.co.uk/resisting-ai">Resisting AI</a> by Dan McQuillan</li><li><a href="https://dl.acm.org/doi/10.1145/3442188.3445922">On the Dangers of Stochastic Parrots</a> by Emily Bender et al.</li><li><a href="https://firstmonday.org/ojs/index.php/fm/article/view/13636/11606">The TESCREAL Bundle</a> by Timnit Gebru and Emile P. Torres</li><li><a href="https://kinema.com/">Kinema</a> — where you can watch Ghost in the Machine</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><strong>Computer Says Maybe is produced by </strong><a href="https://georgiaiacovou.com/"><strong>Georgia Iacovou</strong></a><strong>, Kushal Dev, Marion Wellington, </strong><a href="https://www.podcasts.london/"><strong>Sarah Myles</strong></a><strong>, Van Newman, and Zoe Trout</strong></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/4733bd4e/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Short: Grand Theft Grammarly w/ Julia Angwin &amp; Peter Romer-Friedman</title>
      <itunes:episode>111</itunes:episode>
      <podcast:episode>111</podcast:episode>
      <itunes:title>Short: Grand Theft Grammarly w/ Julia Angwin &amp; Peter Romer-Friedman</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7547a3cf-8364-49cd-afd2-4ea4079844f8</guid>
      <link>https://share.transistor.fm/s/25166f04</link>
      <description>
        <![CDATA[<p>Grammarly launched a feature that no one wanted and now they’re getting sued. They used the names of writers, journalists, and editors to pretend that AI versions of those people were making writing suggestions via the application. None of these ‘expert reviewers’ had any idea. Grammarly pissed off the wrong journalist.</p><p>And now Julia Angwin is suing them.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/the-toxic-relationship-between-ai-journalism-w-nic-dawes"><strong>The Toxic Relationship Between AI &amp; Journalism w/ Nic Dawes</strong></a><strong><br></strong><br></p><p>In this episode, Julia (and her lawyer Peter) discuss what happened with Grammarly, why she’s suing, and how neither of them can believe that this tool made it through Grammarly’s legal team and into the public realm.</p><p><strong><em>Please email </em></strong><a href="mailto:info@prf-law.com"><strong><em>info@prf-law.com</em></strong></a><strong><em> for more info, or if you would like your name to be searched in the list of experts that Grammarly used for their tool.<br></em></strong><br></p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.nytimes.com/2026/03/13/opinion/ai-doppelganger-deepfake-grammarly.html">Julia’s op-ed in the New York Times</a></li><li>Pre-order Julia’s new book <a href="https://buttondown.com/JuliaAngwin/archive/my-new-book-on-courage-is-available-for-pre-order/">On Courage: How to be a Dissident in an Age of Fear</a></li><li>Check out <a href="https://themarkup.org/">The Markup</a>, founded by Julia</li><li><a href="https://www.bbc.co.uk/news/articles/cx28v08jpe7o">Grammarly pulls AI author-impersonation tool after backlash</a> — BBC 12th March 2026</li><li><a href="https://www.linkedin.com/posts/shishirmehrotra_back-in-august-we-launched-a-grammarly-agent-activity-7437552603737059328-vzTe?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAABqYHqABpyxMKgWJJY2QgfPRYgYaDc5yAfI">Grammarly CEO Shishir Mehrotra’s apology on LinkedIn</a></li><li><a href="https://www.wired.com/story/grammarly-is-offering-expert-ai-reviews-from-your-favorite-authors-dead-or-alive/">Grammarly Is Offering ‘Expert’ AI Reviews From Your Favorite Authors—Dead or Alive</a> — Wired 4th March 2026</li><li><a href="https://www.theverge.com/ai-artificial-intelligence/890921/grammarly-ai-expert-reviews">Grammarly is using our identities without permission</a> — The Verge 6th March 2026</li><li><a href="https://www.platformer.news/grammarly-expert-review-reviewed/">Grammarly turned me into an AI editor against my will and I hate it</a> — Casey Newton, Platformer 9th March 2026</li><li><a href="https://prf-law.com/current-cases/class-action-alleges-that-grammarly-misappropriated-the-names-of-journalists-and-authors-through-its-expert-review">Details of the case, from PRF Law, Julia’s representative firm</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><strong>Computer Says Maybe is produced by </strong><a href="https://georgiaiacovou.com/"><strong>Georgia Iacovou</strong></a><strong>, Kushal Dev, Marion Wellington, </strong><a href="https://www.podcasts.london/"><strong>Sarah Myles</strong></a><strong>, Van Newman, and Zoe Trout</strong></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Grammarly launched a feature that no one wanted and now they’re getting sued. They used the names of writers, journalists, and editors to pretend that AI versions of those people were making writing suggestions via the application. None of these ‘expert reviewers’ had any idea. Grammarly pissed off the wrong journalist.</p><p>And now Julia Angwin is suing them.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/the-toxic-relationship-between-ai-journalism-w-nic-dawes"><strong>The Toxic Relationship Between AI &amp; Journalism w/ Nic Dawes</strong></a><strong><br></strong><br></p><p>In this episode, Julia (and her lawyer Peter) discuss what happened with Grammarly, why she’s suing, and how neither of them can believe that this tool made it through Grammarly’s legal team and into the public realm.</p><p><strong><em>Please email </em></strong><a href="mailto:info@prf-law.com"><strong><em>info@prf-law.com</em></strong></a><strong><em> for more info, or if you would like your name to be searched in the list of experts that Grammarly used for their tool.<br></em></strong><br></p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.nytimes.com/2026/03/13/opinion/ai-doppelganger-deepfake-grammarly.html">Julia’s op-ed in the New York Times</a></li><li>Pre-order Julia’s new book <a href="https://buttondown.com/JuliaAngwin/archive/my-new-book-on-courage-is-available-for-pre-order/">On Courage: How to be a Dissident in an Age of Fear</a></li><li>Check out <a href="https://themarkup.org/">The Markup</a>, founded by Julia</li><li><a href="https://www.bbc.co.uk/news/articles/cx28v08jpe7o">Grammarly pulls AI author-impersonation tool after backlash</a> — BBC 12th March 2026</li><li><a href="https://www.linkedin.com/posts/shishirmehrotra_back-in-august-we-launched-a-grammarly-agent-activity-7437552603737059328-vzTe?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAABqYHqABpyxMKgWJJY2QgfPRYgYaDc5yAfI">Grammarly CEO Shishir Mehrotra’s apology on LinkedIn</a></li><li><a href="https://www.wired.com/story/grammarly-is-offering-expert-ai-reviews-from-your-favorite-authors-dead-or-alive/">Grammarly Is Offering ‘Expert’ AI Reviews From Your Favorite Authors—Dead or Alive</a> — Wired 4th March 2026</li><li><a href="https://www.theverge.com/ai-artificial-intelligence/890921/grammarly-ai-expert-reviews">Grammarly is using our identities without permission</a> — The Verge 6th March 2026</li><li><a href="https://www.platformer.news/grammarly-expert-review-reviewed/">Grammarly turned me into an AI editor against my will and I hate it</a> — Casey Newton, Platformer 9th March 2026</li><li><a href="https://prf-law.com/current-cases/class-action-alleges-that-grammarly-misappropriated-the-names-of-journalists-and-authors-through-its-expert-review">Details of the case, from PRF Law, Julia’s representative firm</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><strong>Computer Says Maybe is produced by </strong><a href="https://georgiaiacovou.com/"><strong>Georgia Iacovou</strong></a><strong>, Kushal Dev, Marion Wellington, </strong><a href="https://www.podcasts.london/"><strong>Sarah Myles</strong></a><strong>, Van Newman, and Zoe Trout</strong></p>]]>
      </content:encoded>
      <pubDate>Wed, 25 Mar 2026 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/25166f04/983a3edc.mp3" length="35477936" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/MqXXWUlXMtwtCG6zXIHjI88EWlwI356LxgUC2zT5r3M/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS85YTM5/YjgyNDhhYzNmN2M4/YzZkZDFkODIzZGJm/OTVlMC5wbmc.jpg"/>
      <itunes:duration>1476</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Grammarly launched a feature that no one wanted and now they’re getting sued. They used the names of writers, journalists, and editors to pretend that AI versions of those people were making writing suggestions via the application. None of these ‘expert reviewers’ had any idea. Grammarly pissed off the wrong journalist.</p><p>And now Julia Angwin is suing them.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/the-toxic-relationship-between-ai-journalism-w-nic-dawes"><strong>The Toxic Relationship Between AI &amp; Journalism w/ Nic Dawes</strong></a><strong><br></strong><br></p><p>In this episode Julia (and her lawyer Peter) discuss what happened with Grammarly, why she’s suing, and how neither of them can believe that this tool made it through their legal team and into the public realm.</p><p><strong><em>Please email </em></strong><a href="mailto:info@prf-law.com"><strong><em>info@prf-law.com</em></strong></a><strong><em> for more info, or if you would like your name to be searched in the list of experts that Grammarly used for their tool.<br></em></strong><br></p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.nytimes.com/2026/03/13/opinion/ai-doppelganger-deepfake-grammarly.html">Julia’s op ed in the New York Times</a></li><li>Pre-order Julia’s new book <a href="https://buttondown.com/JuliaAngwin/archive/my-new-book-on-courage-is-available-for-pre-order/">On Courage: How to be a Dissident in an Age of Fear</a></li><li>Check out <a href="https://themarkup.org/">The Markup</a>, founded by Julia</li><li><a href="https://www.bbc.co.uk/news/articles/cx28v08jpe7o">Grammarly pulls AI author-impersonation tool after backlash</a> — BBC 12th March 2026</li><li><a 
href="https://www.linkedin.com/posts/shishirmehrotra_back-in-august-we-launched-a-grammarly-agent-activity-7437552603737059328-vzTe?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAABqYHqABpyxMKgWJJY2QgfPRYgYaDc5yAfI">Shishir Mehrotra’s (CEO of Grammarly) apology on LinkedIn</a></li><li><a href="https://www.wired.com/story/grammarly-is-offering-expert-ai-reviews-from-your-favorite-authors-dead-or-alive/">Grammarly Is Offering ‘Expert’ AI Reviews From Your Favorite Authors—Dead or Alive</a> — Wired 4th March 2026</li><li><a href="https://www.theverge.com/ai-artificial-intelligence/890921/grammarly-ai-expert-reviews">Grammarly is using our identities without permission</a> — The Verge 6th March 2026</li><li><a href="https://www.platformer.news/grammarly-expert-review-reviewed/">Grammarly turned me into an AI editor against my will and I hate it</a> — Casey Newton, Platformer 9th March 2026</li><li><a href="https://prf-law.com/current-cases/class-action-alleges-that-grammarly-misappropriated-the-names-of-journalists-and-authors-through-its-expert-review">Details of the case, from PRF Law, Julia’s representative firm</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><strong>Computer Says Maybe is produced by </strong><a href="https://georgiaiacovou.com/"><strong>Georgia Iacovou</strong></a><strong>, Kushal Dev, Marion Wellington, </strong><a href="https://www.podcasts.london/"><strong>Sarah Myles</strong></a><strong>, Van Newman, and Zoe Trout</strong></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/25166f04/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Fantasy Factory: AI Supervillains w/ Anat Shenker-Osorio</title>
      <itunes:episode>110</itunes:episode>
      <podcast:episode>110</podcast:episode>
      <itunes:title>Fantasy Factory: AI Supervillains w/ Anat Shenker-Osorio</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f58bd4f9-0b36-48f4-9906-28628db378a6</guid>
      <link>https://share.transistor.fm/s/3117e919</link>
      <description>
        <![CDATA[<p>The left has a messaging problem. Silicon Valley elites are literally making up impossible fantasies and their narratives are winning out. Why?</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/the-stories-we-tell-ourselves-about-ai"><strong>The Stories we Tell Ourselves About AI</strong></a><strong><br></strong><br></p><p>This week, in our second episode leading up to the AI Doc, we are joined by Anat Shenker-Osorio, a progressive campaign strategist who hosts the Words To Win By podcast. Anat tries to focus on the positives: if you don’t think people should join the AI party, throw a better party. She gives us some quick lessons on messaging: how to paint tech CEOs as actual villains, how to flip the script and convince AI men that actually, it’s okay to die — and how to avoid what Anat refers to as ‘Mar-a-Lago face’.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Listen to Anat’s podcast <a href="https://wordstowinby-pod.com/">Words to Win By</a></li><li><a href="https://www.penguin.co.uk/books/431235/pre-suasion-by-robert-cialdini/9781847941435">Pre-Suasion</a> and <a href="https://ia800203.us.archive.org/33/items/ThePsychologyOfPersuasion/The%20Psychology%20of%20Persuasion.pdf">Influence</a> by Robert Cialdini</li><li><a href="https://www.communitychange.org/wp-content/uploads/C3-Messaging-This-Moment-Handbook.pdf">Messaging This Moment</a> — a critical handbook for progressive comms</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><strong>Computer Says Maybe is produced by </strong><a href="https://georgiaiacovou.com/"><strong>Georgia Iacovou</strong></a><strong>, Kushal Dev, Marion Wellington, </strong><a href="https://www.podcasts.london/"><strong>Sarah Myles</strong></a><strong>, Van Newman, and Zoe Trout</strong></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>The left has a messaging problem. Silicon Valley elites are literally making up impossible fantasies and their narratives are winning out. Why?</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/the-stories-we-tell-ourselves-about-ai"><strong>The Stories we Tell Ourselves About AI</strong></a><strong><br></strong><br></p><p>This week, in our second episode leading up to the AI Doc, we are joined by Anat Shenker-Osorio, a progressive campaign strategist who hosts the Words To Win By podcast. Anat tries to focus on the positives: if you don’t think people should join the AI party, throw a better party. She gives us some quick lessons on messaging: how to paint tech CEOs as actual villains, how to flip the script and convince AI men that actually, it’s okay to die — and how to avoid what Anat refers to as ‘Mar-a-Lago face’.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Listen to Anat’s podcast <a href="https://wordstowinby-pod.com/">Words to Win By</a></li><li><a href="https://www.penguin.co.uk/books/431235/pre-suasion-by-robert-cialdini/9781847941435">Pre-Suasion</a> and <a href="https://ia800203.us.archive.org/33/items/ThePsychologyOfPersuasion/The%20Psychology%20of%20Persuasion.pdf">Influence</a> by Robert Cialdini</li><li><a href="https://www.communitychange.org/wp-content/uploads/C3-Messaging-This-Moment-Handbook.pdf">Messaging This Moment</a> — a critical handbook for progressive comms</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><strong>Computer Says Maybe is produced by </strong><a href="https://georgiaiacovou.com/"><strong>Georgia Iacovou</strong></a><strong>, Kushal Dev, Marion Wellington, </strong><a href="https://www.podcasts.london/"><strong>Sarah Myles</strong></a><strong>, Van Newman, and Zoe Trout</strong></p>]]>
      </content:encoded>
      <pubDate>Fri, 20 Mar 2026 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/3117e919/408324c9.mp3" length="75487520" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/8etv8lnW7HFrX-1gvKbrTMpHd-2faF_vNYKShM_4eXU/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8wZjdi/ODA3YWMzZTk4NzQ3/YjA1NWRhYWI1YTg0/ODM3MS5wbmc.jpg"/>
      <itunes:duration>3143</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>The left has a messaging problem. Silicon Valley elites are literally making up impossible fantasies and their narratives are winning out. Why?</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/the-stories-we-tell-ourselves-about-ai"><strong>The Stories we Tell Ourselves About AI</strong></a><strong><br></strong><br></p><p>This week, in our second episode leading up to the AI Doc, we are joined by Anat Shenker-Osorio, a progressive campaign strategist who hosts the Words To Win By podcast. Anat tries to focus on the positives: if you don’t think people should join the AI party, throw a better party. She gives us some quick lessons on messaging: how to paint tech CEOs as actual villains, how to flip the script and convince AI men that actually, it’s okay to die — and how to avoid what Anat refers to as ‘Mar-a-Lago face’.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Listen to Anat’s podcast <a href="https://wordstowinby-pod.com/">Words to Win By</a></li><li><a href="https://www.penguin.co.uk/books/431235/pre-suasion-by-robert-cialdini/9781847941435">Pre-Suasion</a> and <a href="https://ia800203.us.archive.org/33/items/ThePsychologyOfPersuasion/The%20Psychology%20of%20Persuasion.pdf">Influence</a> by Robert Cialdini</li><li><a href="https://www.communitychange.org/wp-content/uploads/C3-Messaging-This-Moment-Handbook.pdf">Messaging This Moment</a> — a critical handbook for progressive comms</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><strong>Computer Says Maybe is produced by </strong><a href="https://georgiaiacovou.com/"><strong>Georgia Iacovou</strong></a><strong>, Kushal Dev, Marion Wellington, </strong><a href="https://www.podcasts.london/"><strong>Sarah Myles</strong></a><strong>, Van Newman, and Zoe Trout</strong></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/3117e919/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Fantasy Factory: AGI is Scientifically Impossible w/ Adam Becker</title>
      <itunes:episode>109</itunes:episode>
      <podcast:episode>109</podcast:episode>
      <itunes:title>Fantasy Factory: AGI is Scientifically Impossible w/ Adam Becker</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a4e935fb-5c6d-4183-aabd-20a98dd235cb</guid>
      <link>https://share.transistor.fm/s/16d45bc8</link>
      <description>
        <![CDATA[<p>Next time someone tells you that we can build data centres in space, show them this podcast episode — because it is literally impossible.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/ai-safety-s-spiral-of-urgency-w-shazeda-ahmed"><strong>AI Safety’s Spiral of Urgency w/ Shazeda Ahmed</strong></a><strong><br></strong><br></p><p>Or better yet, recommend that they buy <a href="https://www.hachettebookgroup.com/titles/adam-becker/more-everything-forever/9781541619593/">More Everything Forever</a>, Adam Becker’s latest book exploring all the fantasies and promises coming out of Silicon Valley. This episode is the first in our Fantasy Factory series, where we explore how and why tech evangelists manufacture consent about AI’s boom, doom, and inevitability.</p><p>The futures that AI men want for us — e.g. a disembodied immortal life in AI utopia — are all scientifically impossible. Even the worst mass-extinction event on Earth would be more pleasant than trying to live on Mars. Yes, space is very cold, but that doesn’t mean we should put data centres out there!
Adam explains where these narratives are coming from, who they benefit, and why they exist outside the laws of physics.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>More about <a href="https://freelanceastrophysicist.com/">Adam Becker</a></li><li>Buy <a href="https://www.hachettebookgroup.com/titles/adam-becker/more-everything-forever/9781541619593/">More Everything Forever</a></li><li><a href="https://en.wikipedia.org/wiki/For_All_Mankind_(TV_series)">For All Mankind (TV series)</a></li><li><a href="https://youtu.be/CNZI7n7OdLI?si=oAVInUDqkgwF-1qz">Our video on the Iran war</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><strong>Computer Says Maybe is produced by </strong><a href="https://georgiaiacovou.com/"><strong>Georgia Iacovou</strong></a><strong>, Kushal Dev, Marion Wellington, </strong><a href="https://www.podcasts.london/"><strong>Sarah Myles</strong></a><strong>, Van Newman, and Zoe Trout</strong></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Next time someone tells you that we can build data centres in space, show them this podcast episode — because it is literally impossible.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/ai-safety-s-spiral-of-urgency-w-shazeda-ahmed"><strong>AI Safety’s Spiral of Urgency w/ Shazeda Ahmed</strong></a><strong><br></strong><br></p><p>Or better yet, recommend that they buy <a href="https://www.hachettebookgroup.com/titles/adam-becker/more-everything-forever/9781541619593/">More Everything Forever</a>, Adam Becker’s latest book exploring all the fantasies and promises coming out of Silicon Valley. This episode is the first in our Fantasy Factory series, where we explore how and why tech evangelists manufacture consent about AI’s boom, doom, and inevitability.</p><p>The futures that AI men want for us — e.g. a disembodied immortal life in AI utopia — are all scientifically impossible. Even the worst mass-extinction event on Earth would be more pleasant than trying to live on Mars. Yes, space is very cold, but that doesn’t mean we should put data centres out there!
Adam explains where these narratives are coming from, who they benefit, and why they exist outside the laws of physics.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>More about <a href="https://freelanceastrophysicist.com/">Adam Becker</a></li><li>Buy <a href="https://www.hachettebookgroup.com/titles/adam-becker/more-everything-forever/9781541619593/">More Everything Forever</a></li><li><a href="https://en.wikipedia.org/wiki/For_All_Mankind_(TV_series)">For All Mankind (TV series)</a></li><li><a href="https://youtu.be/CNZI7n7OdLI?si=oAVInUDqkgwF-1qz">Our video on the Iran war</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><strong>Computer Says Maybe is produced by </strong><a href="https://georgiaiacovou.com/"><strong>Georgia Iacovou</strong></a><strong>, Kushal Dev, Marion Wellington, </strong><a href="https://www.podcasts.london/"><strong>Sarah Myles</strong></a><strong>, Van Newman, and Zoe Trout</strong></p>]]>
      </content:encoded>
      <pubDate>Fri, 13 Mar 2026 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/16d45bc8/95fce261.mp3" length="73556430" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/u9qmgcmUJkBqEEezK6623rMR8gZQy98u5xrauriQFrk/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS85NjUw/NmZhOTYxZWQwY2Iy/MTU2MDE4MTYxZTc2/ZjBmZS5wbmc.jpg"/>
      <itunes:duration>3062</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Next time someone tells you that we can build data centres in space, show them this podcast episode — because it is literally impossible.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/ai-safety-s-spiral-of-urgency-w-shazeda-ahmed"><strong>AI Safety’s Spiral of Urgency w/ Shazeda Ahmed</strong></a><strong><br></strong><br></p><p>Or better yet, recommend that they buy <a href="https://www.hachettebookgroup.com/titles/adam-becker/more-everything-forever/9781541619593/">More Everything Forever</a>, Adam Becker’s latest book exploring all the fantasies and promises coming out of Silicon Valley. This episode is the first in our Fantasy Factory series, where we explore how and why tech evangelists manufacture consent about AI’s boom, doom, and inevitability.</p><p>The futures that AI men want for us — e.g. a disembodied immortal life in AI utopia — are all scientifically impossible. Even the worst mass-extinction event on Earth would be more pleasant than trying to live on Mars. Yes, space is very cold, but that doesn’t mean we should put data centres out there!
Adam explains where these narratives are coming from, who they benefit, and why they exist outside the laws of physics.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>More about <a href="https://freelanceastrophysicist.com/">Adam Becker</a></li><li>Buy <a href="https://www.hachettebookgroup.com/titles/adam-becker/more-everything-forever/9781541619593/">More Everything Forever</a></li><li><a href="https://en.wikipedia.org/wiki/For_All_Mankind_(TV_series)">For All Mankind (TV series)</a></li><li><a href="https://youtu.be/CNZI7n7OdLI?si=oAVInUDqkgwF-1qz">Our video on the Iran war</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><strong>Computer Says Maybe is produced by </strong><a href="https://georgiaiacovou.com/"><strong>Georgia Iacovou</strong></a><strong>, Kushal Dev, Marion Wellington, </strong><a href="https://www.podcasts.london/"><strong>Sarah Myles</strong></a><strong>, Van Newman, and Zoe Trout</strong></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/16d45bc8/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Livestream: The People’s Policy: Holding Big Tech Accountable</title>
      <itunes:episode>108</itunes:episode>
      <podcast:episode>108</podcast:episode>
      <itunes:title>Livestream: The People’s Policy: Holding Big Tech Accountable</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">eaec2402-1b5d-4c94-a387-006f60fb80f0</guid>
      <link>https://share.transistor.fm/s/27263b56</link>
      <description>
        <![CDATA[<p>How does an oppressed workforce organise against Big Tech employers with even bigger lobbying muscle?</p><p><strong>More like this: </strong><a href="https://saysmaybe.com/podcast/worker-power-big-tech-bossmen-w-david-seligman"><strong>Worker Power &amp; Big Tech Boss Men w/ David Seligman</strong></a></p><p>This week’s episode is a recording of our livestream from Monday: a litigator, regulator, and activist share their work and perspectives on coordinating bottom-up fights against Big Tech power, worker suppression, and unfair consumer practices. Speakers are:</p><ul><li>David Seligman, Executive Director of <a href="https://towardsjustice.org/">Towards Justice</a> and <a href="https://www.seligmanforag.com/about">Democratic candidate for Colorado Attorney General</a></li><li><a href="https://www.ftc.gov/about-ftc/commissioners-staff/alvaro-bedoya">Alvaro Bedoya</a>, former Commissioner of the Federal Trade Commission and founding director of the <a href="https://www.law.georgetown.edu/privacy-technology-center/">Center on Privacy and Technology at Georgetown University Law Center</a></li><li>Elliott “El’Bo” Awatt, Driver Organizer with <a href="https://www.cidu-cwa7777.org/">Colorado Independent Drivers United</a></li></ul><p><strong>Further reading &amp; resources:</strong></p><ul><li>Alvaro Bedoya on <a href="https://newrepublic.com/article/201171/alvaro-bedoya-ftc-became-populist">how he became a populist</a></li><li><a href="https://coloradonewsline.com/2024/10/28/lyft-uber-national-movement/">Drivers for Lyft and Uber are building a national movement</a> — Colorado Newsline 2024</li><li><a href="https://www.westword.com/news/we-tested-uber-updates-claims-under-colorado-pay-transparency-rules-23526047/">Uber Claims Transparency Law Complicates Rides and Takes Away Driver Perks – but Does It?</a> — Westword, February 2025</li><li><a 
href="https://www.reddit.com/r/FortCollins/comments/1qj3io1/ama_with_co_attorney_general_candidate_david/">An AMA on Reddit with David Seligman</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!<br><strong><br>Computer Says Maybe is produced by </strong><a href="https://georgiaiacovou.com/"><strong>Georgia Iacovou</strong></a><strong>, Kushal Dev, Marion Wellington, </strong><a href="https://www.podcasts.london/"><strong>Sarah Myles</strong></a><strong>, Van Newman, and Zoe Trout</strong></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>How does an oppressed workforce organise against Big Tech employers with even bigger lobbying muscle?</p><p><strong>More like this: </strong><a href="https://saysmaybe.com/podcast/worker-power-big-tech-bossmen-w-david-seligman"><strong>Worker Power &amp; Big Tech Boss Men w/ David Seligman</strong></a></p><p>This week’s episode is a recording of our livestream from Monday: a litigator, regulator, and activist share their work and perspectives on coordinating bottom-up fights against Big Tech power, worker suppression, and unfair consumer practices. Speakers are:</p><ul><li>David Seligman, Executive Director of <a href="https://towardsjustice.org/">Towards Justice</a> and <a href="https://www.seligmanforag.com/about">Democratic candidate for Colorado Attorney General</a></li><li><a href="https://www.ftc.gov/about-ftc/commissioners-staff/alvaro-bedoya">Alvaro Bedoya</a>, former Commissioner of the Federal Trade Commission and founding director of the <a href="https://www.law.georgetown.edu/privacy-technology-center/">Center on Privacy and Technology at Georgetown University Law Center</a></li><li>Elliott “El’Bo” Awatt, Driver Organizer with <a href="https://www.cidu-cwa7777.org/">Colorado Independent Drivers United</a></li></ul><p><strong>Further reading &amp; resources:</strong></p><ul><li>Alvaro Bedoya on <a href="https://newrepublic.com/article/201171/alvaro-bedoya-ftc-became-populist">how he became a populist</a></li><li><a href="https://coloradonewsline.com/2024/10/28/lyft-uber-national-movement/">Drivers for Lyft and Uber are building a national movement</a> — Colorado Newsline 2024</li><li><a href="https://www.westword.com/news/we-tested-uber-updates-claims-under-colorado-pay-transparency-rules-23526047/">Uber Claims Transparency Law Complicates Rides and Takes Away Driver Perks – but Does It?</a> — Westword, February 2025</li><li><a 
href="https://www.reddit.com/r/FortCollins/comments/1qj3io1/ama_with_co_attorney_general_candidate_david/">An AMA on Reddit with David Seligman</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!<br><strong><br>Computer Says Maybe is produced by </strong><a href="https://georgiaiacovou.com/"><strong>Georgia Iacovou</strong></a><strong>, Kushal Dev, Marion Wellington, </strong><a href="https://www.podcasts.london/"><strong>Sarah Myles</strong></a><strong>, Van Newman, and Zoe Trout</strong></p>]]>
      </content:encoded>
      <pubDate>Fri, 06 Mar 2026 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/27263b56/3c32543d.mp3" length="86852304" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/mfQthsyDeReTKfICSk-bWUl8EglB6eMxGifZCUNxokU/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82Yzdk/NDQ3YTVlMTFlZDFj/ZjNhNzhhYWZmZTU5/ZWVkMS5wbmc.jpg"/>
      <itunes:duration>3616</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>How does an oppressed workforce organise against Big Tech employers with even bigger lobbying muscle?</p><p><strong>More like this: </strong><a href="https://saysmaybe.com/podcast/worker-power-big-tech-bossmen-w-david-seligman"><strong>Worker Power &amp; Big Tech Boss Men w/ David Seligman</strong></a></p><p>This week’s episode is a recording of our livestream from Monday: a litigator, regulator, and activist share their work and perspectives on coordinating bottom-up fights against Big Tech power, worker suppression, and unfair consumer practices. Speakers are:</p><ul><li>David Seligman, Executive Director of <a href="https://towardsjustice.org/">Towards Justice</a> and <a href="https://www.seligmanforag.com/about">Democratic candidate for Colorado Attorney General</a></li><li><a href="https://www.ftc.gov/about-ftc/commissioners-staff/alvaro-bedoya">Alvaro Bedoya</a>, former Commissioner of the Federal Trade Commission and founding director of the <a href="https://www.law.georgetown.edu/privacy-technology-center/">Center on Privacy and Technology at Georgetown University Law Center</a></li><li>Elliott “El’Bo” Awatt, Driver Organizer with <a href="https://www.cidu-cwa7777.org/">Colorado Independent Drivers United</a></li></ul><p><strong>Further reading &amp; resources:</strong></p><ul><li>Alvaro Bedoya on <a href="https://newrepublic.com/article/201171/alvaro-bedoya-ftc-became-populist">how he became a populist</a></li><li><a href="https://coloradonewsline.com/2024/10/28/lyft-uber-national-movement/">Drivers for Lyft and Uber are building a national movement</a> — Colorado Newsline 2024</li><li><a href="https://www.westword.com/news/we-tested-uber-updates-claims-under-colorado-pay-transparency-rules-23526047/">Uber Claims Transparency Law Complicates Rides and Takes Away Driver Perks – but Does It?</a> — Westword, February 2025</li><li><a 
href="https://www.reddit.com/r/FortCollins/comments/1qj3io1/ama_with_co_attorney_general_candidate_david/">An AMA on Reddit with David Seligman</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!<br><strong><br>Computer Says Maybe is produced by </strong><a href="https://georgiaiacovou.com/"><strong>Georgia Iacovou</strong></a><strong>, Kushal Dev, Marion Wellington, </strong><a href="https://www.podcasts.london/"><strong>Sarah Myles</strong></a><strong>, Van Newman, and Zoe Trout</strong></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/27263b56/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Lingo Bingo at the India AI Summit w/ Naomi Klein, Timnit Gebru, Nikhil Dey, and Chinasa Okolo</title>
      <itunes:episode>107</itunes:episode>
      <podcast:episode>107</podcast:episode>
      <itunes:title>Lingo Bingo at the India AI Summit w/ Naomi Klein, Timnit Gebru, Nikhil Dey, and Chinasa Okolo</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a1c7d81a-9260-4e75-9d25-4d12f19e5115</guid>
      <link>https://share.transistor.fm/s/74287687</link>
      <description>
        <![CDATA[<p>This is the last episode of our AI Lingo Bingo series! We dig into four more co-opted concepts with four more all-stars.</p><p><strong>More like this: </strong><a href="https://themaybe.org/podcast/lingo-bingo-at-the-india-ai-summit-w-meredith-whittaker-audrey-tang-abeba-birhane-and-usha-ramanathan"><strong>Last week’s episode with Meredith Whittaker, Audrey Tang, Abeba Birhane, and Usha Ramanathan</strong></a><strong><br></strong><br></p><p>This week we’ll hear from Naomi Klein, who will discuss how ‘AI for Climate’ is very much not a thing; Nikhil Dey who shares all the ways powerful actors cosplay at having ‘accountability’; Timnit Gebru who explains that ‘frugal AI’ is only made to seem novel by the hype &amp; scale of big tech business models; and finally Chinasa Okolo who will help us better understand the complexities of ‘multilateralism’.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>More on <a href="https://www.sambhaavnaa.org/facilitators/nikhil-dey/">Nikhil Dey</a> — social activist and a founding member of the <a href="https://mkssindia.org/">Mazdoor Kisan Shakti Sangathan</a> (MKSS)</li><li>More on <a href="https://www.dair-institute.org/team/timnit-gebru/">Timnit Gebru</a> — founder of the <a href="https://www.dair-institute.org/">DAIR institute</a></li><li>More on <a href="https://naomiklein.org/">Naomi Klein</a> — author and professor of climate justice at the University of British Columbia</li><li>More on <a href="https://chinasatokolo.github.io/">Chinasa Okolo</a> — founder of <a href="https://www.technecultura.org/">Technecultura</a>, a research institute focussing on AI governance for global majority countries</li><li><a href="https://www.theguardian.com/global-development/2013/jun/13/transparency-india-nikhil-dey">The Guardian’s profile on Nikhil</a> — June 2013</li><li><a href="https://www.newtactics.org/story/right-know-right-live-building-campaign-right-information-and-accountability/">More about 
MKSS involvement in the Campaign for the Right to Information in India</a></li><li><a href="https://theintercept.com/2020/05/08/andrew-cuomo-eric-schmidt-coronavirus-tech-shock-doctrine/">The Screen New Deal</a> — by Naomi Klein, The Intercept, 2020</li><li><a href="https://www.canadaaction.ca/northern-gateway-pipeline-cancellation-facts">More on the cancellation of the Northern Gateway Pipeline</a></li><li><a href="https://translation.ghananlp.org/">Ghana NLP</a></li><li><a href="https://www.youtube.com/watch?v=uclzNK5MFvk&amp;list=PLUe4RjfOLVwWquUJV6xkkCo8ftU5sDWhi">Watch this week’s interviews in full on YouTube</a></li><li>RSVP to <a href="https://luma.com/7ey0m3ar">The People's Policy: Holding Big Tech Accountable [Livestreamed Conversation + Q&amp;A]</a> — happening on March 2nd at 5:30pm MT</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p>Post Production by <a href="http://podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is the last episode of our AI Lingo Bingo series! We dig into four more co-opted concepts with four more all-stars.</p><p><strong>More like this: </strong><a href="https://themaybe.org/podcast/lingo-bingo-at-the-india-ai-summit-w-meredith-whittaker-audrey-tang-abeba-birhane-and-usha-ramanathan"><strong>Last week’s episode with Meredith Whittaker, Audrey Tang, Abeba Birhane, and Usha Ramanathan</strong></a><strong><br></strong><br></p><p>This week we’ll hear from Naomi Klein, who will discuss how ‘AI for Climate’ is very much not a thing; Nikhil Dey who shares all the ways powerful actors cosplay at having ‘accountability’; Timnit Gebru who explains that ‘frugal AI’ is only made to seem novel by the hype &amp; scale of big tech business models; and finally Chinasa Okolo who will help us better understand the complexities of ‘multilateralism’.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>More on <a href="https://www.sambhaavnaa.org/facilitators/nikhil-dey/">Nikhil Dey</a> — social activist and a founding member of the <a href="https://mkssindia.org/">Mazdoor Kisan Shakti Sangathan</a> (MKSS)</li><li>More on <a href="https://www.dair-institute.org/team/timnit-gebru/">Timnit Gebru</a> — founder of the <a href="https://www.dair-institute.org/">DAIR institute</a></li><li>More on <a href="https://naomiklein.org/">Naomi Klein</a> — author and professor of climate justice at the University of British Columbia</li><li>More on <a href="https://chinasatokolo.github.io/">Chinasa Okolo</a> — founder of <a href="https://www.technecultura.org/">Technecultura</a>, a research institute focussing on AI governance for global majority countries</li><li><a href="https://www.theguardian.com/global-development/2013/jun/13/transparency-india-nikhil-dey">The Guardian’s profile on Nikhil</a> — June 2013</li><li><a href="https://www.newtactics.org/story/right-know-right-live-building-campaign-right-information-and-accountability/">More about 
MKSS involvement in the Campaign for the Right to Information in India</a></li><li><a href="https://theintercept.com/2020/05/08/andrew-cuomo-eric-schmidt-coronavirus-tech-shock-doctrine/">The Screen New Deal</a> — by Naomi Klein, The Intercept, 2020</li><li><a href="https://www.canadaaction.ca/northern-gateway-pipeline-cancellation-facts">More on the cancellation of the Northern Gateway Pipeline</a></li><li><a href="https://translation.ghananlp.org/">Ghana NLP</a></li><li><a href="https://www.youtube.com/watch?v=uclzNK5MFvk&amp;list=PLUe4RjfOLVwWquUJV6xkkCo8ftU5sDWhi">Watch this week’s interviews in full on YouTube</a></li><li>RSVP to <a href="https://luma.com/7ey0m3ar">The People's Policy: Holding Big Tech Accountable [Livestreamed Conversation + Q&amp;A]</a> — happening on March 2nd at 5:30pm MT</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p>Post Production by <a href="http://podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </content:encoded>
      <pubDate>Fri, 27 Feb 2026 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/74287687/43542645.mp3" length="76681769" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/ytJm5C_p79g6WFnv0bluV-p7roWMiuuOlQeXzaqgRpc/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xZmVl/MjVmZWE3ZGVkMGVh/NmVjZGMyNzE0YjQ2/OGUwYy5wbmc.jpg"/>
      <itunes:duration>3194</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is the last episode of our AI Lingo Bingo series! We dig into four more co-opted concepts with four more all stars.</p><p><strong>More like this: </strong><a href="https://themaybe.org/podcast/lingo-bingo-at-the-india-ai-summit-w-meredith-whittaker-audrey-tang-abeba-birhane-and-usha-ramanathan"><strong>Last week’s episode with Meredith Whittaker, Audrey Tang, Abeba Birhane, and Usha Ramanathan</strong></a><br><br></p><p>This week we’ll hear from Naomi Klein, who will discuss how ‘AI for Climate’ is very much not a thing; Nikhil Dey, who shares all the ways powerful actors cosplay at having ‘accountability’; Timnit Gebru, who explains that ‘frugal AI’ is only made to seem novel by the hype &amp; scale of big tech business models; and finally Chinasa Okolo, who will help us better understand the complexities of ‘multilateralism’.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>More on <a href="https://www.sambhaavnaa.org/facilitators/nikhil-dey/">Nikhil Dey</a> — social activist and a founding member of the <a href="https://mkssindia.org/">Mazdoor Kisan Shakti Sangathan</a> (MKSS)</li><li>More on <a href="https://www.dair-institute.org/team/timnit-gebru/">Timnit Gebru</a> — founder of the <a href="https://www.dair-institute.org/">DAIR institute</a></li><li>More on <a href="https://naomiklein.org/">Naomi Klein</a> — author and professor of climate justice at the University of British Columbia</li><li>More on <a href="https://chinasatokolo.github.io/">Chinasa Okolo</a> — founder of <a href="https://www.technecultura.org/">Technecultura</a>, a research institute focussing on AI governance for global majority countries</li><li><a href="https://www.theguardian.com/global-development/2013/jun/13/transparency-india-nikhil-dey">The Guardian’s profile on Nikhil</a> — June 2013</li><li><a href="https://www.newtactics.org/story/right-know-right-live-building-campaign-right-information-and-accountability/">More about MKSS’s involvement in the Campaign for the Right to Information in India</a></li><li><a href="https://theintercept.com/2020/05/08/andrew-cuomo-eric-schmidt-coronavirus-tech-shock-doctrine/">The Screen New Deal</a> — by Naomi Klein, The Intercept, 2020</li><li><a href="https://www.canadaaction.ca/northern-gateway-pipeline-cancellation-facts">More on the cancellation of the Northern Gateway Pipeline</a></li><li><a href="https://translation.ghananlp.org/">Ghana NLP</a></li><li><a href="https://www.youtube.com/watch?v=uclzNK5MFvk&amp;list=PLUe4RjfOLVwWquUJV6xkkCo8ftU5sDWhi">Watch this week’s interviews in full on YouTube</a></li><li>RSVP to <a href="https://luma.com/7ey0m3ar">The People's Policy: Holding Big Tech Accountable [Livestreamed Conversation + Q&amp;A]</a> — happening on March 2nd, 5:30pm MT</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p>Post Production by <a href="http://podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/74287687/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Is Claude Out of the War Business? w/ Amos Toh</title>
      <itunes:episode>106</itunes:episode>
      <podcast:episode>106</podcast:episode>
      <itunes:title>Is Claude Out of the War Business? w/ Amos Toh</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">e6be5068-8752-44b5-b29f-b60f50298848</guid>
      <link>https://share.transistor.fm/s/a7733681</link>
      <description>
        <![CDATA[<p>Anthropic’s Claude was used in the military operation to kidnap President Maduro earlier this year. Why? Unclear. Was this legal? Absolutely not.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/ai-in-gaza-live-from-mexico-city"><strong>AI In Gaza: Live from Mexico City</strong></a><br><br></p><p>Surprise, surprise: the DoD feels that they should be able to use AI models however they want, as long as it’s lawful — but… was this lawful? They are now threatening to designate Anthropic as a supply chain risk. What does this all mean?</p><p>For this short, Alix was joined by <a href="https://www.brennancenter.org/about/staff/amos-toh">Amos Toh</a>, senior counsel at the <a href="https://www.brennancenter.org/">Brennan Center for Justice</a>, to help us understand why the US defence department and an AI company are arguing about how best to use AI models for dehumanising and unjust military purposes.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.axios.com/2026/02/13/anthropic-claude-maduro-raid-pentagon">Pentagon's use of Claude during Maduro raid sparks Anthropic feud</a> — Axios, Feb 13</li><li><a href="https://thehill.com/policy/defense/5744403-anthropic-pentagon-ai-dispute/">Anthropic on shaky ground with Pentagon amid feud after Maduro raid</a> — The Hill, Feb 19</li><li><a href="https://www.reuters.com/world/americas/us-used-anthropics-claude-during-the-venezuela-raid-wsj-reports-2026-02-13/">US used Anthropic's Claude during the Venezuela raid, WSJ reports</a> — Reuters, Feb 16</li><li><a href="https://www.wsj.com/politics/national-security/pentagon-used-anthropics-claude-in-maduro-venezuela-raid-583aff17">Pentagon Used Anthropic’s Claude in Maduro Venezuela Raid</a> — WSJ, Feb 15</li><li><a href="https://bsky.app/profile/amostoh.bsky.social/post/3mfjqzqgmjs2i">Amos’s Bluesky thread sharing more thoughts on the story</a></li></ul><p><strong>Computer Says Maybe Shorts bring in experts to give their ten-minute take on recent news. If there’s ever a news story you think we should bring in expertise on for the show, please email </strong><a href="mailto:pod@themaybe.org"><strong>pod@themaybe.org</strong></a></p><p>Post Production by <a href="http://podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Anthropic’s Claude was used in the military operation to kidnap President Maduro earlier this year. Why? Unclear. Was this legal? Absolutely not.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/ai-in-gaza-live-from-mexico-city"><strong>AI In Gaza: Live from Mexico City</strong></a><br><br></p><p>Surprise, surprise: the DoD feels that they should be able to use AI models however they want, as long as it’s lawful — but… was this lawful? They are now threatening to designate Anthropic as a supply chain risk. What does this all mean?</p><p>For this short, Alix was joined by <a href="https://www.brennancenter.org/about/staff/amos-toh">Amos Toh</a>, senior counsel at the <a href="https://www.brennancenter.org/">Brennan Center for Justice</a>, to help us understand why the US defence department and an AI company are arguing about how best to use AI models for dehumanising and unjust military purposes.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.axios.com/2026/02/13/anthropic-claude-maduro-raid-pentagon">Pentagon's use of Claude during Maduro raid sparks Anthropic feud</a> — Axios, Feb 13</li><li><a href="https://thehill.com/policy/defense/5744403-anthropic-pentagon-ai-dispute/">Anthropic on shaky ground with Pentagon amid feud after Maduro raid</a> — The Hill, Feb 19</li><li><a href="https://www.reuters.com/world/americas/us-used-anthropics-claude-during-the-venezuela-raid-wsj-reports-2026-02-13/">US used Anthropic's Claude during the Venezuela raid, WSJ reports</a> — Reuters, Feb 16</li><li><a href="https://www.wsj.com/politics/national-security/pentagon-used-anthropics-claude-in-maduro-venezuela-raid-583aff17">Pentagon Used Anthropic’s Claude in Maduro Venezuela Raid</a> — WSJ, Feb 15</li><li><a href="https://bsky.app/profile/amostoh.bsky.social/post/3mfjqzqgmjs2i">Amos’s Bluesky thread sharing more thoughts on the story</a></li></ul><p><strong>Computer Says Maybe Shorts bring in experts to give their ten-minute take on recent news. If there’s ever a news story you think we should bring in expertise on for the show, please email </strong><a href="mailto:pod@themaybe.org"><strong>pod@themaybe.org</strong></a></p><p>Post Production by <a href="http://podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </content:encoded>
      <pubDate>Wed, 25 Feb 2026 14:13:58 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/a7733681/af457d78.mp3" length="11670164" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/HV2SQd1iYA4tOL3ROSfKQbIj4jRq2cDGPgvTf9GHFHU/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82MzIz/YmFiM2RhNjNiZjRh/NjMzNzA2MzIwOThj/ZDA1YS5wbmc.jpg"/>
      <itunes:duration>483</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Anthropic’s Claude was used in the military operation to kidnap President Maduro earlier this year. Why? Unclear. Was this legal? Absolutely not.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/ai-in-gaza-live-from-mexico-city"><strong>AI In Gaza: Live from Mexico City</strong></a><br><br></p><p>Surprise, surprise: the DoD feels that they should be able to use AI models however they want, as long as it’s lawful — but… was this lawful? They are now threatening to designate Anthropic as a supply chain risk. What does this all mean?</p><p>For this short, Alix was joined by <a href="https://www.brennancenter.org/about/staff/amos-toh">Amos Toh</a>, senior counsel at the <a href="https://www.brennancenter.org/">Brennan Center for Justice</a>, to help us understand why the US defence department and an AI company are arguing about how best to use AI models for dehumanising and unjust military purposes.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.axios.com/2026/02/13/anthropic-claude-maduro-raid-pentagon">Pentagon's use of Claude during Maduro raid sparks Anthropic feud</a> — Axios, Feb 13</li><li><a href="https://thehill.com/policy/defense/5744403-anthropic-pentagon-ai-dispute/">Anthropic on shaky ground with Pentagon amid feud after Maduro raid</a> — The Hill, Feb 19</li><li><a href="https://www.reuters.com/world/americas/us-used-anthropics-claude-during-the-venezuela-raid-wsj-reports-2026-02-13/">US used Anthropic's Claude during the Venezuela raid, WSJ reports</a> — Reuters, Feb 16</li><li><a href="https://www.wsj.com/politics/national-security/pentagon-used-anthropics-claude-in-maduro-venezuela-raid-583aff17">Pentagon Used Anthropic’s Claude in Maduro Venezuela Raid</a> — WSJ, Feb 15</li><li><a href="https://bsky.app/profile/amostoh.bsky.social/post/3mfjqzqgmjs2i">Amos’s Bluesky thread sharing more thoughts on the story</a></li></ul><p><strong>Computer Says Maybe Shorts bring in experts to give their ten-minute take on recent news. If there’s ever a news story you think we should bring in expertise on for the show, please email </strong><a href="mailto:pod@themaybe.org"><strong>pod@themaybe.org</strong></a></p><p>Post Production by <a href="http://podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/a7733681/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Lingo Bingo at the India AI Summit w/ Meredith Whittaker, Audrey Tang, Abeba Birhane, and Usha Ramanathan</title>
      <itunes:episode>105</itunes:episode>
      <podcast:episode>105</podcast:episode>
      <itunes:title>Lingo Bingo at the India AI Summit w/ Meredith Whittaker, Audrey Tang, Abeba Birhane, and Usha Ramanathan</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d76c048e-1c59-4836-9ed8-56b39e3ae0fc</guid>
      <link>https://share.transistor.fm/s/f5112e0e</link>
      <description>
        <![CDATA[<p>It’s our second week of playing AI lingo bingo. The summit in India is underway and the air is thick with vague terms that fail to describe the big problems.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/lingo-bingo-at-the-india-ai-summit-w-karen-hao-joan-kinyua-chenai-chair-and-rafael-grohman"><strong>Lingo Bingo at the India AI Summit w/ Karen Hao, Joan Kinyua, Chenai Chair, and Rafael Grohmann</strong></a><br><br></p><p>With us this week to discuss co-opted terms are Meredith Whittaker, on how ‘open source’ cannot meaningfully be applied to AI systems; Audrey Tang, on ‘democratisation’, something which is both helped and harmed by AI; Abeba Birhane, on everyone’s favourite slogan ‘AI for Good’; and Usha Ramanathan, on ‘AI and development’ in the context of the Aadhaar project in India.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>More on <a href="https://en.wikipedia.org/wiki/Usha_Ramanathan">Usha Ramanathan</a> — legal researcher and human rights activist</li><li>More on <a href="https://abebabirhane.com/">Abeba Birhane</a> — principal investigator at the <a href="https://aial.ie/">AI Accountability Lab</a> at Trinity College Dublin</li><li>More on <a href="https://en.wikipedia.org/wiki/Meredith_Whittaker">Meredith Whittaker</a> — President of Signal</li><li>More on <a href="https://en.wikipedia.org/wiki/Audrey_Tang">Audrey Tang</a> — Taiwan’s first Digital Minister</li><li><a href="https://arxiv.org/html/2512.16856v1">Distributional AGI Safety</a> — by Nenad Tomašev et al</li><li><a href="https://www.youtube.com/watch?v=uclzNK5MFvk&amp;list=PLUe4RjfOLVwWquUJV6xkkCo8ftU5sDWhi">Watch this week’s interviews in full on YouTube</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><strong>Pre Production by </strong><a href="https://georgiaiacovou.com/"><strong>Georgia Iacovou</strong></a><strong> | Post Production by </strong><a href="http://podcasts.london/"><strong>Sarah Myles</strong></a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>It’s our second week of playing AI lingo bingo. The summit in India is underway and the air is thick with vague terms that fail to describe the big problems.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/lingo-bingo-at-the-india-ai-summit-w-karen-hao-joan-kinyua-chenai-chair-and-rafael-grohman"><strong>Lingo Bingo at the India AI Summit w/ Karen Hao, Joan Kinyua, Chenai Chair, and Rafael Grohmann</strong></a><br><br></p><p>With us this week to discuss co-opted terms are Meredith Whittaker, on how ‘open source’ cannot meaningfully be applied to AI systems; Audrey Tang, on ‘democratisation’, something which is both helped and harmed by AI; Abeba Birhane, on everyone’s favourite slogan ‘AI for Good’; and Usha Ramanathan, on ‘AI and development’ in the context of the Aadhaar project in India.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>More on <a href="https://en.wikipedia.org/wiki/Usha_Ramanathan">Usha Ramanathan</a> — legal researcher and human rights activist</li><li>More on <a href="https://abebabirhane.com/">Abeba Birhane</a> — principal investigator at the <a href="https://aial.ie/">AI Accountability Lab</a> at Trinity College Dublin</li><li>More on <a href="https://en.wikipedia.org/wiki/Meredith_Whittaker">Meredith Whittaker</a> — President of Signal</li><li>More on <a href="https://en.wikipedia.org/wiki/Audrey_Tang">Audrey Tang</a> — Taiwan’s first Digital Minister</li><li><a href="https://arxiv.org/html/2512.16856v1">Distributional AGI Safety</a> — by Nenad Tomašev et al</li><li><a href="https://www.youtube.com/watch?v=uclzNK5MFvk&amp;list=PLUe4RjfOLVwWquUJV6xkkCo8ftU5sDWhi">Watch this week’s interviews in full on YouTube</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><strong>Pre Production by </strong><a href="https://georgiaiacovou.com/"><strong>Georgia Iacovou</strong></a><strong> | Post Production by </strong><a href="http://podcasts.london/"><strong>Sarah Myles</strong></a></p>]]>
      </content:encoded>
      <pubDate>Fri, 20 Feb 2026 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/f5112e0e/04913d88.mp3" length="64196206" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/FM-dGrJtdezuJo83zrUBhrfkCu6nrISR3JHfN1QVM8Q/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84MzA2/NGMxYzA2ZDliNzRj/ZGNmZWZkYWYwMGUx/MjczMS5wbmc.jpg"/>
      <itunes:duration>2672</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>It’s our second week of playing AI lingo bingo. The summit in India is underway and the air is thick with vague terms that fail to describe the big problems.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/lingo-bingo-at-the-india-ai-summit-w-karen-hao-joan-kinyua-chenai-chair-and-rafael-grohman"><strong>Lingo Bingo at the India AI Summit w/ Karen Hao, Joan Kinyua, Chenai Chair, and Rafael Grohmann</strong></a><br><br></p><p>With us this week to discuss co-opted terms are Meredith Whittaker, on how ‘open source’ cannot meaningfully be applied to AI systems; Audrey Tang, on ‘democratisation’, something which is both helped and harmed by AI; Abeba Birhane, on everyone’s favourite slogan ‘AI for Good’; and Usha Ramanathan, on ‘AI and development’ in the context of the Aadhaar project in India.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>More on <a href="https://en.wikipedia.org/wiki/Usha_Ramanathan">Usha Ramanathan</a> — legal researcher and human rights activist</li><li>More on <a href="https://abebabirhane.com/">Abeba Birhane</a> — principal investigator at the <a href="https://aial.ie/">AI Accountability Lab</a> at Trinity College Dublin</li><li>More on <a href="https://en.wikipedia.org/wiki/Meredith_Whittaker">Meredith Whittaker</a> — President of Signal</li><li>More on <a href="https://en.wikipedia.org/wiki/Audrey_Tang">Audrey Tang</a> — Taiwan’s first Digital Minister</li><li><a href="https://arxiv.org/html/2512.16856v1">Distributional AGI Safety</a> — by Nenad Tomašev et al</li><li><a href="https://www.youtube.com/watch?v=uclzNK5MFvk&amp;list=PLUe4RjfOLVwWquUJV6xkkCo8ftU5sDWhi">Watch this week’s interviews in full on YouTube</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><strong>Pre Production by </strong><a href="https://georgiaiacovou.com/"><strong>Georgia Iacovou</strong></a><strong> | Post Production by </strong><a href="http://podcasts.london/"><strong>Sarah Myles</strong></a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/f5112e0e/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Lingo Bingo at the India AI Summit w/ Karen Hao, Joan Kinyua, Chenai Chair, and Rafael Grohmann</title>
      <itunes:episode>104</itunes:episode>
      <podcast:episode>104</podcast:episode>
      <itunes:title>Lingo Bingo at the India AI Summit w/ Karen Hao, Joan Kinyua, Chenai Chair, and Rafael Grohmann</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a9f46750-c781-4aca-bd8a-1376ded80736</guid>
      <link>https://share.transistor.fm/s/f3d846bd</link>
      <description>
        <![CDATA[<p>The AI Impact Summit in India is just a couple of days away and we are ready to drown in vague terms that kinda describe AI, and definitely obscure power. Let’s talk about how to reframe those terms…</p><p><strong>More like this: The Vaporstate: All Hail Scale at the India AI Summit</strong><br><br></p><p>We’ve partnered with the AI Now Institute and Aapti Institute to conduct twelve interviews based around the biggest and baddest terms we feel have been co-opted by global summits such as this one. This week we have Karen Hao discussing what it means to be ‘data rich’; Rafael Grohmann on the word ‘sovereignty’ and how it has a hundred definitions; Joan Kinyua on ‘human capital’, a key part of any AI development supply chain; and Chenai Chair, who will discuss ‘linguistic diversity’ — what it is, and what it isn’t.</p><p>These are just the best parts of the interviews — if you want to go deep and see each of these interviews in full, <a href="https://www.youtube.com/@TheMaybeMedia">head to our YouTube channel now</a>.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>More about <a href="https://fair.work/en/fw/about/people/rafael-grohmann/">Rafael Grohmann</a> — Assistant Professor of Media Studies with a focus on Critical Platform and Data Studies at the <a href="https://www.utsc.utoronto.ca/acm/rafael-grohmann">University of Toronto</a></li><li>More about <a href="https://karendhao.com/">Karen Hao</a> — investigative journalist and author of <a href="https://www.penguinrandomhouse.com/books/743569/empire-of-ai-by-karen-hao/">Empire of AI</a></li><li>More about <a href="https://chenai.africa/">Chenai Chair</a> — director of the <a href="https://www.masakhane.io/masakhane-african-languages-hub">Masakhane African Languages Hub</a></li><li>More about <a href="https://data-workers.org/DLA/">Joan Kinyua</a> — president of the <a href="https://datalabelers.org/">Data Labellers Association</a></li><li>More on the <a href="https://commission.europa.eu/business-economy-euro/doing-business-eu/sustainability-due-diligence-responsible-business/corporate-sustainability-due-diligence_en">Due Diligence Act</a></li><li><a href="https://www.ey.com/en_gl/technical/tax-alerts/kenya-enacts-business-laws-amendment-act-2024">More about the amendment to the Business Laws Act 2024</a></li><li><a href="https://journals.sagepub.com/doi/abs/10.1177/1461444819865984">What does the notion of “sovereignty” mean when referring to the digital?</a> — Stephane Couture and Sophie Toupin</li><li><a href="https://transfeministech.codingrights.org/">Buy The Oracle for Transfeminist Technologies</a> by Sasha Costanza-Chock, Joana Varon, and Clara Juliano</li><li>Watch this week’s interviews in full on YouTube (link to playlist of interviews)</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p>Post Production by <a href="http://podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>The AI Impact Summit in India is just a couple of days away and we are ready to drown in vague terms that kinda describe AI, and definitely obscure power. Let’s talk about how to reframe those terms…</p><p><strong>More like this: The Vaporstate: All Hail Scale at the India AI Summit</strong><br><br></p><p>We’ve partnered with the AI Now Institute and Aapti Institute to conduct twelve interviews based around the biggest and baddest terms we feel have been co-opted by global summits such as this one. This week we have Karen Hao discussing what it means to be ‘data rich’; Rafael Grohmann on the word ‘sovereignty’ and how it has a hundred definitions; Joan Kinyua on ‘human capital’, a key part of any AI development supply chain; and Chenai Chair, who will discuss ‘linguistic diversity’ — what it is, and what it isn’t.</p><p>These are just the best parts of the interviews — if you want to go deep and see each of these interviews in full, <a href="https://www.youtube.com/@TheMaybeMedia">head to our YouTube channel now</a>.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>More about <a href="https://fair.work/en/fw/about/people/rafael-grohmann/">Rafael Grohmann</a> — Assistant Professor of Media Studies with a focus on Critical Platform and Data Studies at the <a href="https://www.utsc.utoronto.ca/acm/rafael-grohmann">University of Toronto</a></li><li>More about <a href="https://karendhao.com/">Karen Hao</a> — investigative journalist and author of <a href="https://www.penguinrandomhouse.com/books/743569/empire-of-ai-by-karen-hao/">Empire of AI</a></li><li>More about <a href="https://chenai.africa/">Chenai Chair</a> — director of the <a href="https://www.masakhane.io/masakhane-african-languages-hub">Masakhane African Languages Hub</a></li><li>More about <a href="https://data-workers.org/DLA/">Joan Kinyua</a> — president of the <a href="https://datalabelers.org/">Data Labellers Association</a></li><li>More on the <a href="https://commission.europa.eu/business-economy-euro/doing-business-eu/sustainability-due-diligence-responsible-business/corporate-sustainability-due-diligence_en">Due Diligence Act</a></li><li><a href="https://www.ey.com/en_gl/technical/tax-alerts/kenya-enacts-business-laws-amendment-act-2024">More about the amendment to the Business Laws Act 2024</a></li><li><a href="https://journals.sagepub.com/doi/abs/10.1177/1461444819865984">What does the notion of “sovereignty” mean when referring to the digital?</a> — Stephane Couture and Sophie Toupin</li><li><a href="https://transfeministech.codingrights.org/">Buy The Oracle for Transfeminist Technologies</a> by Sasha Costanza-Chock, Joana Varon, and Clara Juliano</li><li>Watch this week’s interviews in full on YouTube (link to playlist of interviews)</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p>Post Production by <a href="http://podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </content:encoded>
      <pubDate>Fri, 13 Feb 2026 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/f3d846bd/9732afb0.mp3" length="64197034" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/4ftgnU_8abHcAbotbbSEgnqOAqRbYwajnTtXrOl1mDc/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8wNTQ3/N2Q4OGIyOGNjMDg5/ZDJmYWYzMDUwN2Iz/NDI2YS5wbmc.jpg"/>
      <itunes:duration>2672</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>The AI Impact Summit in India is just a couple of days away and we are ready to drown in vague terms that kinda describe AI, and definitely obscure power. Let’s talk about how to reframe those terms…</p><p><strong>More like this: The Vaporstate: All Hail Scale at the India AI Summit</strong><br><br></p><p>We’ve partnered with the AI Now Institute and Aapti Institute to conduct twelve interviews based around the biggest and baddest terms we feel have been co-opted by global summits such as this one. This week we have Karen Hao discussing what it means to be ‘data rich’; Rafael Grohmann on the word ‘sovereignty’ and how it has a hundred definitions; Joan Kinyua on ‘human capital’, a key part of any AI development supply chain; and Chenai Chair, who will discuss ‘linguistic diversity’ — what it is, and what it isn’t.</p><p>These are just the best parts of the interviews — if you want to go deep and see each of these interviews in full, <a href="https://www.youtube.com/@TheMaybeMedia">head to our YouTube channel now</a>.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>More about <a href="https://fair.work/en/fw/about/people/rafael-grohmann/">Rafael Grohmann</a> — Assistant Professor of Media Studies with a focus on Critical Platform and Data Studies at the <a href="https://www.utsc.utoronto.ca/acm/rafael-grohmann">University of Toronto</a></li><li>More about <a href="https://karendhao.com/">Karen Hao</a> — investigative journalist and author of <a href="https://www.penguinrandomhouse.com/books/743569/empire-of-ai-by-karen-hao/">Empire of AI</a></li><li>More about <a href="https://chenai.africa/">Chenai Chair</a> — director of the <a href="https://www.masakhane.io/masakhane-african-languages-hub">Masakhane African Languages Hub</a></li><li>More about <a href="https://data-workers.org/DLA/">Joan Kinyua</a> — president of the <a href="https://datalabelers.org/">Data Labellers Association</a></li><li>More on the <a href="https://commission.europa.eu/business-economy-euro/doing-business-eu/sustainability-due-diligence-responsible-business/corporate-sustainability-due-diligence_en">Due Diligence Act</a></li><li><a href="https://www.ey.com/en_gl/technical/tax-alerts/kenya-enacts-business-laws-amendment-act-2024">More about the amendment to the Business Laws Act 2024</a></li><li><a href="https://journals.sagepub.com/doi/abs/10.1177/1461444819865984">What does the notion of “sovereignty” mean when referring to the digital?</a> — Stephane Couture and Sophie Toupin</li><li><a href="https://transfeministech.codingrights.org/">Buy The Oracle for Transfeminist Technologies</a> by Sasha Costanza-Chock, Joana Varon, and Clara Juliano</li><li>Watch this week’s interviews in full on YouTube (link to playlist of interviews)</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p>Post Production by <a href="http://podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/f3d846bd/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>The Vaporstate: All Hail Scale at the India AI Summit</title>
      <itunes:episode>103</itunes:episode>
      <podcast:episode>103</podcast:episode>
      <itunes:title>The Vaporstate: All Hail Scale at the India AI Summit</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">20511887-fea0-4313-8523-71b2f5e9e3ee</guid>
      <link>https://share.transistor.fm/s/c5325235</link>
      <description>
        <![CDATA[<p>In The Vaporstate, we have traveled to Brazil, India, and the UK. But what does this look like as a global movement of nations and companies evangelising technology as the key to solving all problems, everywhere?</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/live-show-paris-post-mortem"><strong>Paris Post-Mortem (live)</strong></a><strong><br></strong><br></p><p>For our final instalment of The Vaporstate, Alix is joined by Astha Kapoor and Amba Kak to reflect on the series, and discuss the upcoming AI Action Summit in India. This is the first time this summit is being hosted by a global majority country — will this create new opportunities for civil society to have a say, or is this just yet another chance for tech companies to whisper magic AI spells into the ear of government?</p><p>The end of The Vaporstate series marks the beginning of another series, made in partnership with AI Now and Aapti Institute: in the run-up to the AI Summit, we want to rethink the terms that have been co-opted by government and industry. Terms like ‘sovereignty’, ‘AI for good’, and ‘human capital’. We interviewed twelve experts who unpack how these terms are framed in global summits like this one — watch this space for conversations with Naomi Klein, Meredith Whittaker, and Karen Hao, to name a few.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.youtube.com/watch?v=btqHDhO4h10">Mark Carney’s speech at Davos, January 2026</a></li><li><a href="https://www.youtube.com/watch?v=uclzNK5MFvk&amp;list=PLUe4RjfOLVwWquUJV6xkkCo8ftU5sDWhi">Watch the first batch of interviews discussing co-opted terms used in and around the upcoming summit</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In The Vaporstate, we have traveled to Brazil, India, and the UK. But what does this look like as a global movement of nations and companies evangelising technology as the key to solving all problems, everywhere?</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/live-show-paris-post-mortem"><strong>Paris Post-Mortem (live)</strong></a><strong><br></strong><br></p><p>For our final instalment of The Vaporstate, Alix is joined by Astha Kapoor and Amba Kak to reflect on the series, and discuss the upcoming AI Action Summit in India. This is the first time this summit is being hosted by a global majority country — will this create new opportunities for civil society to have a say, or is this just yet another chance for tech companies to whisper magic AI spells into the ear of government?</p><p>The end of The Vaporstate series marks the beginning of another series, made in partnership with AI Now and Aapti Institute: in the run-up to the AI Summit, we want to rethink the terms that have been co-opted by government and industry. Terms like ‘sovereignty’, ‘AI for good’, and ‘human capital’. We interviewed twelve experts who unpack how these terms are framed in global summits like this one — watch this space for conversations with Naomi Klein, Meredith Whittaker, and Karen Hao, to name a few.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.youtube.com/watch?v=btqHDhO4h10">Mark Carney’s speech at Davos, January 2026</a></li><li><a href="https://www.youtube.com/watch?v=uclzNK5MFvk&amp;list=PLUe4RjfOLVwWquUJV6xkkCo8ftU5sDWhi">Watch the first batch of interviews discussing co-opted terms used in and around the upcoming summit</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </content:encoded>
      <pubDate>Fri, 06 Feb 2026 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/c5325235/a53bfa74.mp3" length="68961704" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/p57cRw6HfAf5l1Mu67Xhuz5sQxVTBmgjYaBM0SpSYqs/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9jNTA4/YWVmYWY4ZWY1NWIx/YWVlODlmOWU2ZTJm/M2JiZi5wbmc.jpg"/>
      <itunes:duration>2871</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In The Vaporstate, we have traveled to Brazil, India, and the UK. But what does this look like as a global movement of nations and companies evangelising technology as the key to solving all problems, everywhere?</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/live-show-paris-post-mortem"><strong>Paris Post-Mortem (live)</strong></a><strong><br></strong><br></p><p>For our final instalment of The Vaporstate, Alix is joined by Astha Kapoor and Amba Kak to reflect on the series, and discuss the upcoming AI Action Summit in India. This is the first time this summit is being hosted by a global majority country — will this create new opportunities for civil society to have a say, or is this just yet another chance for tech companies to whisper magic AI spells into the ear of government?</p><p>The end of The Vaporstate series marks the beginning of another series, made in partnership with AI Now and Aapti Institute: in the run-up to the AI Summit, we want to rethink the terms that have been co-opted by government and industry. Terms like ‘sovereignty’, ‘AI for good’, and ‘human capital’. We interviewed twelve experts who unpack how these terms are framed in global summits like this one — watch this space for conversations with Naomi Klein, Meredith Whittaker, and Karen Hao, to name a few.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.youtube.com/watch?v=btqHDhO4h10">Mark Carney’s speech at Davos, January 2026</a></li><li><a href="https://www.youtube.com/watch?v=uclzNK5MFvk&amp;list=PLUe4RjfOLVwWquUJV6xkkCo8ftU5sDWhi">Watch the first batch of interviews discussing co-opted terms used in and around the upcoming summit</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/c5325235/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>The Vaporstate: Buy Blair, Sell AI</title>
      <itunes:episode>102</itunes:episode>
      <podcast:episode>102</podcast:episode>
      <itunes:title>The Vaporstate: Buy Blair, Sell AI</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">3f3a239d-963b-4cc0-b2a9-9a676088d303</guid>
      <link>https://share.transistor.fm/s/19b60dd4</link>
      <description>
        <![CDATA[<p>What does US tech billionaire Larry Ellison get when he gives the Tony Blair Institute hundreds of millions of dollars?</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/the-vaporstate-id-in-india"><strong>The Vaporstate: ID in India</strong></a><strong><br></strong><br></p><p>In our third installment of The Vaporstate, we are joined by two journalists from Lighthouse Reports, who tell all about their investigation into the questionable relationship between Oracle founder Larry Ellison, the Tony Blair Institute, and the current Labour government. What is the Tony Blair Institute, and why did Ellison give them millions of dollars? What does any of this have to do with national IDs and NHS data? And if you’re a government official somewhere around the world and TBI comes knocking to sell you an AI future, what should you do…?</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.thetimes.com/uk/politics/article/britain-must-treat-tech-giants-like-nation-states-minister-warns-ktmm5vmc9?gaa_at=eafs&amp;gaa_n=AWEtsqfCMN2BLEIuX36Az9cIpAIUcGE-WAkVYZIP1XG4Lvq6mwGk8HfryKfqAXf7UKk%3D&amp;gaa_ts=6936fd24&amp;gaa_sig=-DAz63J-5vWfMhxKVTYQGYR3mk4TYecMWXlDqz_8v2SsOZfff70qaXSSew1VX9q7DqswqfyRFdrJTPGEpJnfew%3D%3D">Britain must treat tech giants like nation states</a> — The Times, 2024</li><li><a href="https://www.lighthousereports.com/backlight/">Backlight</a> — Lighthouse’s monthly podcast</li><li><a href="https://www.lighthousereports.com/investigation/blair-and-the-billionaire/">Blair and the Billionaire</a> — Lighthouse Reports, 2025</li><li><a href="https://www.youtube.com/watch?v=QwCsQYUFuEE&amp;t=1017s">Inside The Tony Blair Institute</a> — Lighthouse journalists Peter Geoghegan and May Bulman are interviewed by The New Statesman</li><li>Do you have a story that you think is missing from public discourse? <a href="https://www.lighthousereports.com/got-a-story/">Here’s how to get in touch with Lighthouse</a></li><li><a href="https://csays.notion.site/Questions-proposed-by-Alix-for-Sundance-panel-before-they-removed-her-from-the-panel-2f76ec2423b68000b6e6dc03b1f8e0ed?pvs=74">Questions Alix proposed for the Sundance panel</a> — as mentioned in the intro.</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>What does US tech billionaire Larry Ellison get when he gives the Tony Blair Institute hundreds of millions of dollars?</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/the-vaporstate-id-in-india"><strong>The Vaporstate: ID in India</strong></a><strong><br></strong><br></p><p>In our third installment of The Vaporstate, we are joined by two journalists from Lighthouse Reports, who tell all about their investigation into the questionable relationship between Oracle founder Larry Ellison, the Tony Blair Institute, and the current Labour government. What is the Tony Blair Institute, and why did Ellison give them millions of dollars? What does any of this have to do with national IDs and NHS data? And if you’re a government official somewhere around the world and TBI comes knocking to sell you an AI future, what should you do…?</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.thetimes.com/uk/politics/article/britain-must-treat-tech-giants-like-nation-states-minister-warns-ktmm5vmc9?gaa_at=eafs&amp;gaa_n=AWEtsqfCMN2BLEIuX36Az9cIpAIUcGE-WAkVYZIP1XG4Lvq6mwGk8HfryKfqAXf7UKk%3D&amp;gaa_ts=6936fd24&amp;gaa_sig=-DAz63J-5vWfMhxKVTYQGYR3mk4TYecMWXlDqz_8v2SsOZfff70qaXSSew1VX9q7DqswqfyRFdrJTPGEpJnfew%3D%3D">Britain must treat tech giants like nation states</a> — The Times, 2024</li><li><a href="https://www.lighthousereports.com/backlight/">Backlight</a> — Lighthouse’s monthly podcast</li><li><a href="https://www.lighthousereports.com/investigation/blair-and-the-billionaire/">Blair and the Billionaire</a> — Lighthouse Reports, 2025</li><li><a href="https://www.youtube.com/watch?v=QwCsQYUFuEE&amp;t=1017s">Inside The Tony Blair Institute</a> — Lighthouse journalists Peter Geoghegan and May Bulman are interviewed by The New Statesman</li><li>Do you have a story that you think is missing from public discourse? <a href="https://www.lighthousereports.com/got-a-story/">Here’s how to get in touch with Lighthouse</a></li><li><a href="https://csays.notion.site/Questions-proposed-by-Alix-for-Sundance-panel-before-they-removed-her-from-the-panel-2f76ec2423b68000b6e6dc03b1f8e0ed?pvs=74">Questions Alix proposed for the Sundance panel</a> — as mentioned in the intro.</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </content:encoded>
      <pubDate>Fri, 30 Jan 2026 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/19b60dd4/c4c30c18.mp3" length="64873749" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/RN7nuyKhGPGx-VpO0XrLPToqw7KWQai7C7G4KcPo6S0/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82MDA4/M2VjNzg4MTJlMjdl/YzQzMzFiNDdlYWZj/ZjI5YS5wbmc.jpg"/>
      <itunes:duration>2701</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>What does US tech billionaire Larry Ellison get when he gives the Tony Blair Institute hundreds of millions of dollars?</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/the-vaporstate-id-in-india"><strong>The Vaporstate: ID in India</strong></a><strong><br></strong><br></p><p>In our third installment of The Vaporstate, we are joined by two journalists from Lighthouse Reports, who tell all about their investigation into the questionable relationship between Oracle founder Larry Ellison, the Tony Blair Institute, and the current Labour government. What is the Tony Blair Institute, and why did Ellison give them millions of dollars? What does any of this have to do with national IDs and NHS data? And if you’re a government official somewhere around the world and TBI comes knocking to sell you an AI future, what should you do…?</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.thetimes.com/uk/politics/article/britain-must-treat-tech-giants-like-nation-states-minister-warns-ktmm5vmc9?gaa_at=eafs&amp;gaa_n=AWEtsqfCMN2BLEIuX36Az9cIpAIUcGE-WAkVYZIP1XG4Lvq6mwGk8HfryKfqAXf7UKk%3D&amp;gaa_ts=6936fd24&amp;gaa_sig=-DAz63J-5vWfMhxKVTYQGYR3mk4TYecMWXlDqz_8v2SsOZfff70qaXSSew1VX9q7DqswqfyRFdrJTPGEpJnfew%3D%3D">Britain must treat tech giants like nation states</a> — The Times, 2024</li><li><a href="https://www.lighthousereports.com/backlight/">Backlight</a> — Lighthouse’s monthly podcast</li><li><a href="https://www.lighthousereports.com/investigation/blair-and-the-billionaire/">Blair and the Billionaire</a> — Lighthouse Reports, 2025</li><li><a href="https://www.youtube.com/watch?v=QwCsQYUFuEE&amp;t=1017s">Inside The Tony Blair Institute</a> — Lighthouse journalists Peter Geoghegan and May Bulman are interviewed by The New Statesman</li><li>Do you have a story that you think is missing from public discourse? <a href="https://www.lighthousereports.com/got-a-story/">Here’s how to get in touch with Lighthouse</a></li><li><a href="https://csays.notion.site/Questions-proposed-by-Alix-for-Sundance-panel-before-they-removed-her-from-the-panel-2f76ec2423b68000b6e6dc03b1f8e0ed?pvs=74">Questions Alix proposed for the Sundance panel</a> — as mentioned in the intro.</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/19b60dd4/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>The Vaporstate: Brazil is Banking on Apps</title>
      <itunes:episode>101</itunes:episode>
      <podcast:episode>101</podcast:episode>
      <itunes:title>The Vaporstate: Brazil is Banking on Apps</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">9271e01e-9e8d-423e-a3b2-5f4ae317ce01</guid>
      <link>https://share.transistor.fm/s/7f1e454d</link>
      <description>
        <![CDATA[<p>What happens when the very way you prove who you are is stolen by someone else? And what happens when a country stands up to Meta and builds its own payment system?</p><p><strong>More like this: The Vaporstate: ID in India<br></strong><br></p><p>For episode two of The Vaporstate, Alix is joined by Rafael Zanatta and Luã Cruz. Rafa walks us through the incredible story of how his mom’s digital ID was stolen, and how a clever bank teller stopped someone halfway across the country from stealing her savings. Luã shares the geopolitical battles that prevented a Meta takeover of Brazilian peer-to-peer payments, and the rollout of a frictionless financial transaction system called PIX.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.forbes.com/sites/angelicamarideoliveira/2024/03/04/brazil-has-the-worlds-most-accessed-online-citizen-services-platform/">Brazil Has The World’s Most Accessed Citizen Services Platform</a> — Forbes, March 2024</li><li>More about <a href="https://www.dataprivacybr.org/en/">Data Privacy Brazil</a></li><li><a href="https://www.nytimes.com/2018/02/16/opinion/sunday/tyranny-convenience.html">The Tyranny of Convenience</a> by Tim Wu — New York Times, 2018</li><li><a href="https://en.wikipedia.org/wiki/Byung-Chul_Han#Works_in_English">Works by South Korean philosopher Byung-Chul Han</a></li><li><a href="https://apnews.com/article/brazil-bolsonaro-hospital-appointment-house-arrest-conviction-64c625ab82f5722ea114a0b508332125">Lula pushing back against Trump’s tariffs</a> — AP, September 2025</li><li>More about <a href="https://www.consumersinternational.org/members/members/instituto-brasileiro-de-defesa-do-consumidor-idec/">The Brazilian Institute for Consumer Protection</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>What happens when the very way you prove who you are is stolen by someone else? And what happens when a country stands up to Meta and builds its own payment system?</p><p><strong>More like this: The Vaporstate: ID in India<br></strong><br></p><p>For episode two of The Vaporstate, Alix is joined by Rafael Zanatta and Luã Cruz. Rafa walks us through the incredible story of how his mom’s digital ID was stolen, and how a clever bank teller stopped someone halfway across the country from stealing her savings. Luã shares the geopolitical battles that prevented a Meta takeover of Brazilian peer-to-peer payments, and the rollout of a frictionless financial transaction system called PIX.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.forbes.com/sites/angelicamarideoliveira/2024/03/04/brazil-has-the-worlds-most-accessed-online-citizen-services-platform/">Brazil Has The World’s Most Accessed Citizen Services Platform</a> — Forbes, March 2024</li><li>More about <a href="https://www.dataprivacybr.org/en/">Data Privacy Brazil</a></li><li><a href="https://www.nytimes.com/2018/02/16/opinion/sunday/tyranny-convenience.html">The Tyranny of Convenience</a> by Tim Wu — New York Times, 2018</li><li><a href="https://en.wikipedia.org/wiki/Byung-Chul_Han#Works_in_English">Works by South Korean philosopher Byung-Chul Han</a></li><li><a href="https://apnews.com/article/brazil-bolsonaro-hospital-appointment-house-arrest-conviction-64c625ab82f5722ea114a0b508332125">Lula pushing back against Trump’s tariffs</a> — AP, September 2025</li><li>More about <a href="https://www.consumersinternational.org/members/members/instituto-brasileiro-de-defesa-do-consumidor-idec/">The Brazilian Institute for Consumer Protection</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </content:encoded>
      <pubDate>Fri, 23 Jan 2026 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/7f1e454d/193c2e90.mp3" length="77869050" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/dzV_UoALr2XFEjukmDokFCbb9lyRgtJ8faxVS7ZMSrQ/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9jYmIx/ODNkNzcxMTI1ZTY5/MGU3OThlODNjMzJh/YzU0Ny5wbmc.jpg"/>
      <itunes:duration>3241</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>What happens when the very way you prove who you are is stolen by someone else? And what happens when a country stands up to Meta and builds its own payment system?</p><p><strong>More like this: The Vaporstate: ID in India<br></strong><br></p><p>For episode two of The Vaporstate, Alix is joined by Rafael Zanatta and Luã Cruz. Rafa walks us through the incredible story of how his mom’s digital ID was stolen, and how a clever bank teller stopped someone halfway across the country from stealing her savings. Luã shares the geopolitical battles that prevented a Meta takeover of Brazilian peer-to-peer payments, and the rollout of a frictionless financial transaction system called PIX.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.forbes.com/sites/angelicamarideoliveira/2024/03/04/brazil-has-the-worlds-most-accessed-online-citizen-services-platform/">Brazil Has The World’s Most Accessed Citizen Services Platform</a> — Forbes, March 2024</li><li>More about <a href="https://www.dataprivacybr.org/en/">Data Privacy Brazil</a></li><li><a href="https://www.nytimes.com/2018/02/16/opinion/sunday/tyranny-convenience.html">The Tyranny of Convenience</a> by Tim Wu — New York Times, 2018</li><li><a href="https://en.wikipedia.org/wiki/Byung-Chul_Han#Works_in_English">Works by South Korean philosopher Byung-Chul Han</a></li><li><a href="https://apnews.com/article/brazil-bolsonaro-hospital-appointment-house-arrest-conviction-64c625ab82f5722ea114a0b508332125">Lula pushing back against Trump’s tariffs</a> — AP, September 2025</li><li>More about <a href="https://www.consumersinternational.org/members/members/instituto-brasileiro-de-defesa-do-consumidor-idec/">The Brazilian Institute for Consumer Protection</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/7f1e454d/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>The Vaporstate: ID in India</title>
      <itunes:episode>100</itunes:episode>
      <podcast:episode>100</podcast:episode>
      <itunes:title>The Vaporstate: ID in India</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">5de888e7-e3b4-42d4-940e-75d4ab7909da</guid>
      <link>https://share.transistor.fm/s/85bcd6f5</link>
      <description>
        <![CDATA[<p>Our first exploration of The Vaporstate takes us to India, home of Aadhaar: a mammoth digitisation project that charts a path from technical solution for public service delivery, through mission creep and popular opposition, to a knotty but inescapable part of Indian existence today.</p><p><strong>More like this: </strong><a href="https://themaybe.org/podcast/is-digitisation-killing-democracy-w-marietje-schaake"><strong>Is Digitisation Killing Democracy? w/ Marietje Schaake</strong></a><strong><br></strong><br></p><p>Joining Alix for part one of The Vaporstate are Mila Samdub, Astha Kapoor, and Usha Ramanathan. Together they discuss the conception of Aadhaar, India’s key piece of digital public infrastructure, and how it morphed from a simple digital ID to something that unifies payments, phone plans, and biometrics.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>More about <a href="https://en.wikipedia.org/wiki/Usha_Ramanathan">Usha Ramanathan</a> — legendary lawyer and activist who has been pushing back on the Aadhaar programme for over a decade</li><li>More about <a href="https://ostromworkshop.indiana.edu/about/collaborators/kapoor-astha.html">Astha Kapoor</a> — co-founder of the Aapti Institute</li><li>More about <a href="https://mila.zone/">Mila Samdub</a> — designer, writer, and <a href="https://cyberbrics.info/">CyberBRICS Fellow</a> at the Center for Internet and Society, FGV Rio</li><li><a href="https://www.nature.com/articles/s41586-025-08972-6">Computer-vision research powers surveillance technology</a> — Abeba Birhane et al., Nature</li><li><a href="https://www.youtube.com/watch?v=uiBgGNZ42to">Aadhaar 2.0 workshop</a></li><li><a href="https://timesofindia.indiatimes.com/business/india-business/walmart-takes-ownership-of-phonepe-from-flipkart/articleshow/96468195.cms">Walmart Takes Ownership of PhonePe from Flipkart</a> — The Times of India, 2022</li><li><a href="https://techcrunch.com/2023/03/16/walmart-invests-200-million-in-indian-mobile-payments-giant-phonepe/">Walmart invests $200 million in Indian mobile payments giant PhonePe</a> — TechCrunch, 2023</li><li><a href="https://www.bbc.co.uk/news/technology-41306312">Google launches India mobile payments app Tez</a> — BBC, 2017</li><li><a href="https://www.nber.org/system/files/working_papers/w26744/w26744.pdf">Identity Verification Standards in Welfare Programs: Experimental Evidence from India</a> — Karthik Muralidharan et al., National Bureau of Economic Research, 2021</li><li><a href="https://www.epw.in/journal/2024/19/insight/aadhaar-costs-digital-red-tape.html">Aadhaar: Costs of Digital Red Tape</a> — Reetika Khera &amp; Amod Moharil, Economic &amp; Political Weekly, 2024</li><li><a href="https://networkcultures.org/blog/publication/overload-creep-excess-an-internet-from-india/">Overload, Creep, Excess – An Internet from India</a> — Nafis Hasan et al., Institute of Network Cultures, 2022</li><li><a href="https://shankkaraiyar.com/books/aadhaar/">Aadhaar: A Biometric History of India's 12 Digit Revolution</a> — Shankkar Aiyar, Westland, 2017</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Our first exploration of The Vaporstate takes us to India, home of Aadhaar: a mammoth digitisation project that charts a path from technical solution for public service delivery, through mission creep and popular opposition, to a knotty but inescapable part of Indian existence today.</p><p><strong>More like this: </strong><a href="https://themaybe.org/podcast/is-digitisation-killing-democracy-w-marietje-schaake"><strong>Is Digitisation Killing Democracy? w/ Marietje Schaake</strong></a><strong><br></strong><br></p><p>Joining Alix for part one of The Vaporstate are Mila Samdub, Astha Kapoor, and Usha Ramanathan. Together they discuss the conception of Aadhaar, India’s key piece of digital public infrastructure, and how it morphed from a simple digital ID to something that unifies payments, phone plans, and biometrics.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>More about <a href="https://en.wikipedia.org/wiki/Usha_Ramanathan">Usha Ramanathan</a> — legendary lawyer and activist who has been pushing back on the Aadhaar programme for over a decade</li><li>More about <a href="https://ostromworkshop.indiana.edu/about/collaborators/kapoor-astha.html">Astha Kapoor</a> — co-founder of the Aapti Institute</li><li>More about <a href="https://mila.zone/">Mila Samdub</a> — designer, writer, and <a href="https://cyberbrics.info/">CyberBRICS Fellow</a> at the Center for Internet and Society, FGV Rio</li><li><a href="https://www.nature.com/articles/s41586-025-08972-6">Computer-vision research powers surveillance technology</a> — Abeba Birhane et al., Nature</li><li><a href="https://www.youtube.com/watch?v=uiBgGNZ42to">Aadhaar 2.0 workshop</a></li><li><a href="https://timesofindia.indiatimes.com/business/india-business/walmart-takes-ownership-of-phonepe-from-flipkart/articleshow/96468195.cms">Walmart Takes Ownership of PhonePe from Flipkart</a> — The Times of India, 2022</li><li><a href="https://techcrunch.com/2023/03/16/walmart-invests-200-million-in-indian-mobile-payments-giant-phonepe/">Walmart invests $200 million in Indian mobile payments giant PhonePe</a> — TechCrunch, 2023</li><li><a href="https://www.bbc.co.uk/news/technology-41306312">Google launches India mobile payments app Tez</a> — BBC, 2017</li><li><a href="https://www.nber.org/system/files/working_papers/w26744/w26744.pdf">Identity Verification Standards in Welfare Programs: Experimental Evidence from India</a> — Karthik Muralidharan et al., National Bureau of Economic Research, 2021</li><li><a href="https://www.epw.in/journal/2024/19/insight/aadhaar-costs-digital-red-tape.html">Aadhaar: Costs of Digital Red Tape</a> — Reetika Khera &amp; Amod Moharil, Economic &amp; Political Weekly, 2024</li><li><a href="https://networkcultures.org/blog/publication/overload-creep-excess-an-internet-from-india/">Overload, Creep, Excess – An Internet from India</a> — Nafis Hasan et al., Institute of Network Cultures, 2022</li><li><a href="https://shankkaraiyar.com/books/aadhaar/">Aadhaar: A Biometric History of India's 12 Digit Revolution</a> — Shankkar Aiyar, Westland, 2017</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </content:encoded>
      <pubDate>Fri, 16 Jan 2026 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/85bcd6f5/cd056cd2.mp3" length="71408569" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/1PBCYpvwEiohoRQ81IS9Rp4jO4aPbOOBV1whYVLod6w/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS85NjBk/ZmJiMTQ4YjI3OTRm/NDllYzRhZTNmM2E2/YzA1Mi5wbmc.jpg"/>
      <itunes:duration>3029</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Our first exploration of The Vaporstate takes us to India, home of Aadhaar: a mammoth digitisation project that charts a path from technical solution for public service delivery, through mission creep and popular opposition, to a knotty but inescapable part of Indian existence today.</p><p><strong>More like this: </strong><a href="https://themaybe.org/podcast/is-digitisation-killing-democracy-w-marietje-schaake"><strong>Is Digitisation Killing Democracy? w/ Marietje Schaake</strong></a><strong><br></strong><br></p><p>Joining Alix for part one of The Vaporstate are Mila Samdub, Astha Kapoor, and Usha Ramanathan. Together they discuss the conception of Aadhaar, India’s key piece of digital public infrastructure, and how it morphed from a simple digital ID to something that unifies payments, phone plans, and biometrics.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>More about <a href="https://en.wikipedia.org/wiki/Usha_Ramanathan">Usha Ramanathan</a> — legendary lawyer and activist who has been pushing back on the Aadhaar programme for over a decade</li><li>More about <a href="https://ostromworkshop.indiana.edu/about/collaborators/kapoor-astha.html">Astha Kapoor</a> — co-founder of the Aapti Institute</li><li>More about <a href="https://mila.zone/">Mila Samdub</a> — designer, writer, and <a href="https://cyberbrics.info/">CyberBRICS Fellow</a> at the Center for Internet and Society, FGV Rio</li><li><a href="https://www.nature.com/articles/s41586-025-08972-6">Computer-vision research powers surveillance technology</a> — Abeba Birhane et al, Nature</li><li><a href="https://www.youtube.com/watch?v=uiBgGNZ42to">Aadhaar 2.0 workshop</a></li><li><a href="https://timesofindia.indiatimes.com/business/india-business/walmart-takes-ownership-of-phonepe-from-flipkart/articleshow/96468195.cms">Walmart Takes Ownership of PhonePe from Flipkart</a> — The Times of India, 2022</li><li><a href="https://techcrunch.com/2023/03/16/walmart-invests-200-million-in-indian-mobile-payments-giant-phonepe/">Walmart invests $200 million in Indian mobile payments giant PhonePe</a> — TechCrunch, 2023</li><li><a href="https://www.bbc.co.uk/news/technology-41306312">Google launches India mobile payments app Tez</a> — BBC, 2017</li><li><a href="https://www.nber.org/system/files/working_papers/w26744/w26744.pdf">Identity Verification Standards in Welfare Programs: Experimental Evidence from India</a> — Karthik Muralidharan et al, National Bureau of Economic Research, 2021</li><li><a href="https://www.epw.in/journal/2024/19/insight/aadhaar-costs-digital-red-tape.html">Aadhaar: Costs of Digital Red Tape</a> — Reetika Khera &amp; Amod Moharil, Economic &amp; Political Weekly, 2024</li><li><a href="https://networkcultures.org/blog/publication/overload-creep-excess-an-internet-from-india/">Overload, Creep, Excess – An Internet from India</a> — Nafis Hasan et al, Institute of Network Cultures, 2022</li><li><a href="https://shankkaraiyar.com/books/aadhaar/">Aadhaar: A Biometric History of India's 12 Digit Revolution</a> — Shankkar Aiyar, Westland, 2017</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/85bcd6f5/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>The Vaporstate: A New Mini-Series</title>
      <itunes:episode>99</itunes:episode>
      <podcast:episode>99</podcast:episode>
      <itunes:title>The Vaporstate: A New Mini-Series</itunes:title>
      <itunes:episodeType>trailer</itunes:episodeType>
      <guid isPermaLink="false">88cb2330-3d21-4174-9724-22530d335727</guid>
      <link>https://share.transistor.fm/s/de7adda8</link>
      <description>
        <![CDATA[<p>This is The Vaporstate, a new series on the worldwide government bonanza of enthusiastic digitisation: Digital IDs, digital payment systems, massive data exchange platforms. What are the everyday impacts of these digitisation projects, and why now?</p><p>The Vaporstate is a deep exploration of digital public infrastructure: we will hear from the journalists, civil society groups, and lawyers from around the world who are watching these projects develop, and learn how this digital scaffolding shapes our lives.</p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is The Vaporstate, a new series on the worldwide government bonanza of enthusiastic digitisation: Digital IDs, digital payment systems, massive data exchange platforms. What are the everyday impacts of these digitisation projects, and why now?</p><p>The Vaporstate is a deep exploration of digital public infrastructure: we will hear from the journalists, civil society groups, and lawyers from around the world who are watching these projects develop, and learn how this digital scaffolding shapes our lives.</p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </content:encoded>
      <pubDate>Fri, 09 Jan 2026 00:05:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/de7adda8/cf98c490.mp3" length="3259884" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/QA1h3yD7dE0TZrUFyZeP2Uy5WoZ7xwCfA-8mgvD_iB8/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTcy/NWU1OTZlYjNlNzA5/MGFlMTEyMjQzNTQy/MjM4OS5wbmc.jpg"/>
      <itunes:duration>133</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is The Vaporstate, a new series on the worldwide government bonanza of enthusiastic digitisation: Digital IDs, digital payment systems, massive data exchange platforms. What are the everyday impacts of these digitisation projects, and why now?</p><p>The Vaporstate is a deep exploration of digital public infrastructure: we will hear from the journalists, civil society groups, and lawyers from around the world who are watching these projects develop, and learn how this digital scaffolding shapes our lives.</p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/de7adda8/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>The Age of Noise w/ Eryk Salvaggio (replay)</title>
      <itunes:episode>98</itunes:episode>
      <podcast:episode>98</podcast:episode>
      <itunes:title>The Age of Noise w/ Eryk Salvaggio (replay)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ec86f56d-fa87-434d-abf7-ce8963768d9d</guid>
      <link>https://share.transistor.fm/s/57a0e1c7</link>
      <description>
        <![CDATA[<p>Infinite AI slop means we are moving away from the age of information into what Eryk Salvaggio calls ‘the age of noise’.</p><p><strong>More like this: </strong><a href="https://themaybe.org/podcast/straight-to-video-from-rodney-king-to-sora-w-sam-gregory"><strong>Straight to Video: From Rodney King to Sora w/ Sam Gregory</strong></a><strong><br></strong><br></p><p>We’re replaying five deep conversations over the Christmas period for you to listen to on your travels and downtime — please enjoy!</p><p>What happens if you ask a generative AI image model to show you what Picasso’s work would have looked like if he lived in Japan in the 16th century? Would it produce something totally new, or just mash together stereotypical aesthetics from Picasso’s work, and 16th century Japan? Can generative AI really create anything new if it can only draw from existing imagery?</p><p>Further reading:</p><ul><li><a href="https://mail.cyberneticforests.com/what-i-read-about-ai-in-2025/">What I Read About AI in 2025</a> — by Eryk Salvaggio</li><li>Visit <a href="https://cyberneticforests.com/">Eryk’s Website</a></li><li><a href="https://mail.cyberneticforests.com/">Cybernetic Forests</a> — Eryk’s newsletter on tech and culture</li></ul><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Infinite AI slop means we are moving away from the age of information into what Eryk Salvaggio calls ‘the age of noise’.</p><p><strong>More like this: </strong><a href="https://themaybe.org/podcast/straight-to-video-from-rodney-king-to-sora-w-sam-gregory"><strong>Straight to Video: From Rodney King to Sora w/ Sam Gregory</strong></a><strong><br></strong><br></p><p>We’re replaying five deep conversations over the Christmas period for you to listen to on your travels and downtime — please enjoy!</p><p>What happens if you ask a generative AI image model to show you what Picasso’s work would have looked like if he lived in Japan in the 16th century? Would it produce something totally new, or just mash together stereotypical aesthetics from Picasso’s work, and 16th century Japan? Can generative AI really create anything new if it can only draw from existing imagery?</p><p>Further reading:</p><ul><li><a href="https://mail.cyberneticforests.com/what-i-read-about-ai-in-2025/">What I Read About AI in 2025</a> — by Eryk Salvaggio</li><li>Visit <a href="https://cyberneticforests.com/">Eryk’s Website</a></li><li><a href="https://mail.cyberneticforests.com/">Cybernetic Forests</a> — Eryk’s newsletter on tech and culture</li></ul><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </content:encoded>
      <pubDate>Fri, 09 Jan 2026 00:01:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/57a0e1c7/bf1baeea.mp3" length="70522590" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/TXP9mgfgAgV7mOYjLDXx6yWCZsgxO3cP6SKNYrMRArI/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iMWRm/ODQwMDUyYmEyNGZm/ZDlkYmM3NmM4NWI3/MzJhMS5wbmc.jpg"/>
      <itunes:duration>2936</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Infinite AI slop means we are moving away from the age of information into what Eryk Salvaggio calls ‘the age of noise’.</p><p><strong>More like this: </strong><a href="https://themaybe.org/podcast/straight-to-video-from-rodney-king-to-sora-w-sam-gregory"><strong>Straight to Video: From Rodney King to Sora w/ Sam Gregory</strong></a><strong><br></strong><br></p><p>We’re replaying five deep conversations over the Christmas period for you to listen to on your travels and downtime — please enjoy!</p><p>What happens if you ask a generative AI image model to show you what Picasso’s work would have looked like if he lived in Japan in the 16th century? Would it produce something totally new, or just mash together stereotypical aesthetics from Picasso’s work, and 16th century Japan? Can generative AI really create anything new if it can only draw from existing imagery?</p><p>Further reading:</p><ul><li><a href="https://mail.cyberneticforests.com/what-i-read-about-ai-in-2025/">What I Read About AI in 2025</a> — by Eryk Salvaggio</li><li>Visit <a href="https://cyberneticforests.com/">Eryk’s Website</a></li><li><a href="https://mail.cyberneticforests.com/">Cybernetic Forests</a> — Eryk’s newsletter on tech and culture</li></ul><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/57a0e1c7/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Gotcha! Enshittification w/ Cory Doctorow (replay)</title>
      <itunes:episode>97</itunes:episode>
      <podcast:episode>97</podcast:episode>
      <itunes:title>Gotcha! Enshittification w/ Cory Doctorow (replay)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a54a931a-3d9d-418a-8c7c-3687a0e02783</guid>
      <link>https://share.transistor.fm/s/fdc52e68</link>
      <description>
        <![CDATA[<p>Welcome to the final boss of scams in the age of technology: Enshittification.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/nodestar-the-eternal-september-w-mike-masnick"><strong>Nodestar: The Eternal September w/ Mike Masnick</strong></a><strong><br></strong><br></p><p>We’re replaying five deep conversations over the Christmas period for you to listen to on your travels and downtime — please enjoy!</p><p>Is platformisation essentially just an industrial-level scam? We will deep-dive into the enshittification playbook to understand how companies lock users into decaying platforms, and get away with it. Cory shares ideas on what we can do differently to turn the tide. Listen to learn what a ‘chickenised reverse centaur’ is…</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Buy <a href="https://www.versobooks.com/en-gb/products/3341-enshittification?srsltid=AfmBOopKQUbNe8WPrNP7cI-JyC82xTf4w17uuP0c_ope0-hQ8vca9uHH">Enshittification</a> now from Verso Books!</li><li><a href="https://www.goodreads.com/book/show/211004856-picks-and-shovels">Picks and Shovels</a> by Cory Doctorow</li><li><a href="https://www.wnycstudios.org/podcasts/otm/projects/enshitification">On The Media</a> series on Enshittification</li><li><a href="https://pluralistic.net/">Pluralistic</a> — Daily Links and essays by Cory Doctorow</li><li><a href="https://pluralistic.net/2025/07/22/all-day-suckers/#i-love-the-poorly-educated">Conservatism Considered as a Movement of Bitter Rubes</a> — Cory on why conservatism creates a friendly environment for scams</li><li><a href="https://pluralistic.net/2024/02/05/cyber-dunning-kruger/#swiss-cheese-security">How I Got Scammed</a> — Cory on his personal experiences of being scammed</li><li><a href="https://craphound.com/">All of Cory’s books</a></li><li><a href="https://pluralistic.net/2025/09/02/act-locally/">All (Antitrust) Politics Are Local</a> — the entry to Pluralistic that Cory wrote on the day of recording</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!<br></strong><br>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Welcome to the final boss of scams in the age of technology: Enshittification.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/nodestar-the-eternal-september-w-mike-masnick"><strong>Nodestar: The Eternal September w/ Mike Masnick</strong></a><strong><br></strong><br></p><p>We’re replaying five deep conversations over the Christmas period for you to listen to on your travels and downtime — please enjoy!</p><p>Is platformisation essentially just an industrial-level scam? We will deep-dive into the enshittification playbook to understand how companies lock users into decaying platforms, and get away with it. Cory shares ideas on what we can do differently to turn the tide. Listen to learn what a ‘chickenised reverse centaur’ is…</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Buy <a href="https://www.versobooks.com/en-gb/products/3341-enshittification?srsltid=AfmBOopKQUbNe8WPrNP7cI-JyC82xTf4w17uuP0c_ope0-hQ8vca9uHH">Enshittification</a> now from Verso Books!</li><li><a href="https://www.goodreads.com/book/show/211004856-picks-and-shovels">Picks and Shovels</a> by Cory Doctorow</li><li><a href="https://www.wnycstudios.org/podcasts/otm/projects/enshitification">On The Media</a> series on Enshittification</li><li><a href="https://pluralistic.net/">Pluralistic</a> — Daily Links and essays by Cory Doctorow</li><li><a href="https://pluralistic.net/2025/07/22/all-day-suckers/#i-love-the-poorly-educated">Conservatism Considered as a Movement of Bitter Rubes</a> — Cory on why conservatism creates a friendly environment for scams</li><li><a href="https://pluralistic.net/2024/02/05/cyber-dunning-kruger/#swiss-cheese-security">How I Got Scammed</a> — Cory on his personal experiences of being scammed</li><li><a href="https://craphound.com/">All of Cory’s books</a></li><li><a href="https://pluralistic.net/2025/09/02/act-locally/">All (Antitrust) Politics Are Local</a> — the entry to Pluralistic that Cory wrote on the day of recording</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!<br></strong><br>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </content:encoded>
      <pubDate>Tue, 06 Jan 2026 00:01:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/fdc52e68/84374615.mp3" length="81415238" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/uzkO1DhYwCyUsC7-E4gFh8hj_Ljv4hUjoZ-hlyjfJj8/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjAz/YjJlNDg3ZmExNzc1/YTFkMGViNjNlYjgw/ZDVmOS5wbmc.jpg"/>
      <itunes:duration>3390</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Welcome to the final boss of scams in the age of technology: Enshittification.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/nodestar-the-eternal-september-w-mike-masnick"><strong>Nodestar: The Eternal September w/ Mike Masnick</strong></a><strong><br></strong><br></p><p>We’re replaying five deep conversations over the Christmas period for you to listen to on your travels and downtime — please enjoy!</p><p>Is platformisation essentially just an industrial-level scam? We will deep-dive into the enshittification playbook to understand how companies lock users into decaying platforms, and get away with it. Cory shares ideas on what we can do differently to turn the tide. Listen to learn what a ‘chickenised reverse centaur’ is…</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Buy <a href="https://www.versobooks.com/en-gb/products/3341-enshittification?srsltid=AfmBOopKQUbNe8WPrNP7cI-JyC82xTf4w17uuP0c_ope0-hQ8vca9uHH">Enshittification</a> now from Verso Books!</li><li><a href="https://www.goodreads.com/book/show/211004856-picks-and-shovels">Picks and Shovels</a> by Cory Doctorow</li><li><a href="https://www.wnycstudios.org/podcasts/otm/projects/enshitification">On The Media</a> series on Enshittification</li><li><a href="https://pluralistic.net/">Pluralistic</a> — Daily Links and essays by Cory Doctorow</li><li><a href="https://pluralistic.net/2025/07/22/all-day-suckers/#i-love-the-poorly-educated">Conservatism Considered as a Movement of Bitter Rubes</a> — Cory on why conservatism creates a friendly environment for scams</li><li><a href="https://pluralistic.net/2024/02/05/cyber-dunning-kruger/#swiss-cheese-security">How I Got Scammed</a> — Cory on his personal experiences of being scammed</li><li><a href="https://craphound.com/">All of Cory’s books</a></li><li><a href="https://pluralistic.net/2025/09/02/act-locally/">All (Antitrust) Politics Are Local</a> — the entry to Pluralistic that Cory wrote on the day of recording</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!<br></strong><br>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/fdc52e68/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Worker Power &amp; Big Tech Bossmen w/ David Seligman (replay)</title>
      <itunes:episode>96</itunes:episode>
      <podcast:episode>96</podcast:episode>
      <itunes:title>Worker Power &amp; Big Tech Bossmen w/ David Seligman (replay)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b0bfcb28-ca78-41fc-99e9-608e80f054a6</guid>
      <link>https://share.transistor.fm/s/7056b19a</link>
      <description>
        <![CDATA[<p>Litigator David Seligman describes how big tech companies act brazenly as legal bullies to extract wealth and power from the working class in the US.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/the-human-in-the-loop"><strong>The Human in the Loop: The AI Supply Chain</strong></a><strong><br></strong><br></p><p>We’re replaying five deep conversations over the Christmas period for you to listen to on your travels and downtime — please enjoy!</p><p>Alix and David talk about legal devices such as forced arbitration and monopolistic practices like algorithmic price fixing and wage suppression — and the cases that David’s team are bringing to fight these practices.</p><p><strong>Further reading &amp; resources</strong></p><ul><li><a href="https://www.seligmanforag.com/campaign">Seligman for Attorney General Colorado</a></li><li><a href="https://towardsjustice.org/2022/06/21/press-release-drivers-sue-to-block-uber-lyfts-illegal-price-fixing/">Towards Justice California drivers lawsuit</a></li><li><a href="https://en.wikipedia.org/wiki/Eichmann_in_Jerusalem">Eichmann in Jerusalem: A Report on the Banality of Evil</a> by Hannah Arendt</li><li><a href="https://global.oup.com/academic/product/the-dual-state-9780198716204?cc=gb&amp;lang=en&amp;">The Dual State</a> by Ernst Fraenkel</li><li><a href="https://towardsjustice.org/wp-content/uploads/2025/02/Real-Surveillance-Prices-and-Wages-Report.pdf">Prohibiting Surveillance Prices and Wages</a> by Towards Justice</li><li><a href="https://www.classaction.org/media/gill-et-al-v-uber-technologies-inc-et-al.pdf">Gill v. Uber</a> — class action led by Towards Justice</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Litigator David Seligman describes how big tech companies act brazenly as legal bullies to extract wealth and power from the working class in the US.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/the-human-in-the-loop"><strong>The Human in the Loop: The AI Supply Chain</strong></a><strong><br></strong><br></p><p>We’re replaying five deep conversations over the Christmas period for you to listen to on your travels and downtime — please enjoy!</p><p>Alix and David talk about legal devices such as forced arbitration and monopolistic practices like algorithmic price fixing and wage suppression — and the cases that David’s team are bringing to fight these practices.</p><p><strong>Further reading &amp; resources</strong></p><ul><li><a href="https://www.seligmanforag.com/campaign">Seligman for Attorney General Colorado</a></li><li><a href="https://towardsjustice.org/2022/06/21/press-release-drivers-sue-to-block-uber-lyfts-illegal-price-fixing/">Towards Justice California drivers lawsuit</a></li><li><a href="https://en.wikipedia.org/wiki/Eichmann_in_Jerusalem">Eichmann in Jerusalem: A Report on the Banality of Evil</a> by Hannah Arendt</li><li><a href="https://global.oup.com/academic/product/the-dual-state-9780198716204?cc=gb&amp;lang=en&amp;">The Dual State</a> by Ernst Fraenkel</li><li><a href="https://towardsjustice.org/wp-content/uploads/2025/02/Real-Surveillance-Prices-and-Wages-Report.pdf">Prohibiting Surveillance Prices and Wages</a> by Towards Justice</li><li><a href="https://www.classaction.org/media/gill-et-al-v-uber-technologies-inc-et-al.pdf">Gill v. Uber</a> — class action led by Towards Justice</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </content:encoded>
      <pubDate>Fri, 02 Jan 2026 00:01:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/7056b19a/7e0f3f34.mp3" length="67342739" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/apZZSdiQXh5jdYVm8yLrYYaLqcUcfvmg2LcReLJPH34/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lYWI0/OGM2YWEzNjliNmU3/NDllYWQ2YjdmZWJk/YWIxYy5wbmc.jpg"/>
      <itunes:duration>2804</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Litigator David Seligman describes how big tech companies act brazenly as legal bullies to extract wealth and power from the working class in the US.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/the-human-in-the-loop"><strong>The Human in the Loop: The AI Supply Chain</strong></a><strong><br></strong><br></p><p>We’re replaying five deep conversations over the Christmas period for you to listen to on your travels and downtime — please enjoy!</p><p>Alix and David talk about legal devices such as forced arbitration and monopolistic practices like algorithmic price fixing and wage suppression — and the cases that David’s team are bringing to fight these practices.</p><p><strong>Further reading &amp; resources</strong></p><ul><li><a href="https://www.seligmanforag.com/campaign">Seligman for Attorney General Colorado</a></li><li><a href="https://towardsjustice.org/2022/06/21/press-release-drivers-sue-to-block-uber-lyfts-illegal-price-fixing/">Towards Justice California drivers lawsuit</a></li><li><a href="https://en.wikipedia.org/wiki/Eichmann_in_Jerusalem">Eichmann in Jerusalem: A Report on the Banality of Evil</a> by Hannah Arendt</li><li><a href="https://global.oup.com/academic/product/the-dual-state-9780198716204?cc=gb&amp;lang=en&amp;">The Dual State</a> by Ernst Fraenkel</li><li><a href="https://towardsjustice.org/wp-content/uploads/2025/02/Real-Surveillance-Prices-and-Wages-Report.pdf">Prohibiting Surveillance Prices and Wages</a> by Towards Justice</li><li><a href="https://www.classaction.org/media/gill-et-al-v-uber-technologies-inc-et-al.pdf">Gill v. Uber</a> — class action led by Towards Justice</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/7056b19a/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Reporting on AI’s climate injustices w/ Karen Hao (replay)</title>
      <itunes:episode>95</itunes:episode>
      <podcast:episode>95</podcast:episode>
      <itunes:title>Reporting on AI’s climate injustices w/ Karen Hao (replay)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ba673b77-9735-4947-8c3c-21f789426c4d</guid>
      <link>https://share.transistor.fm/s/b4a545d1</link>
      <description>
        <![CDATA[<p>Reporting on the tech industry proves a huge challenge due to how opaque it all is — Empire of AI author Karen Hao talks us through her investigative methods in a conversation from November 2024.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/net-0-ai-thirst-in-a-water-scarce-world-w-julie-mccarthy"><strong>Net 0++: AI Thirst in a Water-Scarce World w/ Julie McCarthy</strong></a><strong><br></strong><br></p><p>We’re replaying five deep conversations over the Christmas period for you to listen to on your travels and downtime — please enjoy!</p><p>AI companies are flagrantly obstructive when it comes to sharing information about their infrastructure — this makes reporting on the climate injustices of AI really hard. Karen shares the tactics that these companies use, and the challenges that she has faced in her investigative reporting.</p><p>Further reading:</p><ul><li>Buy <a href="https://www.penguin.co.uk/books/460331/empire-of-ai-by-hao-karen/9780241678923">Empire of AI</a> by Karen Hao</li><li><a href="https://www.theatlantic.com/technology/archive/2024/09/microsoft-ai-oil-contracts/679804/">Microsoft’s Hypocrisy on AI</a> by Karen Hao</li><li><a href="https://www.theatlantic.com/technology/archive/2024/03/ai-water-climate-microsoft/677602/">AI is Taking Water from the Desert</a> by Karen Hao</li></ul><p><br>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Reporting on the tech industry proves a huge challenge due to how opaque it all is — Empire of AI author Karen Hao talks us through her investigative methods in a conversation from November 2024.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/net-0-ai-thirst-in-a-water-scarce-world-w-julie-mccarthy"><strong>Net 0++: AI Thirst in a Water-Scarce World w/ Julie McCarthy</strong></a><strong><br></strong><br></p><p>We’re replaying five deep conversations over the Christmas period for you to listen to on your travels and downtime — please enjoy!</p><p>AI companies are flagrantly obstructive when it comes to sharing information about their infrastructure — this makes reporting on the climate injustices of AI really hard. Karen shares the tactics that these companies use, and the challenges that she has faced in her investigative reporting.</p><p>Further reading:</p><ul><li>Buy <a href="https://www.penguin.co.uk/books/460331/empire-of-ai-by-hao-karen/9780241678923">Empire of AI</a> by Karen Hao</li><li><a href="https://www.theatlantic.com/technology/archive/2024/09/microsoft-ai-oil-contracts/679804/">Microsoft’s Hypocrisy on AI</a> by Karen Hao</li><li><a href="https://www.theatlantic.com/technology/archive/2024/03/ai-water-climate-microsoft/677602/">AI is Taking Water from the Desert</a> by Karen Hao</li></ul><p><br>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </content:encoded>
      <pubDate>Tue, 30 Dec 2025 00:01:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/b4a545d1/ab233d32.mp3" length="39624206" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/_TBwNU7TwFFz6kOzYgPP7ew3OpFct4jxnRmGsEXYKUg/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84Njk2/NmFlZWE5YWMxMjY1/YmU1ODU1NzcxODcx/MTQ0Ny5wbmc.jpg"/>
      <itunes:duration>1649</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Reporting on the tech industry is a huge challenge because of how opaque it all is — Empire of AI author Karen Hao talks us through her investigative methods in a conversation from November 2024.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/net-0-ai-thirst-in-a-water-scarce-world-w-julie-mccarthy"><strong>Net 0++ AI Thirst in a Water-Scarce World w/ Julie McCarthy</strong></a><strong><br></strong><br></p><p>We’re replaying five deep conversations over the Christmas period for you to listen to on your travels and downtime — please enjoy!</p><p>AI companies are flagrantly obstructive when it comes to sharing information about their infrastructure — this makes reporting on the climate injustices of AI really hard. Karen shares the tactics that these companies use, and the challenges that she has faced in her investigative reporting.</p><p>Further reading:</p><ul><li>Buy <a href="https://www.penguin.co.uk/books/460331/empire-of-ai-by-hao-karen/9780241678923">Empire of AI</a> by Karen Hao</li><li><a href="https://www.theatlantic.com/technology/archive/2024/09/microsoft-ai-oil-contracts/679804/">Microsoft’s Hypocrisy on AI</a> by Karen Hao</li><li><a href="https://www.theatlantic.com/technology/archive/2024/03/ai-water-climate-microsoft/677602/">AI is Taking Water from the Desert</a> by Karen Hao</li></ul><p><br>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/b4a545d1/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>How to (Actually) Keep Kids Safe Online w/ Kate Sim (replay)</title>
      <itunes:episode>94</itunes:episode>
      <podcast:episode>94</podcast:episode>
      <itunes:title>How to (Actually) Keep Kids Safe Online w/ Kate Sim (replay)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">15e728db-6e01-4d40-8ffd-2d5c48fd3c60</guid>
      <link>https://share.transistor.fm/s/f67c02fe</link>
      <description>
        <![CDATA[<p>A replay of our conversation with Kate Sim, on the state of child safety online.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/dogwhistles-networked-transphobia-online"><strong>Dogwhistles: Networked Transphobia Online</strong></a><strong><br></strong><br></p><p>We’re replaying five deep conversations over the Christmas period for you to listen to on your travels and downtime — please enjoy!</p><p>Child safety is a fuzzy catch-all concept for our broader social anxieties that seems to be everywhere in our conversations about the internet. But child safety isn’t a new concept, and the way our politics focuses on the spectacle isn’t new either.</p><p>To help us unpack this is Kate Sim, who has over a decade of experience in sexual violence prevention and response and is currently the Director of the Children’s Online Safety and Privacy Research (COSPR) program at the University of Western Australia’s Tech &amp; Policy Lab. We discuss the growth of ‘child safety’ regulation around the world, and how it often conflates multiple topics: age-gating adult content, explicit attempts to harm children, national security, and even ‘family values’.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>On COSPR’s forthcoming paper on the CSAM detection ecosystem; here is a fact sheet with an ecosystem map based on it: <a href="https://bit.ly/cospr-collateral">https://bit.ly/cospr-collateral</a></li><li>On the CSAM bottleneck problem: <a href="https://doi.org/10.25740/pr592kc5483">https://doi.org/10.25740/pr592kc5483</a></li><li>IBCK episode on the Anxious Generation: <a href="https://pod.link/1651876897/episode/47a8aa95c83be96b044dcb3f4e43d158">https://pod.link/1651876897/episode/47a8aa95c83be96b044dcb3f4e43d158</a></li><li>Child psychology expert Candace Odgers debunking Jonathan Haidt’s claims in real time here: <a href="https://tyde.virginia.edu/event/haidt-odgers/">https://tyde.virginia.edu/event/haidt-odgers/</a></li><li>A primer on client-side scanning and CSAM from Mitali Thakor: <a href="https://mit-serc.pubpub.org/pub/701yvdbh/release/2">https://mit-serc.pubpub.org/pub/701yvdbh/release/2</a></li><li>On effective CSA prevention and scalability: <a href="https://www.prevention.global/resources/read-full-scalability-report">https://www.prevention.global/resources/read-full-scalability-report</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!<br></strong><br>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>A replay of our conversation with Kate Sim, on the state of child safety online.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/dogwhistles-networked-transphobia-online"><strong>Dogwhistles: Networked Transphobia Online</strong></a><strong><br></strong><br></p><p>We’re replaying five deep conversations over the Christmas period for you to listen to on your travels and downtime — please enjoy!</p><p>Child safety is a fuzzy catch-all concept for our broader social anxieties that seems to be everywhere in our conversations about the internet. But child safety isn’t a new concept, and the way our politics focuses on the spectacle isn’t new either.</p><p>To help us unpack this is Kate Sim, who has over a decade of experience in sexual violence prevention and response and is currently the Director of the Children’s Online Safety and Privacy Research (COSPR) program at the University of Western Australia’s Tech &amp; Policy Lab. We discuss the growth of ‘child safety’ regulation around the world, and how it often conflates multiple topics: age-gating adult content, explicit attempts to harm children, national security, and even ‘family values’.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>On COSPR’s forthcoming paper on the CSAM detection ecosystem; here is a fact sheet with an ecosystem map based on it: <a href="https://bit.ly/cospr-collateral">https://bit.ly/cospr-collateral</a></li><li>On the CSAM bottleneck problem: <a href="https://doi.org/10.25740/pr592kc5483">https://doi.org/10.25740/pr592kc5483</a></li><li>IBCK episode on the Anxious Generation: <a href="https://pod.link/1651876897/episode/47a8aa95c83be96b044dcb3f4e43d158">https://pod.link/1651876897/episode/47a8aa95c83be96b044dcb3f4e43d158</a></li><li>Child psychology expert Candace Odgers debunking Jonathan Haidt’s claims in real time here: <a href="https://tyde.virginia.edu/event/haidt-odgers/">https://tyde.virginia.edu/event/haidt-odgers/</a></li><li>A primer on client-side scanning and CSAM from Mitali Thakor: <a href="https://mit-serc.pubpub.org/pub/701yvdbh/release/2">https://mit-serc.pubpub.org/pub/701yvdbh/release/2</a></li><li>On effective CSA prevention and scalability: <a href="https://www.prevention.global/resources/read-full-scalability-report">https://www.prevention.global/resources/read-full-scalability-report</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!<br></strong><br>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </content:encoded>
      <pubDate>Fri, 26 Dec 2025 00:01:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/f67c02fe/fc7a55ba.mp3" length="71395477" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/b67iQ_hLsgC11yhy1aDlVg1VyKSpz3cWn9TeRc5r5-o/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84MmJm/YzA3MGZiYmFhNTM0/YjU3NTZmNjkxODAx/MzMyMi5wbmc.jpg"/>
      <itunes:duration>2973</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>A replay of our conversation with Kate Sim, on the state of child safety online.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/dogwhistles-networked-transphobia-online"><strong>Dogwhistles: Networked Transphobia Online</strong></a><strong><br></strong><br></p><p>We’re replaying five deep conversations over the Christmas period for you to listen to on your travels and downtime — please enjoy!</p><p>Child safety is a fuzzy catch-all concept for our broader social anxieties that seems to be everywhere in our conversations about the internet. But child safety isn’t a new concept, and the way our politics focuses on the spectacle isn’t new either.</p><p>To help us unpack this is Kate Sim, who has over a decade of experience in sexual violence prevention and response and is currently the Director of the Children’s Online Safety and Privacy Research (COSPR) program at the University of Western Australia’s Tech &amp; Policy Lab. We discuss the growth of ‘child safety’ regulation around the world, and how it often conflates multiple topics: age-gating adult content, explicit attempts to harm children, national security, and even ‘family values’.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>On COSPR’s forthcoming paper on the CSAM detection ecosystem; here is a fact sheet with an ecosystem map based on it: <a href="https://bit.ly/cospr-collateral">https://bit.ly/cospr-collateral</a></li><li>On the CSAM bottleneck problem: <a href="https://doi.org/10.25740/pr592kc5483">https://doi.org/10.25740/pr592kc5483</a></li><li>IBCK episode on the Anxious Generation: <a href="https://pod.link/1651876897/episode/47a8aa95c83be96b044dcb3f4e43d158">https://pod.link/1651876897/episode/47a8aa95c83be96b044dcb3f4e43d158</a></li><li>Child psychology expert Candace Odgers debunking Jonathan Haidt’s claims in real time here: <a href="https://tyde.virginia.edu/event/haidt-odgers/">https://tyde.virginia.edu/event/haidt-odgers/</a></li><li>A primer on client-side scanning and CSAM from Mitali Thakor: <a href="https://mit-serc.pubpub.org/pub/701yvdbh/release/2">https://mit-serc.pubpub.org/pub/701yvdbh/release/2</a></li><li>On effective CSA prevention and scalability: <a href="https://www.prevention.global/resources/read-full-scalability-report">https://www.prevention.global/resources/read-full-scalability-report</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!<br></strong><br>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/f67c02fe/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Digitisation, Privatisation, and Human Centipedes: Our Learnings from 2025</title>
      <itunes:episode>93</itunes:episode>
      <podcast:episode>93</podcast:episode>
      <itunes:title>Digitisation, Privatisation, and Human Centipedes: Our Learnings from 2025</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c886072b-9ce0-41a8-bc56-6f8e7c458c0c</guid>
      <link>https://share.transistor.fm/s/5e368acb</link>
      <description>
        <![CDATA[<p>Before we break for the year we wanted to reflect on what the podcast brought us in 2025, and what we want to see for 2026.</p><p>This week Alix is joined by two members of The Maybe team: Prathm Juneja and Georgia Iacovou. We discuss our favourite episodes from the year (while making it clear we love all episodes equally), but this is not your standard clip show. We ask ourselves what we learned, why it was important to us, and what we are hungry for in 2026.</p><p><strong>Featured episodes:</strong></p><ul><li><a href="https://themaybe.org/podcast/is-digitisation-killing-democracy-w-marietje-schaake">Is Digitisation Killing Democracy? w/ Marietje Schaake</a></li><li><a href="https://themaybe.org/podcast/nodestar-building-blacksky-w-rudy-fraser">Nodestar: Building Blacksky w/ Rudy Fraser</a></li><li><a href="https://www.themaybe.org/podcast/regulating-privacy-in-an-ai-era-w-carly-kind">Regulating Privacy in an AI Era w/ Carly Kind</a></li><li><a href="https://www.themaybe.org/podcast/to-be-seen-and-not-watched-w-tawana-petty">To be Seen and Not Watched w/ Tawana Petty</a></li><li><a href="https://saysmaybe.com/podcast/the-taiwan-bottleneck-w-brian-chen">The Taiwan Bottleneck w/ Brian Chen</a></li><li><a href="https://themaybe.org/podcast/gotcha-how-mlms-ate-the-economy-w-bridget-read">Gotcha! How MLMs Ate the Economy w/ Bridget Read</a></li><li>Also mentioned in this episode: <a href="https://youtu.be/l0Vrko6bna0?si=1qbqZs9vaCGLiBkJ">AI in Gaza Live from Mexico City</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Before we break for the year we wanted to reflect on what the podcast brought us in 2025, and what we want to see for 2026.</p><p>This week Alix is joined by two members of The Maybe team: Prathm Juneja and Georgia Iacovou. We discuss our favourite episodes from the year (while making it clear we love all episodes equally), but this is not your standard clip show. We ask ourselves what we learned, why it was important to us, and what we are hungry for in 2026.</p><p><strong>Featured episodes:</strong></p><ul><li><a href="https://themaybe.org/podcast/is-digitisation-killing-democracy-w-marietje-schaake">Is Digitisation Killing Democracy? w/ Marietje Schaake</a></li><li><a href="https://themaybe.org/podcast/nodestar-building-blacksky-w-rudy-fraser">Nodestar: Building Blacksky w/ Rudy Fraser</a></li><li><a href="https://www.themaybe.org/podcast/regulating-privacy-in-an-ai-era-w-carly-kind">Regulating Privacy in an AI Era w/ Carly Kind</a></li><li><a href="https://www.themaybe.org/podcast/to-be-seen-and-not-watched-w-tawana-petty">To be Seen and Not Watched w/ Tawana Petty</a></li><li><a href="https://saysmaybe.com/podcast/the-taiwan-bottleneck-w-brian-chen">The Taiwan Bottleneck w/ Brian Chen</a></li><li><a href="https://themaybe.org/podcast/gotcha-how-mlms-ate-the-economy-w-bridget-read">Gotcha! How MLMs Ate the Economy w/ Bridget Read</a></li><li>Also mentioned in this episode: <a href="https://youtu.be/l0Vrko6bna0?si=1qbqZs9vaCGLiBkJ">AI in Gaza Live from Mexico City</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </content:encoded>
      <pubDate>Fri, 19 Dec 2025 00:01:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/5e368acb/31ab37d5.mp3" length="76227318" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/SXPF_8DtyTjkgAtGrVVbNYtxA_TaJfLQhatBVMetjXM/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9hNjNm/YWFkNTQwZDVjZDRi/ODlhMWZhZDRmNGZi/MDA4My5wbmc.jpg"/>
      <itunes:duration>3174</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Before we break for the year we wanted to reflect on what the podcast brought us in 2025, and what we want to see for 2026.</p><p>This week Alix is joined by two members of The Maybe team: Prathm Juneja and Georgia Iacovou. We discuss our favourite episodes from the year (while making it clear we love all episodes equally), but this is not your standard clip show. We ask ourselves what we learned, why it was important to us, and what we are hungry for in 2026.</p><p><strong>Featured episodes:</strong></p><ul><li><a href="https://themaybe.org/podcast/is-digitisation-killing-democracy-w-marietje-schaake">Is Digitisation Killing Democracy? w/ Marietje Schaake</a></li><li><a href="https://themaybe.org/podcast/nodestar-building-blacksky-w-rudy-fraser">Nodestar: Building Blacksky w/ Rudy Fraser</a></li><li><a href="https://www.themaybe.org/podcast/regulating-privacy-in-an-ai-era-w-carly-kind">Regulating Privacy in an AI Era w/ Carly Kind</a></li><li><a href="https://www.themaybe.org/podcast/to-be-seen-and-not-watched-w-tawana-petty">To be Seen and Not Watched w/ Tawana Petty</a></li><li><a href="https://saysmaybe.com/podcast/the-taiwan-bottleneck-w-brian-chen">The Taiwan Bottleneck w/ Brian Chen</a></li><li><a href="https://themaybe.org/podcast/gotcha-how-mlms-ate-the-economy-w-bridget-read">Gotcha! How MLMs Ate the Economy w/ Bridget Read</a></li><li>Also mentioned in this episode: <a href="https://youtu.be/l0Vrko6bna0?si=1qbqZs9vaCGLiBkJ">AI in Gaza Live from Mexico City</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/5e368acb/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Ben Collins: Computer Says MozFest </title>
      <itunes:episode>92</itunes:episode>
      <podcast:episode>92</podcast:episode>
      <itunes:title>Ben Collins: Computer Says MozFest </itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">51c2d292-2e57-47bf-acdb-8815cd6c9785</guid>
      <link>https://share.transistor.fm/s/c8256b73</link>
      <description>
        <![CDATA[<p>The Onion CEO Ben Collins has successfully turned political satire into a sustainable business. He explains why humorous messaging is important for understanding times like these — and why he’s dead serious about buying Infowars.</p><p><strong>Head to our feed for more conversations from MozFest with Abeba Birhane, Audrey Tang, and Luisa Franco Machado.<br></strong><br></p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Read <a href="https://theonion.com/">The Onion</a>, America’s finest news source, if you don’t already…</li><li><a href="https://theonion.com/heres-why-i-decided-to-buy-infowars/">The Onion to buy Infowars</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>The Onion CEO Ben Collins has successfully turned political satire into a sustainable business. He explains why humorous messaging is important for understanding times like these — and why he’s dead serious about buying Infowars.</p><p><strong>Head to our feed for more conversations from MozFest with Abeba Birhane, Audrey Tang, and Luisa Franco Machado.<br></strong><br></p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Read <a href="https://theonion.com/">The Onion</a>, America’s finest news source, if you don’t already…</li><li><a href="https://theonion.com/heres-why-i-decided-to-buy-infowars/">The Onion to buy Infowars</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </content:encoded>
      <pubDate>Thu, 18 Dec 2025 00:01:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/c8256b73/f3d1225c.mp3" length="15586376" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/2M5z1Q2Ks7T281-fhrMpOa_fvlYYI8azzCTkQCghOno/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNzI3/ZGZiM2UxYjlkZWVh/ZDNiYWUwMTBmMzI2/ZDk5OC5wbmc.jpg"/>
      <itunes:duration>647</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>The Onion CEO Ben Collins has successfully turned political satire into a sustainable business. He explains why humorous messaging is important for understanding times like these — and why he’s dead serious about buying Infowars.</p><p><strong>Head to our feed for more conversations from MozFest with Abeba Birhane, Audrey Tang, and Luisa Franco Machado.<br></strong><br></p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Read <a href="https://theonion.com/">The Onion</a>, America’s finest news source, if you don’t already…</li><li><a href="https://theonion.com/heres-why-i-decided-to-buy-infowars/">The Onion to buy Infowars</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/c8256b73/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Audrey Tang: Computer Says MozFest</title>
      <itunes:episode>91</itunes:episode>
      <podcast:episode>91</podcast:episode>
      <itunes:title>Audrey Tang: Computer Says MozFest</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">9334f748-6536-459e-ac6f-6450e132acfc</guid>
      <link>https://share.transistor.fm/s/b4827447</link>
      <description>
        <![CDATA[<p>Audrey Tang has some big ideas on how we can use collective needs to shape AI systems — and avoid a future where human life is seen as an obstacle to paper clip production. She also shares what might be the first actual good use-case for AI agents…</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://6pack.care/">6-Pack of Care</a> — a research project by Audrey Tang and Caroline Green as part of the Institute for Ethics in AI</li><li>More about <a href="https://www.kanpai-japan.com/religion-and-spirituality-in-japan/the-main-shinto-gods">Kami</a> — the Japanese local spirits Audrey mentions throughout the conversation</li><li>The Oxford <a href="https://www.oxford-aiethics.ox.ac.uk/">Institute for Ethics in AI</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Audrey Tang has some big ideas on how we can use collective needs to shape AI systems — and avoid a future where human life is seen as an obstacle to paper clip production. She also shares what might be the first actual good use-case for AI agents…</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://6pack.care/">6-Pack of Care</a> — a research project by Audrey Tang and Caroline Green as part of the Institute for Ethics in AI</li><li>More about <a href="https://www.kanpai-japan.com/religion-and-spirituality-in-japan/the-main-shinto-gods">Kami</a> — the Japanese local spirits Audrey mentions throughout the conversation</li><li>The Oxford <a href="https://www.oxford-aiethics.ox.ac.uk/">Institute for Ethics in AI</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </content:encoded>
      <pubDate>Wed, 17 Dec 2025 00:01:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/b4827447/6d7475a8.mp3" length="33520386" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/KYac0b-oqHowzZDA3sqdUyKydVMjbELsXjhm2l9KFHY/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9hMTYx/NzA1NTBmZTYyMTVm/MmY5MmFjMzU3NTM3/MjQ0Yy5wbmc.jpg"/>
      <itunes:duration>1394</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Audrey Tang has some big ideas on how we can use collective needs to shape AI systems — and avoid a future where human life is seen as an obstacle to paper clip production. She also shares what might be the first actual good use-case for AI agents…</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://6pack.care/">6-Pack of Care</a> — a research project by Audrey Tang and Caroline Green as part of the Institute for Ethics in AI</li><li>More about <a href="https://www.kanpai-japan.com/religion-and-spirituality-in-japan/the-main-shinto-gods">Kami</a> — the Japanese local spirits Audrey mentions throughout the conversation</li><li>The Oxford <a href="https://www.oxford-aiethics.ox.ac.uk/">Institute for Ethics in AI</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/b4827447/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Luisa Franco Machado: Computer Says MozFest</title>
      <itunes:episode>90</itunes:episode>
      <podcast:episode>90</podcast:episode>
      <itunes:title>Luisa Franco Machado: Computer Says MozFest</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d293d9f3-3a73-4554-89b7-e5bcc477a890</guid>
      <link>https://share.transistor.fm/s/ee201b5d</link>
      <description>
        <![CDATA[<p>You can’t build a digital rights movement if you don’t know what you’re fighting for. Luisa says that we’re in a crisis of imagination, and that participation — the non-performative kind — is one big way out of this.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Learn more about <a href="https://equilabs.org/">Equilabs</a></li><li><a href="https://www.instagram.com/luhfm/">Follow Luisa on Instagram</a> — sorry, email is too ‘analog’</li><li>Check out her <a href="https://linktr.ee/luisa_fmachado?utm_source=ig&amp;utm_medium=social&amp;utm_content=link_in_bio">Linktree</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>You can’t build a digital rights movement if you don’t know what you’re fighting for. Luisa says that we’re in a crisis of imagination, and that participation — the non-performative kind — is one big way out of this.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Learn more about <a href="https://equilabs.org/">Equilabs</a></li><li><a href="https://www.instagram.com/luhfm/">Follow Luisa on Instagram</a> — sorry, email is too ‘analog’</li><li>Check out her <a href="https://linktr.ee/luisa_fmachado?utm_source=ig&amp;utm_medium=social&amp;utm_content=link_in_bio">Linktree</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </content:encoded>
      <pubDate>Tue, 16 Dec 2025 00:01:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/ee201b5d/e6a5a85d.mp3" length="30896659" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/4O6XlsNj7TkxHDGq3jhFgUhJj7IEWh3BNex1-MvQ07w/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9mMDAy/ZGRmZTQ4ZjZjNWU5/YmExOWMyOTZhNWFk/OGI5YS5wbmc.jpg"/>
      <itunes:duration>1285</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>You can’t build a digital rights movement if you don’t know what you’re fighting for. Luisa says that we’re in a crisis of imagination, and that participation — the non-performative kind — is one big way out of this.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Learn more about <a href="https://equilabs.org/">Equilabs</a></li><li><a href="https://www.instagram.com/luhfm/">Follow Luisa on Instagram</a> — sorry, email is too ‘analog’</li><li>Check out her <a href="https://linktr.ee/luisa_fmachado?utm_source=ig&amp;utm_medium=social&amp;utm_content=link_in_bio">Linktree</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/ee201b5d/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Abeba Birhane: Computer Says MozFest</title>
      <itunes:episode>89</itunes:episode>
      <podcast:episode>89</podcast:episode>
      <itunes:title>Abeba Birhane: Computer Says MozFest</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4b4b6da1-d3f3-4b73-970a-9261f56abcd6</guid>
      <link>https://share.transistor.fm/s/100ba3e4</link>
      <description>
        <![CDATA[<p>Earlier this year Abeba Birhane was asked to give a keynote at the UN’s AI for Good Summit — and at the eleventh hour the UN attempted to censor any mention of the genocide in Palestine and of the summit’s Big Tech sponsors. She was invited to give her full, uncensored talk at MozFest.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://aial.ie/blog/2025-ai-for-good-summit/">Abeba’s blog post on the UN censoring her talk on AI</a></li><li><a href="https://abebabirhane.com/">More about Abeba</a></li><li>More about <a href="https://aial.ie/">The AI Accountability Lab</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Earlier this year Abeba Birhane was asked to give a keynote at the UN’s AI for Good Summit — and at the eleventh hour the UN attempted to censor any mention of the genocide in Palestine and of the summit’s Big Tech sponsors. She was invited to give her full, uncensored talk at MozFest.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://aial.ie/blog/2025-ai-for-good-summit/">Abeba’s blog post on the UN censoring her talk on AI</a></li><li><a href="https://abebabirhane.com/">More about Abeba</a></li><li>More about <a href="https://aial.ie/">The AI Accountability Lab</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 15 Dec 2025 00:01:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/100ba3e4/c68167aa.mp3" length="25558963" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/7pd-OHB5kK2vztETvO6Nsh2hjB49l7_RJd-OcC9U_GI/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84ZjIw/YWU4Y2ViYzgxNzk2/MmU5MzQyOTI5N2M1/YzBhZi5wbmc.jpg"/>
      <itunes:duration>1062</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Earlier this year Abeba Birhane was asked to give a keynote at the UN’s AI for Good Summit — and at the eleventh hour the UN attempted to censor any mention of the genocide in Palestine and of the summit’s Big Tech sponsors. She was invited to give her full, uncensored talk at MozFest.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://aial.ie/blog/2025-ai-for-good-summit/">Abeba’s blog post on the UN censoring her talk on AI</a></li><li><a href="https://abebabirhane.com/">More about Abeba</a></li><li>More about <a href="https://aial.ie/">The AI Accountability Lab</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/100ba3e4/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Computer Says MozFest 2025</title>
      <itunes:episode>88</itunes:episode>
      <podcast:episode>88</podcast:episode>
      <itunes:title>Computer Says MozFest 2025</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">6eb8ca2d-96e0-4c71-a663-f47faeb25d01</guid>
      <link>https://share.transistor.fm/s/922463d3</link>
      <description>
        <![CDATA[<p>Mozilla Festival 2025. Barcelona. Three days of a bonanza of interesting people, ideas, and technology politics. These were our highlights!</p><p><strong>More like this: FAccT 2025 episodes </strong><a href="https://www.themaybe.org/podcast/after-the-facct-materiality-and-militarisation"><strong>one</strong></a><strong> and </strong><a href="https://www.themaybe.org/podcast/after-the-facct-labour-and-misrepresentation"><strong>two</strong></a></p><p>This is an extra special episode packed full of conversations and on-site impressions of the biggest MozFest we’ve had in years. This year Alix moderated three panels, ran an AMA, and even hosted a game show — and somehow also had time to record all of this, for your pleasure.</p><p><strong>Included in this episode:</strong></p><ul><li>A preview of Exposing and Reshaping the Global Footprint of Data Centers, with independent journalist <a href="https://pulitzercenter.org/people/pablo-jimenez-arandia">Pablo Jiménez Arandia</a>, <a href="https://www.lighthousereports.com/team/tessa-pang/">Tessa Pang</a> (impact editor for Lighthouse Reports), and <a href="https://www.mozillafoundation.org/en/what-we-do/grantmaking/fellowship/paz-pena/">Paz Peña</a> (Mozilla Fellow and founder of the <a href="https://terraforminglatam.net/">Latin American Institute of Terraforming</a>)</li><li>A conversation with <a href="https://www.stopspying.org/hana-memon-bio">Hana Memon</a>, developer at <a href="https://genzforchange.org/">Gen Z for Change</a></li><li>A conversation with creative technologist Malik Afegbua on his project <a href="https://elderseries.carrd.co/">The Elder Series</a></li><li><a href="https://www.mozillafoundation.org/en/meet-mozilla/nabiha-syed/">Nabiha Syed</a> and <a href="https://helenkingturvey.org/">Helen Turvey</a> reflect on how this MozFest went, and what they hope to see from the festival in the coming years</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><strong>Post Production by </strong><a href="https://www.podcasts.london/"><strong>Sarah Myles</strong></a><strong> | Pre Production by </strong><a href="https://georgiaiacovou.com/"><strong>Georgia Iacovou</strong></a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Mozilla Festival 2025. Barcelona. Three days of a bonanza of interesting people, ideas, and technology politics. These were our highlights!</p><p><strong>More like this: FAccT 2025 episodes </strong><a href="https://www.themaybe.org/podcast/after-the-facct-materiality-and-militarisation"><strong>one</strong></a><strong> and </strong><a href="https://www.themaybe.org/podcast/after-the-facct-labour-and-misrepresentation"><strong>two</strong></a></p><p>This is an extra special episode packed full of conversations and on-site impressions of the biggest MozFest we’ve had in years. This year Alix moderated three panels, ran an AMA, and even hosted a game show — and somehow also had time to record all of this, for your pleasure.</p><p><strong>Included in this episode:</strong></p><ul><li>A preview of Exposing and Reshaping the Global Footprint of Data Centers, with independent journalist <a href="https://pulitzercenter.org/people/pablo-jimenez-arandia">Pablo Jiménez Arandia</a>, <a href="https://www.lighthousereports.com/team/tessa-pang/">Tessa Pang</a> (impact editor for Lighthouse Reports), and <a href="https://www.mozillafoundation.org/en/what-we-do/grantmaking/fellowship/paz-pena/">Paz Peña</a> (Mozilla Fellow and founder of the <a href="https://terraforminglatam.net/">Latin American Institute of Terraforming</a>)</li><li>A conversation with <a href="https://www.stopspying.org/hana-memon-bio">Hana Memon</a>, developer at <a href="https://genzforchange.org/">Gen Z for Change</a></li><li>A conversation with creative technologist Malik Afegbua on his project <a href="https://elderseries.carrd.co/">The Elder Series</a></li><li><a href="https://www.mozillafoundation.org/en/meet-mozilla/nabiha-syed/">Nabiha Syed</a> and <a href="https://helenkingturvey.org/">Helen Turvey</a> reflect on how this MozFest went, and what they hope to see from the festival in the coming years</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><strong>Post Production by </strong><a href="https://www.podcasts.london/"><strong>Sarah Myles</strong></a><strong> | Pre Production by </strong><a href="https://georgiaiacovou.com/"><strong>Georgia Iacovou</strong></a></p>]]>
      </content:encoded>
      <pubDate>Fri, 12 Dec 2025 00:01:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/922463d3/5a4565f1.mp3" length="77428829" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/8-T9OJxe4U-4R7dFh82bsmP1I-VSGHLl5xN8UdJwQLc/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81NmVj/ZTRjZWE4OGEwZmRl/YzlmNjFhNTU1ZGU3/NWUzNy5wbmc.jpg"/>
      <itunes:duration>3224</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Mozilla Festival 2025. Barcelona. Three days of a bonanza of interesting people, ideas, and technology politics. These were our highlights!</p><p><strong>More like this: FAccT 2025 episodes </strong><a href="https://www.themaybe.org/podcast/after-the-facct-materiality-and-militarisation"><strong>one</strong></a><strong> and </strong><a href="https://www.themaybe.org/podcast/after-the-facct-labour-and-misrepresentation"><strong>two</strong></a></p><p>This is an extra special episode packed full of conversations and on-site impressions of the biggest MozFest we’ve had in years. This year Alix moderated three panels, ran an AMA, and even hosted a game show — and somehow also had time to record all of this, for your pleasure.</p><p><strong>Included in this episode:</strong></p><ul><li>A preview of Exposing and Reshaping the Global Footprint of Data Centers, with independent journalist <a href="https://pulitzercenter.org/people/pablo-jimenez-arandia">Pablo Jiménez Arandia</a>, <a href="https://www.lighthousereports.com/team/tessa-pang/">Tessa Pang</a> (impact editor for Lighthouse Reports), and <a href="https://www.mozillafoundation.org/en/what-we-do/grantmaking/fellowship/paz-pena/">Paz Peña</a> (Mozilla Fellow and founder of the <a href="https://terraforminglatam.net/">Latin American Institute of Terraforming</a>)</li><li>A conversation with <a href="https://www.stopspying.org/hana-memon-bio">Hana Memon</a>, developer at <a href="https://genzforchange.org/">Gen Z for Change</a></li><li>A conversation with creative technologist Malik Afegbua on his project <a href="https://elderseries.carrd.co/">The Elder Series</a></li><li><a href="https://www.mozillafoundation.org/en/meet-mozilla/nabiha-syed/">Nabiha Syed</a> and <a href="https://helenkingturvey.org/">Helen Turvey</a> reflect on how this MozFest went, and what they hope to see from the festival in the coming years</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><strong>Post Production by </strong><a href="https://www.podcasts.london/"><strong>Sarah Myles</strong></a><strong> | Pre Production by </strong><a href="https://georgiaiacovou.com/"><strong>Georgia Iacovou</strong></a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/922463d3/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Who Knows? Fact-Finding in a Failing State w/ HRDAG and Data &amp; Society</title>
      <itunes:episode>87</itunes:episode>
      <podcast:episode>87</podcast:episode>
      <itunes:title>Who Knows? Fact-Finding in a Failing State w/ HRDAG and Data &amp; Society</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">13e63ac6-1f34-469b-8956-094cfe039ac9</guid>
      <link>https://share.transistor.fm/s/6097a58a</link>
      <description>
        <![CDATA[<p>Everything is happening so fast. And a lot of it’s bad. What can research and science organizations do when issues are complex, fast-moving, and super important?</p><p><strong>More like this: </strong><a href="https://themaybe.org/podcast/who-knows-independent-researchers-in-a-platform-era-w-brandi-geurkink"><strong>Independent Researchers in a Platform Era w/ Brandi Geurkink</strong></a></p><p>Building knowledge is more important than ever in times like these. This week, we have three guests. Megan Price from the Human Rights Data Analysis Group (<a href="https://hrdag.org/">HRDAG</a>) shares how statistics and data science can be used to get justice. Janet Haven and Charlton McIlwain from Data &amp; Society explore the role research institutions can play in bridging research knowledge and policy prescription.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://hrdag.org/2013/04/10/hrdag-trial-jose-efrain-rios-montt/">HRDAG’s involvement in the trial of José Efraín Ríos Montt</a></li><li><a href="https://www.bbc.co.uk/news/world-latin-america-19635877">A profile of Guatemala and timeline of its conflict</a> — BBC (last updated in 2024)</li><li><a href="https://rss.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/j.1740-9713.2016.00960.x">To Protect and Serve?</a> — a study on predictive policing by William Isaac and Kristian Lum</li><li><a href="https://theappeal.org/setting-the-record-straight-on-predictive-policing-and-race-fe588b457ca2/">An article about the above study</a> — The Appeal</li><li><a href="https://hrdag.org/2025/10/21/hrdag-takes-a-stand-against-tyranny-in-the-united-states/">HRDAG’s stand against tyranny</a></li><li>More on <a href="https://datasociety.net/events/understanding-ai/">Understanding AI</a> — Data &amp; Society’s event series with the New York Public Library</li><li><a href="https://datasociety.net/people/haven-janet/">About Janet Haven</a>, Executive Director of Data &amp; Society</li><li><a href="https://datasociety.net/people/mcilwain-charlton-d/">About Charlton McIlwain</a>, board president of Data &amp; Society</li><li><a href="https://nissenbaum.tech.cornell.edu/papers/Bias%20in%20Computer%20Systems.pdf">Bias in Computer Systems</a> by Helen Nissenbaum</li><li><a href="https://www.criticalracedigitalstudies.com/">Center for Critical Race and Digital Studies</a></li><li>If you want to hear more about the history of D&amp;S, the full conversation is up on YouTube (link to come).</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Everything is happening so fast. And a lot of it’s bad. What can research and science organizations do when issues are complex, fast-moving, and super important?</p><p><strong>More like this: </strong><a href="https://themaybe.org/podcast/who-knows-independent-researchers-in-a-platform-era-w-brandi-geurkink"><strong>Independent Researchers in a Platform Era w/ Brandi Geurkink</strong></a></p><p>Building knowledge is more important than ever in times like these. This week, we have three guests. Megan Price from the Human Rights Data Analysis Group (<a href="https://hrdag.org/">HRDAG</a>) shares how statistics and data science can be used to get justice. Janet Haven and Charlton McIlwain from Data &amp; Society explore the role research institutions can play in bridging research knowledge and policy prescription.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://hrdag.org/2013/04/10/hrdag-trial-jose-efrain-rios-montt/">HRDAG’s involvement in the trial of José Efraín Ríos Montt</a></li><li><a href="https://www.bbc.co.uk/news/world-latin-america-19635877">A profile of Guatemala and timeline of its conflict</a> — BBC (last updated in 2024)</li><li><a href="https://rss.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/j.1740-9713.2016.00960.x">To Protect and Serve?</a> — a study on predictive policing by William Isaac and Kristian Lum</li><li><a href="https://theappeal.org/setting-the-record-straight-on-predictive-policing-and-race-fe588b457ca2/">An article about the above study</a> — The Appeal</li><li><a href="https://hrdag.org/2025/10/21/hrdag-takes-a-stand-against-tyranny-in-the-united-states/">HRDAG’s stand against tyranny</a></li><li>More on <a href="https://datasociety.net/events/understanding-ai/">Understanding AI</a> — Data &amp; Society’s event series with the New York Public Library</li><li><a href="https://datasociety.net/people/haven-janet/">About Janet Haven</a>, Executive Director of Data &amp; Society</li><li><a href="https://datasociety.net/people/mcilwain-charlton-d/">About Charlton McIlwain</a>, board president of Data &amp; Society</li><li><a href="https://nissenbaum.tech.cornell.edu/papers/Bias%20in%20Computer%20Systems.pdf">Bias in Computer Systems</a> by Helen Nissenbaum</li><li><a href="https://www.criticalracedigitalstudies.com/">Center for Critical Race and Digital Studies</a></li><li>If you want to hear more about the history of D&amp;S, the full conversation is up on YouTube (link to come).</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </content:encoded>
      <pubDate>Fri, 05 Dec 2025 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/6097a58a/2c56b428.mp3" length="85422071" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/M3SAXdLA24G5NPU4n87e8GzpzBptCFyuxvSlFGkw8Q4/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS85NmZh/N2I0MDI4ZWJlODEz/Njc5NDkyNzgxM2Iy/YTcxNS5wbmc.jpg"/>
      <itunes:duration>3557</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Everything is happening so fast. And a lot of it’s bad. What can research and science organizations do when issues are complex, fast-moving, and super important?</p><p><strong>More like this: </strong><a href="https://themaybe.org/podcast/who-knows-independent-researchers-in-a-platform-era-w-brandi-geurkink"><strong>Independent Researchers in a Platform Era w/ Brandi Geurkink</strong></a></p><p>Building knowledge is more important than ever in times like these. This week, we have three guests. Megan Price from the Human Rights Data Analysis Group (<a href="https://hrdag.org/">HRDAG</a>) shares how statistics and data science can be used to get justice. Janet Haven and Charlton McIlwain from Data &amp; Society explore the role research institutions can play in bridging research knowledge and policy prescription.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://hrdag.org/2013/04/10/hrdag-trial-jose-efrain-rios-montt/">HRDAG’s involvement in the trial of José Efraín Ríos Montt</a></li><li><a href="https://www.bbc.co.uk/news/world-latin-america-19635877">A profile of Guatemala and timeline of its conflict</a> — BBC (last updated in 2024)</li><li><a href="https://rss.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/j.1740-9713.2016.00960.x">To Protect and Serve?</a> — a study on predictive policing by William Isaac and Kristian Lum</li><li><a href="https://theappeal.org/setting-the-record-straight-on-predictive-policing-and-race-fe588b457ca2/">An article about the above study</a> — The Appeal</li><li><a href="https://hrdag.org/2025/10/21/hrdag-takes-a-stand-against-tyranny-in-the-united-states/">HRDAG’s stand against tyranny</a></li><li>More on <a href="https://datasociety.net/events/understanding-ai/">Understanding AI</a> — Data &amp; Society’s event series with the New York Public Library</li><li><a href="https://datasociety.net/people/haven-janet/">About Janet Haven</a>, Executive Director of Data &amp; Society</li><li><a href="https://datasociety.net/people/mcilwain-charlton-d/">About Charlton McIlwain</a>, board president of Data &amp; Society</li><li><a href="https://nissenbaum.tech.cornell.edu/papers/Bias%20in%20Computer%20Systems.pdf">Bias in Computer Systems</a> by Helen Nissenbaum</li><li><a href="https://www.criticalracedigitalstudies.com/">Center for Critical Race and Digital Studies</a></li><li>If you want to hear more about the history of D&amp;S, the full conversation is up on YouTube (link to come).</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a> | Pre Production by <a href="https://georgiaiacovou.com/">Georgia Iacovou</a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/6097a58a/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Who Knows? Independent Researchers in a Platform Era w/ Brandi Geurkink</title>
      <itunes:episode>86</itunes:episode>
      <podcast:episode>86</podcast:episode>
      <itunes:title>Who Knows? Independent Researchers in a Platform Era w/ Brandi Geurkink</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2ca8fcc2-d111-4c9c-84b9-76a8c1879770</guid>
      <link>https://share.transistor.fm/s/0f8946c4</link>
      <description>
        <![CDATA[<p>Imagine doing tech research… but from outside the tech industry? What an idea…</p><p><strong>More like this: </strong><a href="https://themaybe.org/podcast/nodestar-turning-networks-into-knowledge-w-andrew-trask"><strong>Nodestar: Turning Networks into Knowledge w/ Andrew Trask</strong></a></p><p>So much of tech research happens within the tech industry itself, because it requires data access, funding, and compute. But what the tech industry has in resources, it lacks in independence, scruples, and a public interest imperative. Alix is joined by Brandi Geurkink from the Coalition for Independent Technology Research to discuss her work at a time when platforms have never been so opaque, and funding has never been so sparse.</p><p><strong>Further Reading &amp; Resources:</strong></p><ul><li><a href="https://independenttechresearch.org/about-us/">More about Brandi and the Coalition</a></li><li><a href="https://dl.acm.org/doi/pdf/10.1145/3487552.3487859">Understanding Engagement with U.S. (Mis)Information News Sources on Facebook</a> by Laura Edelson &amp; Damon McCoy</li><li><a href="https://engineering.nyu.edu/student/laura-edelson">More on Laura Edelson</a></li><li><a href="https://damonmccoy.com/">More on Damon McCoy</a></li><li><a href="https://www.politico.com/news/2025/09/03/in-congress-uks-nigel-farage-attacks-his-own-government-on-tech-rules-00541350">Jim Jordan bringing in Nigel Farage from the UK to legitimise his attacks on EU tech regulations</a> — Politico</li><li><a href="https://news.bgov.com/bloomberg-government-news/cruz-bill-takes-aim-at-alleged-biden-censorship-of-social-media">Ted Cruz on preventing jawboning &amp; government censorship of social media</a> — Bloomberg</li><li><a href="https://www.theguardian.com/technology/2024/mar/25/elon-musk-hate-speech-lawsuit">Judge dismisses ‘vapid’ Elon Musk lawsuit against group that cataloged racist content on X</a> — The Guardian</li><li><a href="https://counterhate.com/blog/elon-musk-vs-ccdh-nonprofit-wins-dismissal-of-baseless-and-intimidatory-lawsuit/">See the CCDH’s blog post on getting the case thrown out</a></li><li><a href="https://www.hardresetmedia.com/p/platforms-are-blocking-independent">Platforms are blocking independent researchers from investigating deepfakes</a> by Ariella Steinhorn</li></ul><p><strong>Disclosure:</strong> This guest is a PR client of our consultancy team. As always, the conversation reflects our genuine interest in their work and ideas.</p><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Imagine doing tech research… but from outside the tech industry? What an idea…</p><p><strong>More like this: </strong><a href="https://themaybe.org/podcast/nodestar-turning-networks-into-knowledge-w-andrew-trask"><strong>Nodestar: Turning Networks into Knowledge w/ Andrew Trask</strong></a></p><p>So much of tech research happens within the tech industry itself, because it requires data access, funding, and compute. But what the tech industry has in resources, it lacks in independence, scruples, and a public interest imperative. Alix is joined by Brandi Geurkink from the Coalition for Independent Technology Research to discuss her work at a time when platforms have never been so opaque, and funding has never been so sparse.</p><p><strong>Further Reading &amp; Resources:</strong></p><ul><li><a href="https://independenttechresearch.org/about-us/">More about Brandi and the Coalition</a></li><li><a href="https://dl.acm.org/doi/pdf/10.1145/3487552.3487859">Understanding Engagement with U.S. (Mis)Information News Sources on Facebook</a> by Laura Edelson &amp; Damon McCoy</li><li><a href="https://engineering.nyu.edu/student/laura-edelson">More on Laura Edelson</a></li><li><a href="https://damonmccoy.com/">More on Damon McCoy</a></li><li><a href="https://www.politico.com/news/2025/09/03/in-congress-uks-nigel-farage-attacks-his-own-government-on-tech-rules-00541350">Jim Jordan bringing in Nigel Farage from the UK to legitimise his attacks on EU tech regulations</a> — Politico</li><li><a href="https://news.bgov.com/bloomberg-government-news/cruz-bill-takes-aim-at-alleged-biden-censorship-of-social-media">Ted Cruz on preventing jawboning &amp; government censorship of social media</a> — Bloomberg</li><li><a href="https://www.theguardian.com/technology/2024/mar/25/elon-musk-hate-speech-lawsuit">Judge dismisses ‘vapid’ Elon Musk lawsuit against group that cataloged racist content on X</a> — The Guardian</li><li><a href="https://counterhate.com/blog/elon-musk-vs-ccdh-nonprofit-wins-dismissal-of-baseless-and-intimidatory-lawsuit/">See the CCDH’s blog post on getting the case thrown out</a></li><li><a href="https://www.hardresetmedia.com/p/platforms-are-blocking-independent">Platforms are blocking independent researchers from investigating deepfakes</a> by Ariella Steinhorn</li></ul><p><strong>Disclosure:</strong> This guest is a PR client of our consultancy team. As always, the conversation reflects our genuine interest in their work and ideas.</p><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </content:encoded>
      <pubDate>Fri, 28 Nov 2025 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/0f8946c4/4af1d153.mp3" length="69541675" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/bNbAmOmdSamZfFGBYAd_OEDozUjEVFp0x_MnMmaDx_k/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS83ZWRm/NWYyMDY5NjY2Y2I1/MzVhNDc0NzRjMzg3/OTJmYy5wbmc.jpg"/>
      <itunes:duration>2895</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Imagine doing tech research… but from outside the tech industry? What an idea…</p><p><strong>More like this: </strong><a href="https://themaybe.org/podcast/nodestar-turning-networks-into-knowledge-w-andrew-trask"><strong>Nodestar: Turning Networks into Knowledge w/ Andrew Trask</strong></a></p><p>So much of tech research happens within the tech industry itself, because it requires data access, funding, and compute. But what the tech industry has in resources, it lacks in independence, scruples, and a public interest imperative. Alix is joined by Brandi Geurkink from the Coalition for Independent Technology Research to discuss her work at a time when platforms have never been so opaque, and funding has never been so sparse.</p><p><strong>Further Reading &amp; Resources:</strong></p><ul><li><a href="https://independenttechresearch.org/about-us/">More about Brandi and the Coalition</a></li><li><a href="https://dl.acm.org/doi/pdf/10.1145/3487552.3487859">Understanding Engagement with U.S. (Mis)Information News Sources on Facebook</a> by Laura Edelson &amp; Damon McCoy</li><li><a href="https://engineering.nyu.edu/student/laura-edelson">More on Laura Edelson</a></li><li><a href="https://damonmccoy.com/">More on Damon McCoy</a></li><li><a href="https://www.politico.com/news/2025/09/03/in-congress-uks-nigel-farage-attacks-his-own-government-on-tech-rules-00541350">Jim Jordan bringing in Nigel Farage from the UK to legitimise his attacks on EU tech regulations</a> — Politico</li><li><a href="https://news.bgov.com/bloomberg-government-news/cruz-bill-takes-aim-at-alleged-biden-censorship-of-social-media">Ted Cruz on preventing jawboning &amp; government censorship of social media</a> — Bloomberg</li><li><a href="https://www.theguardian.com/technology/2024/mar/25/elon-musk-hate-speech-lawsuit">Judge dismisses ‘vapid’ Elon Musk lawsuit against group that cataloged racist content on X</a> — The Guardian</li><li><a href="https://counterhate.com/blog/elon-musk-vs-ccdh-nonprofit-wins-dismissal-of-baseless-and-intimidatory-lawsuit/">See the CCDH’s blog post on getting the case thrown out</a></li><li><a href="https://www.hardresetmedia.com/p/platforms-are-blocking-independent">Platforms are blocking independent researchers from investigating deepfakes</a> by Ariella Steinhorn</li></ul><p><strong>Disclosure:</strong> This guest is a PR client of our consultancy team. As always, the conversation reflects our genuine interest in their work and ideas.</p><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/0f8946c4/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Tres Publique: Algorithms in the French Welfare State w/ Soizic Pénicaud</title>
      <itunes:episode>85</itunes:episode>
      <podcast:episode>85</podcast:episode>
      <itunes:title>Tres Publique: Algorithms in the French Welfare State w/ Soizic Pénicaud</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">91a8df7a-dc02-439a-8b5c-6062234e7cb9</guid>
      <link>https://share.transistor.fm/s/e13a9cb3</link>
      <description>
        <![CDATA[<p>Governments around the world are using predictive systems to manage engagement with even the most vulnerable. Results are mixed.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/algorithmically-cutting-benefits-w-kevin-de-liban"><strong>Algorithmically Cutting Benefits w/ Kevin De Liban</strong></a><strong><br></strong><br></p><p>Luckily, people like Soizic Pénicaud are working to prevent the modern welfare state from becoming a web of punishment for the most marginalised. Soizic has worked on algorithmic transparency both inside and outside of government, and this week she shares her journey: from incrementally improving these systems (boring, ineffective, hard) to escaping the slow pace of government and looking at the bigger picture of algorithmic governance, and how it can build better public benefit in France (fun, transformative, and a good challenge).</p><p>Soizic is working to shift political debates about opaque decision-making algorithms to focus on what they’re really about: the marginalised communities whose lives are most affected by these systems.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://labo.societenumerique.gouv.fr/en/articles/lobservatoire-des-algorithmes-publics-vers-plus-de-transparence-de-laction-publique/">The Observatory of Public Algorithms</a> and their <a href="https://odap.fr/en/">Inventory</a></li><li><a href="https://www.amnesty.org/en/latest/news/2024/10/france-discriminatory-algorithm-used-by-the-social-security-agency-must-be-stopped/">The ongoing court case against the French welfare agency's risk-scoring algorithm</a></li><li><a href="http://soizicpenicaud.com/">More about Soizic</a></li><li>More on the <a href="https://www.opengovpartnership.org/members/france/commitments/FR0035/">Transparency of Public Algorithms roadmap</a> from <a href="https://www.opengovpartnership.org/people/etalab-task-force/">Etalab</a> — the task force Soizic was part of</li><li><a href="https://www.laquadrature.net/en/">La Quadrature du Net</a></li><li><a href="https://www.lighthousereports.com/investigation/frances-digital-inquisition/">France’s Digital Inquisition</a> — co-authored by Soizic in collaboration with Lighthouse Reports, 2023</li><li><a href="https://www.theguardian.com/technology/2025/jan/27/ai-prototypes-uk-welfare-system-dropped">AI prototypes for UK welfare system dropped as officials lament ‘false starts’</a> — The Guardian, Jan 2025</li><li><a href="https://datajusticelab.org/projects/automating-public-services-learning-from-cancelled-systems/">Learning from Cancelled Systems</a> by Data Justice Lab</li><li><a href="https://dl.acm.org/doi/10.1145/3630106.3658910">The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment</a> — by Nari Johnson et al., featured in FAccT 2024</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we host live shows and do other work that you will definitely be interested in!</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Governments around the world are using predictive systems to manage engagement with even the most vulnerable. Results are mixed.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/algorithmically-cutting-benefits-w-kevin-de-liban"><strong>Algorithmically Cutting Benefits w/ Kevin De Liban</strong></a><strong><br></strong><br></p><p>Luckily, people like Soizic Pénicaud are working to prevent the modern welfare state from becoming a web of punishment for the most marginalised. Soizic has worked on algorithmic transparency both inside and outside of government, and this week she shares her journey: from incrementally improving these systems (boring, ineffective, hard) to escaping the slow pace of government and looking at the bigger picture of algorithmic governance, and how it can build better public benefit in France (fun, transformative, and a good challenge).</p><p>Soizic is working to shift political debates about opaque decision-making algorithms to focus on what they’re really about: the marginalised communities whose lives are most affected by these systems.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://labo.societenumerique.gouv.fr/en/articles/lobservatoire-des-algorithmes-publics-vers-plus-de-transparence-de-laction-publique/">The Observatory of Public Algorithms</a> and their <a href="https://odap.fr/en/">Inventory</a></li><li><a href="https://www.amnesty.org/en/latest/news/2024/10/france-discriminatory-algorithm-used-by-the-social-security-agency-must-be-stopped/">The ongoing court case against the French welfare agency's risk-scoring algorithm</a></li><li><a href="http://soizicpenicaud.com/">More about Soizic</a></li><li>More on the <a href="https://www.opengovpartnership.org/members/france/commitments/FR0035/">Transparency of Public Algorithms roadmap</a> from <a href="https://www.opengovpartnership.org/people/etalab-task-force/">Etalab</a> — the task force Soizic was part of</li><li><a href="https://www.laquadrature.net/en/">La Quadrature du Net</a></li><li><a href="https://www.lighthousereports.com/investigation/frances-digital-inquisition/">France’s Digital Inquisition</a> — co-authored by Soizic in collaboration with Lighthouse Reports, 2023</li><li><a href="https://www.theguardian.com/technology/2025/jan/27/ai-prototypes-uk-welfare-system-dropped">AI prototypes for UK welfare system dropped as officials lament ‘false starts’</a> — The Guardian, Jan 2025</li><li><a href="https://datajusticelab.org/projects/automating-public-services-learning-from-cancelled-systems/">Learning from Cancelled Systems</a> by Data Justice Lab</li><li><a href="https://dl.acm.org/doi/10.1145/3630106.3658910">The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment</a> — by Nari Johnson et al., featured in FAccT 2024</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we host live shows and do other work that you will definitely be interested in!</p>]]>
      </content:encoded>
      <pubDate>Fri, 21 Nov 2025 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/e13a9cb3/53e93a83.mp3" length="75485653" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/XumHXiar25jcKaCPlTTRsNgVB52aVch8PS4nrR5iq64/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS85Zjgz/NDlmNmY1YTRiNTM3/NjJiZWFlMzE0YjFh/MDAxZS5wbmc.jpg"/>
      <itunes:duration>3142</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Governments around the world are using predictive systems to manage engagement with even the most vulnerable. Results are mixed.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/algorithmically-cutting-benefits-w-kevin-de-liban"><strong>Algorithmically Cutting Benefits w/ Kevin De Liban</strong></a><strong><br></strong><br></p><p>Luckily, people like Soizic Pénicaud are working to prevent the modern welfare state from becoming a web of punishment for the most marginalised. Soizic has worked on algorithmic transparency both inside and outside of government, and this week she shares her journey: from incrementally improving these systems (boring, ineffective, hard) to escaping the slow pace of government and looking at the bigger picture of algorithmic governance, and how it can build better public benefit in France (fun, transformative, and a good challenge).</p><p>Soizic is working to shift political debates about opaque decision-making algorithms to focus on what they’re really about: the marginalised communities whose lives are most affected by these systems.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://labo.societenumerique.gouv.fr/en/articles/lobservatoire-des-algorithmes-publics-vers-plus-de-transparence-de-laction-publique/">The Observatory of Public Algorithms</a> and their <a href="https://odap.fr/en/">Inventory</a></li><li><a href="https://www.amnesty.org/en/latest/news/2024/10/france-discriminatory-algorithm-used-by-the-social-security-agency-must-be-stopped/">The ongoing court case against the French welfare agency's risk-scoring algorithm</a></li><li><a href="http://soizicpenicaud.com/">More about Soizic</a></li><li>More on the <a href="https://www.opengovpartnership.org/members/france/commitments/FR0035/">Transparency of Public Algorithms roadmap</a> from <a href="https://www.opengovpartnership.org/people/etalab-task-force/">Etalab</a> — the task force Soizic was part of</li><li><a href="https://www.laquadrature.net/en/">La Quadrature du Net</a></li><li><a href="https://www.lighthousereports.com/investigation/frances-digital-inquisition/">France’s Digital Inquisition</a> — co-authored by Soizic in collaboration with Lighthouse Reports, 2023</li><li><a href="https://www.theguardian.com/technology/2025/jan/27/ai-prototypes-uk-welfare-system-dropped">AI prototypes for UK welfare system dropped as officials lament ‘false starts’</a> — The Guardian, Jan 2025</li><li><a href="https://datajusticelab.org/projects/automating-public-services-learning-from-cancelled-systems/">Learning from Cancelled Systems</a> by Data Justice Lab</li><li><a href="https://dl.acm.org/doi/10.1145/3630106.3658910">The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment</a> — by Nari Johnson et al., featured in FAccT 2024</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we host live shows and do other work that you will definitely be interested in!</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/e13a9cb3/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Straight to Video: From Rodney King to Sora w/ Sam Gregory</title>
      <itunes:episode>84</itunes:episode>
      <podcast:episode>84</podcast:episode>
      <itunes:title>Straight to Video: From Rodney King to Sora w/ Sam Gregory</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d4441cdc-b09b-4179-afbe-0391a407ebfa</guid>
      <link>https://share.transistor.fm/s/d96db891</link>
      <description>
        <![CDATA[<p>Seeing is believing. Right? But what happens when we lose trust in the media put in front of us?</p><p><strong>More like this: </strong><a href="https://themaybe.org/podcast/the-toxic-relationship-between-ai-and-journalism-w-nic-dawes"><strong>The Toxic Relationship Between AI and Journalism w/ Nic Dawes</strong></a><strong><br></strong><br></p><p>We talked to Sam Gregory, a global expert and leading voice on this issue for the past 20 years, to get his take. We started way back in 1991, when Rodney King was beaten by four police officers in Los Angeles. Police brutality was (and is) commonplace, but something different happened in this case: someone caught it on video with a camcorder. It changed our understanding of the role video could play in accountability. And in the past 30 years, we’ve gone from using video for evidence and advocacy to AI slop threatening to seismically reshape our shared realities.</p><p>Now apps like Sora provide impersonation-as-entertainment. How did we get here?</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.npr.org/2017/04/26/524744989/when-la-erupted-in-anger-a-look-back-at-the-rodney-king-riots">More on the riots that followed the acquittal of the officers who beat Rodney King</a> — NPR</li><li>More about <a href="https://www.witness.org/portfolio_page/sam-gregory/">Sam and WITNESS</a></li><li><a href="https://guardianproject.info/apps/org.witness.sscphase1/">ObscuraCam</a> — a privacy-preserving camera app from WITNESS and The Guardian Project</li><li><a href="https://c2pa.org/">C2PA: the Coalition for Content Provenance and Authenticity</a></li><li><a href="https://www.gen-ai.witness.org/">Deepfakes Rapid Response Force</a> by WITNESS</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Seeing is believing. Right? But what happens when we lose trust in the media put in front of us?</p><p><strong>More like this: </strong><a href="https://themaybe.org/podcast/the-toxic-relationship-between-ai-and-journalism-w-nic-dawes"><strong>The Toxic Relationship Between AI and Journalism w/ Nic Dawes</strong></a><strong><br></strong><br></p><p>We talked to Sam Gregory, a global expert and leading voice on this issue for the past 20 years, to get his take. We started way back in 1991, when Rodney King was beaten by four police officers in Los Angeles. Police brutality was (and is) commonplace, but something different happened in this case: someone caught it on video with a camcorder. It changed our understanding of the role video could play in accountability. And in the past 30 years, we’ve gone from using video for evidence and advocacy to AI slop threatening to seismically reshape our shared realities.</p><p>Now apps like Sora provide impersonation-as-entertainment. How did we get here?</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.npr.org/2017/04/26/524744989/when-la-erupted-in-anger-a-look-back-at-the-rodney-king-riots">More on the riots that followed the acquittal of the officers who beat Rodney King</a> — NPR</li><li>More about <a href="https://www.witness.org/portfolio_page/sam-gregory/">Sam and WITNESS</a></li><li><a href="https://guardianproject.info/apps/org.witness.sscphase1/">ObscuraCam</a> — a privacy-preserving camera app from WITNESS and The Guardian Project</li><li><a href="https://c2pa.org/">C2PA: the Coalition for Content Provenance and Authenticity</a></li><li><a href="https://www.gen-ai.witness.org/">Deepfakes Rapid Response Force</a> by WITNESS</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a></p>]]>
      </content:encoded>
      <pubDate>Fri, 14 Nov 2025 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/d96db891/0ea72f57.mp3" length="86825215" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/eeWs9GnoHh-AzTK9urHnG4p3PjasR-vlIamjzKy1W60/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9mNDM3/OWFmM2M2YTk5ZjVl/MDdjY2IzNjI0OWY2/OWY5MC5wbmc.jpg"/>
      <itunes:duration>3615</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Seeing is believing. Right? But what happens when we lose trust in the media put in front of us?</p><p><strong>More like this: </strong><a href="https://themaybe.org/podcast/the-toxic-relationship-between-ai-and-journalism-w-nic-dawes"><strong>The Toxic Relationship Between AI and Journalism w/ Nic Dawes</strong></a><strong><br></strong><br></p><p>We talked to Sam Gregory, a global expert and leading voice on this issue for the past 20 years, to get his take. We started way back in 1991, when Rodney King was beaten by four police officers in Los Angeles. Police brutality was (and is) commonplace, but something different happened in this case: someone caught it on video with a camcorder. It changed our understanding of the role video could play in accountability. And in the past 30 years, we’ve gone from using video for evidence and advocacy to AI slop threatening to seismically reshape our shared realities.</p><p>Now apps like Sora provide impersonation-as-entertainment. How did we get here?</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.npr.org/2017/04/26/524744989/when-la-erupted-in-anger-a-look-back-at-the-rodney-king-riots">More on the riots that followed the acquittal of the officers who beat Rodney King</a> — NPR</li><li>More about <a href="https://www.witness.org/portfolio_page/sam-gregory/">Sam and WITNESS</a></li><li><a href="https://guardianproject.info/apps/org.witness.sscphase1/">ObscuraCam</a> — a privacy-preserving camera app from WITNESS and The Guardian Project</li><li><a href="https://c2pa.org/">C2PA: the Coalition for Content Provenance and Authenticity</a></li><li><a href="https://www.gen-ai.witness.org/">Deepfakes Rapid Response Force</a> by WITNESS</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p>Post Production by <a href="https://www.podcasts.london/">Sarah Myles</a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/d96db891/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>The Toxic Relationship Between AI &amp; Journalism w/ Nic Dawes</title>
      <itunes:episode>83</itunes:episode>
      <podcast:episode>83</podcast:episode>
      <itunes:title>The Toxic Relationship Between AI &amp; Journalism w/ Nic Dawes</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">cd7eb2fb-8729-4ce2-9f07-54ab1c3b44ee</guid>
      <link>https://share.transistor.fm/s/d2c24332</link>
      <description>
        <![CDATA[<p>What happens when AI models try to fill the gaping hole in the media landscape where journalists should be?</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/short-musk-reanimating-apartheid-w-nic-dawes"><strong>Reanimating Apartheid w/ Nic Dawes</strong></a><strong><br></strong><br></p><p>This week Alix is joined by Nic Dawes, who until very recently ran the non-profit newsroom <a href="https://www.thecity.nyc/">The City</a>. In this conversation we explore journalism’s newfound toxic relationship with AI and big tech: can journalists meaningfully use AI in their work? If a model summarises a few documents, does that add a new layer of efficiency, or inadvertently oversimplify? And what can we learn from big tech positioning itself as a helpful friend to journalism during the Search era?</p><p>Beyond just the accurate relaying of facts, journalistic organisations also represent an entire backlog of valuable training data for AI companies. If you don’t have the same resources as the NYT, suing for copyright infringement isn’t an option — so what then? Nic says we have to break out of the false binary of ‘if you can’t beat them, join them!’</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.npr.org/2025/03/26/nx-s1-5288157/new-york-times-openai-copyright-case-goes-forward">Judge allows ‘New York Times’ copyright case against OpenAI to go forward</a> — NPR</li><li><a href="https://reutersinstitute.politics.ox.ac.uk/generative-ai-and-news-report-2025-how-people-think-about-ais-role-journalism-and-society">Generative AI and news report 2025: How people think about AI’s role in journalism and society</a> — Reuters Institute</li><li>An example of The City’s investigative reporting: <a href="https://www.thecity.nyc/2022/02/23/when-private-equity-came-knocking-these-bronx-renters-were-given-two-options-buy-or-get-out/">private equity firms buying up property in the Bronx</a> — 2022</li><li><a href="https://shorensteincenter.org/resource/intimacy-dividend-ai-might-transform-news-media-consumption/">The Intimacy Dividend</a> — Shuwei Fang</li><li>Sam Altman on Twitter <a href="https://x.com/sama/status/1978129344598827128">announcing</a> that OpenAI has improved ChatGPT to be mindful of mental health effects — “We realize this made it less useful/enjoyable to many users who had no mental health problems, but…”</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>What happens when AI models try to fill the gaping hole in the media landscape where journalists should be?</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/short-musk-reanimating-apartheid-w-nic-dawes"><strong>Reanimating Apartheid w/ Nic Dawes</strong></a><strong><br></strong><br></p><p>This week Alix is joined by Nic Dawes, who until very recently ran the non-profit newsroom <a href="https://www.thecity.nyc/">The City</a>. In this conversation we explore journalism’s newfound toxic relationship with AI and big tech: can journalists meaningfully use AI in their work? If a model summarises a few documents, does that add a new layer of efficiency, or inadvertently oversimplify? And what can we learn from big tech positioning itself as a helpful friend to journalism during the Search era?</p><p>Beyond just the accurate relaying of facts, journalistic organisations also represent an entire backlog of valuable training data for AI companies. If you don’t have the same resources as the NYT, suing for copyright infringement isn’t an option — so what then? Nic says we have to break out of the false binary of ‘if you can’t beat them, join them!’</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.npr.org/2025/03/26/nx-s1-5288157/new-york-times-openai-copyright-case-goes-forward">Judge allows ‘New York Times’ copyright case against OpenAI to go forward</a> — NPR</li><li><a href="https://reutersinstitute.politics.ox.ac.uk/generative-ai-and-news-report-2025-how-people-think-about-ais-role-journalism-and-society">Generative AI and news report 2025: How people think about AI’s role in journalism and society</a> — Reuters Institute</li><li>An example of The City’s investigative reporting: <a href="https://www.thecity.nyc/2022/02/23/when-private-equity-came-knocking-these-bronx-renters-were-given-two-options-buy-or-get-out/">private equity firms buying up property in the Bronx</a> — 2022</li><li><a href="https://shorensteincenter.org/resource/intimacy-dividend-ai-might-transform-news-media-consumption/">The Intimacy Dividend</a> — Shuwei Fang</li><li>Sam Altman on Twitter <a href="https://x.com/sama/status/1978129344598827128">announcing</a> that OpenAI has improved ChatGPT to be mindful of mental health effects — “We realize this made it less useful/enjoyable to many users who had no mental health problems, but…”</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </content:encoded>
      <pubDate>Fri, 07 Nov 2025 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/d2c24332/ef666378.mp3" length="60232205" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/Om90ZINRtyvmVgoFELjrH0kfE6Y72Lqv2Gh7J7bNEJw/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iYmIy/ZTdjNjRhNTAwOGQ3/YWRmMDkxZWVkNmFk/NDJjNC5wbmc.jpg"/>
      <itunes:duration>2507</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>What happens when AI models try to fill the gaping hole in the media landscape where journalists should be?</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/short-musk-reanimating-apartheid-w-nic-dawes"><strong>Reanimating Apartheid w/ Nic Dawes</strong></a><strong><br></strong><br></p><p>This week Alix is joined by Nic Dawes, who until very recently ran the non-profit newsroom <a href="https://www.thecity.nyc/">The City</a>. In this conversation we explore journalism’s newfound toxic relationship with AI and big tech: can journalists meaningfully use AI in their work? If a model summarises a few documents, does that add a new layer of efficiency, or inadvertently oversimplify? And what can we learn from big tech positioning itself as a helpful friend to journalism during the Search era?</p><p>Beyond just the accurate relaying of facts, journalistic organisations also represent an entire backlog of valuable training data for AI companies. If you don’t have the same resources as the NYT, suing for copyright infringement isn’t an option — so what then? Nic says we have to break out of the false binary of ‘if you can’t beat them, join them!’</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.npr.org/2025/03/26/nx-s1-5288157/new-york-times-openai-copyright-case-goes-forward">Judge allows ‘New York Times’ copyright case against OpenAI to go forward</a> — NPR</li><li><a href="https://reutersinstitute.politics.ox.ac.uk/generative-ai-and-news-report-2025-how-people-think-about-ais-role-journalism-and-society">Generative AI and news report 2025: How people think about AI’s role in journalism and society</a> — Reuters Institute</li><li>An example of The City’s investigative reporting: <a href="https://www.thecity.nyc/2022/02/23/when-private-equity-came-knocking-these-bronx-renters-were-given-two-options-buy-or-get-out/">private equity firms buying up property in the Bronx</a> — 2022</li><li><a href="https://shorensteincenter.org/resource/intimacy-dividend-ai-might-transform-news-media-consumption/">The Intimacy Dividend</a> — Shuwei Fang</li><li>Sam Altman on Twitter <a href="https://x.com/sama/status/1978129344598827128">announcing</a> that OpenAI has improved ChatGPT to be mindful of mental health effects — “We realize this made it less useful/enjoyable to many users who had no mental health problems, but…”</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/d2c24332/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Unlearning in the AI Era w/ Nabiha Syed at Mozilla Foundation</title>
      <itunes:episode>82</itunes:episode>
      <podcast:episode>82</podcast:episode>
      <itunes:title>Unlearning in the AI Era w/ Nabiha Syed at Mozilla Foundation</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">abb5766b-7f91-4d9a-bee7-60231ee2f14e</guid>
      <link>https://share.transistor.fm/s/852c8289</link>
      <description>
        <![CDATA[<p>Mozilla Foundation wants to chart a new path in the AI era. But what is its role now, and how can it help reshape the impacts and opportunities of technology for… everyone?</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/defying-datafication-w-dr-abeba-birhane-plus-paris-ai-action-summit"><strong>Defying Datafication w/ Abeba Birhane</strong></a><strong><br></strong><br></p><p>Alix sat down with Nabiha Syed to chat through her first year as the new leader of Mozilla Foundation. How does she think about strategy in this moment? What role does she want the foundation to play? And crucially, how is she stewarding a community of human-centered technology builders in a time of hyper-scale and unchecked speculation?</p><p>As Nabiha says, “restraint is a design principle too”.</p><p><strong>Plug: We’ll be at MozFest this year broadcasting live and connecting with all kinds of folks. If you’re feeling the FOMO, be on the lookout for episodes we produce about our time there.<br></strong><br></p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://youtu.be/X0nYRIJs-wM">Watch this episode on YouTube</a></li><li><a href="https://www.mozillafoundation.org/en/creative-futures-imaginative-intelligences/">Imaginative Intelligences</a> — a programme of artist assemblies run by Mozilla Foundation</li><li><a href="https://www.mozillafoundation.org/en/nothing-personal/">Nothing Personal</a> — a new counterculture editorial platform from the Mozilla Foundation</li><li>More about <a href="https://www.mozillafestival.org/en/">MozFest</a></li><li>Nabiha on the <a href="https://www.youtube.com/live/9fdArtIvFYw?si=dtkzuIvUiiOPUyGs">Computer Says Maybe live show</a> at the 2025 AI Action Summit</li><li><a href="https://www.theregister.com/2025/08/17/nabiha_syed_remakes_mozilla_foundation/">Nabiha Syed remakes Mozilla Foundation in the era of Trump and AI</a> — The Register</li><li><a href="https://www.mozillafoundation.org/en/blog/why-im-joining-mozilla-as-executive-director/">Nabiha on why she joined MF as executive director</a> — MF Blog</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Mozilla Foundation wants to chart a new path in the AI era. But what is its role now, and how can it help reshape the impacts and opportunities of technology for… everyone?</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/defying-datafication-w-dr-abeba-birhane-plus-paris-ai-action-summit"><strong>Defying Datafication w/ Abeba Birhane</strong></a><strong><br></strong><br></p><p>Alix sat down with Nabiha Syed to chat through her first year as the new leader of Mozilla Foundation. How does she think about strategy in this moment? What role does she want the foundation to play? And crucially, how is she stewarding a community of human-centered technology builders in a time of hyper-scale and unchecked speculation?</p><p>As Nabiha says, “restraint is a design principle too”.</p><p><strong>Plug: We’ll be at MozFest this year broadcasting live and connecting with all kinds of folks. If you’re feeling the FOMO, be on the lookout for episodes we produce about our time there.<br></strong><br></p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://youtu.be/X0nYRIJs-wM">Watch this episode on YouTube</a></li><li><a href="https://www.mozillafoundation.org/en/creative-futures-imaginative-intelligences/">Imaginative Intelligences</a> — a programme of artist assemblies run by Mozilla Foundation</li><li><a href="https://www.mozillafoundation.org/en/nothing-personal/">Nothing Personal</a> — a new counterculture editorial platform from the Mozilla Foundation</li><li>More about <a href="https://www.mozillafestival.org/en/">MozFest</a></li><li>Nabiha on the <a href="https://www.youtube.com/live/9fdArtIvFYw?si=dtkzuIvUiiOPUyGs">Computer Says Maybe live show</a> at the 2025 AI Action Summit</li><li><a href="https://www.theregister.com/2025/08/17/nabiha_syed_remakes_mozilla_foundation/">Nabiha Syed remakes Mozilla Foundation in the era of Trump and AI</a> — The Register</li><li><a href="https://www.mozillafoundation.org/en/blog/why-im-joining-mozilla-as-executive-director/">Nabiha on why she joined MF as executive director</a> — MF Blog</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </content:encoded>
      <pubDate>Fri, 31 Oct 2025 00:06:56 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/852c8289/01e927d5.mp3" length="66260713" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/cZJ6Rq6PSwUHds4pPAaFR3p42m_p6a1DwRZmbu2F3qg/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZmI5/Mjc1Yzg4YTIyMzQy/Y2ViMzg4NTA1Nzdj/ZGQ4ZS5wbmc.jpg"/>
      <itunes:duration>2759</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Mozilla Foundation wants to chart a new path in the AI era. But what is its role now and how can it help reshape the impacts and opportunities of technology for… everyone?</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/defying-datafication-w-dr-abeba-birhane-plus-paris-ai-action-summit"><strong>Defying Datafication w/ Abeba Birhane</strong></a><strong><br></strong><br></p><p>Alix sat down with Nabiha Syed to chat through her first year as the new leader of Mozilla Foundation. How does she think about strategy in this moment? What role does she want the foundation to play? And crucially, how is she stewarding a community of human-centered technology builders in a time of hyper-scale and unchecked speculation?</p><p>As Nabiha says, “restraint is a design principle too”.</p><p><strong>Plug: We’ll be at MozFest this year broadcasting live and connecting with all kinds of folks. If you’re feeling the FOMO, be on the lookout for episodes we produce about our time there.<br></strong><br></p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://youtu.be/X0nYRIJs-wM">Watch this episode on YouTube</a></li><li><a href="https://www.mozillafoundation.org/en/creative-futures-imaginative-intelligences/">Imaginative Intelligences</a> — a programme of artist assemblies run by Mozilla Foundation</li><li><a href="https://www.mozillafoundation.org/en/nothing-personal/">Nothing Personal</a> — a new counterculture editorial platform from the Mozilla Foundation</li><li>More about <a href="https://www.mozillafestival.org/en/">MozFest</a></li><li>Nabiha on the <a href="https://www.youtube.com/live/9fdArtIvFYw?si=dtkzuIvUiiOPUyGs">Computer Says Maybe live show</a> at the 2025 AI Action Summit</li><li><a href="https://www.theregister.com/2025/08/17/nabiha_syed_remakes_mozilla_foundation/">Nabiha Syed remakes Mozilla Foundation in the era of Trump and AI</a> — The Register</li><li><a href="https://www.mozillafoundation.org/en/blog/why-im-joining-mozilla-as-executive-director/">Nabiha on why she joined MF as executive director</a> — MF Blog</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/852c8289/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>You Seem Lonely. Have a Robot w/ Stevie Chancellor</title>
      <itunes:episode>81</itunes:episode>
      <podcast:episode>81</podcast:episode>
      <itunes:title>You Seem Lonely. Have a Robot w/ Stevie Chancellor</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">80e55e41-c4ba-4a1e-9c1e-12bef20c36c8</guid>
      <link>https://share.transistor.fm/s/fcd89fac</link>
      <description>
        <![CDATA[<p>Loneliness and mental illness are rising in the US, while access to care dwindles — so a lot of people are turning to chatbots. Do chatbots work for therapy?</p><p><strong>More like this: </strong><a href="https://themaybe.org/podcast/the-collective-intelligence-project-w-divya-siddarth-and-zarinah-agnew"><strong>The Collective Intelligence Project w/ Divya Siddarth and Zarinah Agnew</strong></a><strong><br></strong><br></p><p>Why are individuals confiding in chatbots over qualified human therapists? <a href="https://steviechancellor.com/">Stevie Chancellor</a> explains why an LLM can’t replace a therapeutic relationship — but often there’s just no other choice. It turns out the chatbots designed specifically for therapy are even worse than general models like ChatGPT; Stevie shares her ideas on how LLMs could potentially be used — safely — for therapeutic support. This is a really helpful primer on how to evaluate chatbots for specific, human-replacing tasks.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://steviechancellor.com/">Stevie’s paper on whether replacing therapists with LLMs is even possible</a> (it’s not)</li><li><a href="https://github.com/jlcmoore/llms-as-therapists/blob/main/README.md">See the research on GitHub</a></li><li><a href="https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/">People are Losing Their Loved Ones to AI-Fuelled Spiritual Fantasies</a> — Rolling Stone (May 2025)</li><li><a href="https://futurism.com/openai-investor-chatgpt-mental-health?utm_source=www.garbageday.email&amp;utm_medium=referral&amp;utm_campaign=the-tech-bros-are-making-themselves-sick">Silicon Valley VC Geoff Lewis becomes convinced that ChatGPT is telling him government secrets from the future</a></li><li><a href="https://www.psychiatry.org/news-room/news-releases/new-apa-poll-one-in-three-americans-feels-lonely-e">Loneliness considered a public health epidemic according to the APA</a></li><li><a href="https://www.independent.co.uk/tech/betterhelp-customer-data-ftc-settlement-b2293434.html">FTC orders online therapy company BetterHelp to pay damages of $7.8m</a></li><li><a href="https://www.reuters.com/sustainability/boards-policy-regulation/delta-plans-use-ai-ticket-pricing-draws-fire-us-lawmakers-2025-07-22/">Delta plans to use AI in ticket pricing draws fire from US lawmakers</a> — Reuters, July 2025</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Loneliness and mental illness are rising in the US, while access to care dwindles — so a lot of people are turning to chatbots. Do chatbots work for therapy?</p><p><strong>More like this: </strong><a href="https://themaybe.org/podcast/the-collective-intelligence-project-w-divya-siddarth-and-zarinah-agnew"><strong>The Collective Intelligence Project w/ Divya Siddarth and Zarinah Agnew</strong></a><strong><br></strong><br></p><p>Why are individuals confiding in chatbots over qualified human therapists? <a href="https://steviechancellor.com/">Stevie Chancellor</a> explains why an LLM can’t replace a therapeutic relationship — but often there’s just no other choice. It turns out the chatbots designed specifically for therapy are even worse than general models like ChatGPT; Stevie shares her ideas on how LLMs could potentially be used — safely — for therapeutic support. This is a really helpful primer on how to evaluate chatbots for specific, human-replacing tasks.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://steviechancellor.com/">Stevie’s paper on whether replacing therapists with LLMs is even possible</a> (it’s not)</li><li><a href="https://github.com/jlcmoore/llms-as-therapists/blob/main/README.md">See the research on GitHub</a></li><li><a href="https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/">People are Losing Their Loved Ones to AI-Fuelled Spiritual Fantasies</a> — Rolling Stone (May 2025)</li><li><a href="https://futurism.com/openai-investor-chatgpt-mental-health?utm_source=www.garbageday.email&amp;utm_medium=referral&amp;utm_campaign=the-tech-bros-are-making-themselves-sick">Silicon Valley VC Geoff Lewis becomes convinced that ChatGPT is telling him government secrets from the future</a></li><li><a href="https://www.psychiatry.org/news-room/news-releases/new-apa-poll-one-in-three-americans-feels-lonely-e">Loneliness considered a public health epidemic according to the APA</a></li><li><a href="https://www.independent.co.uk/tech/betterhelp-customer-data-ftc-settlement-b2293434.html">FTC orders online therapy company BetterHelp to pay damages of $7.8m</a></li><li><a href="https://www.reuters.com/sustainability/boards-policy-regulation/delta-plans-use-ai-ticket-pricing-draws-fire-us-lawmakers-2025-07-22/">Delta plans to use AI in ticket pricing draws fire from US lawmakers</a> — Reuters, July 2025</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </content:encoded>
      <pubDate>Fri, 24 Oct 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/fcd89fac/ec61cf58.mp3" length="76317241" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/RZDFKmY8feTEW8FtMszDmklpiZ0FxW9_v_6xs5xbY2g/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82NWNh/MTgyYmUyMzhjOWQy/NDc0YmVkMjI3ZTRl/YmMyOC5wbmc.jpg"/>
      <itunes:duration>3178</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Loneliness and mental illness are rising in the US, while access to care dwindles — so a lot of people are turning to chatbots. Do chatbots work for therapy?</p><p><strong>More like this: </strong><a href="https://themaybe.org/podcast/the-collective-intelligence-project-w-divya-siddarth-and-zarinah-agnew"><strong>The Collective Intelligence Project w/ Divya Siddarth and Zarinah Agnew</strong></a><strong><br></strong><br></p><p>Why are individuals confiding in chatbots over qualified human therapists? <a href="https://steviechancellor.com/">Stevie Chancellor</a> explains why an LLM can’t replace a therapeutic relationship — but often there’s just no other choice. It turns out the chatbots designed specifically for therapy are even worse than general models like ChatGPT; Stevie shares her ideas on how LLMs could potentially be used — safely — for therapeutic support. This is a really helpful primer on how to evaluate chatbots for specific, human-replacing tasks.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://steviechancellor.com/">Stevie’s paper on whether replacing therapists with LLMs is even possible</a> (it’s not)</li><li><a href="https://github.com/jlcmoore/llms-as-therapists/blob/main/README.md">See the research on GitHub</a></li><li><a href="https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/">People are Losing Their Loved Ones to AI-Fuelled Spiritual Fantasies</a> — Rolling Stone (May 2025)</li><li><a href="https://futurism.com/openai-investor-chatgpt-mental-health?utm_source=www.garbageday.email&amp;utm_medium=referral&amp;utm_campaign=the-tech-bros-are-making-themselves-sick">Silicon Valley VC Geoff Lewis becomes convinced that ChatGPT is telling him government secrets from the future</a></li><li><a href="https://www.psychiatry.org/news-room/news-releases/new-apa-poll-one-in-three-americans-feels-lonely-e">Loneliness considered a public health epidemic according to the APA</a></li><li><a href="https://www.independent.co.uk/tech/betterhelp-customer-data-ftc-settlement-b2293434.html">FTC orders online therapy company BetterHelp to pay damages of $7.8m</a></li><li><a href="https://www.reuters.com/sustainability/boards-policy-regulation/delta-plans-use-ai-ticket-pricing-draws-fire-us-lawmakers-2025-07-22/">Delta plans to use AI in ticket pricing draws fire from US lawmakers</a> — Reuters, July 2025</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/fcd89fac/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Local Laws for Global Technologies w/ Hillary Ronen</title>
      <itunes:episode>80</itunes:episode>
      <podcast:episode>80</podcast:episode>
      <itunes:title>Local Laws for Global Technologies w/ Hillary Ronen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">0f38a6ca-4b94-4c7a-b34f-4e696aff73a9</guid>
      <link>https://share.transistor.fm/s/99826c90</link>
      <description>
        <![CDATA[<p>What’s it like working as a local representative when you live next door to Silicon Valley?</p><p><strong>More like this: </strong><a href="https://saysmaybe.com/podcast/chasing-away-sidewalk-labs-w-bianca-wylie"><strong>Chasing Away Sidewalk Labs w/ Bianca Wylie</strong></a><strong><br></strong><br></p><p>When Hillary Ronen was on the Board of Supervisors for San Francisco, she had to make lots of decisions about technology. She felt unprepared. Now she sees local policymakers on the frontlines of a battle over resources and governance in the AI era, and is working to upskill them to make better decisions for their constituents. No degree in computer science required.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://drive.google.com/file/d/1liGqB7v7tob2TG9KQwPPN3tZet6OaOJF/view">Local Leadership in the Era of Artificial Intelligence and the Tech Oligarchy</a> by Hillary Ronen</li><li><a href="https://sfbos.org/supervisor-ronen-district-9">More on Hillary’s work as a Supervisor for SF</a></li><li><a href="https://48hills.org/2025/01/progressives-messaging-hard-choices-and-justice-the-48hills-interview/">Hillary Ronen on progressives, messaging, hard choices, and justice</a> — interview in 48Hills from January 2025</li><li>More about <a href="https://localprogress.org/">Local Progress</a></li><li><a href="https://localprogress.org/wp-content/uploads/2019/01/Confronting-Preemption.pdf">Confronting Preemption</a> — a short briefing by Local Progress</li><li><a href="https://statecourtreport.org/our-work/analysis-opinion/what-happens-when-state-and-local-laws-conflict">What Happens When State and Local Laws Conflict</a> — article on state-level preemption by State Court Report</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>What’s it like working as a local representative when you live next door to Silicon Valley?</p><p><strong>More like this: </strong><a href="https://saysmaybe.com/podcast/chasing-away-sidewalk-labs-w-bianca-wylie"><strong>Chasing Away Sidewalk Labs w/ Bianca Wylie</strong></a><strong><br></strong><br></p><p>When Hillary Ronen was on the Board of Supervisors for San Francisco, she had to make lots of decisions about technology. She felt unprepared. Now she sees local policymakers on the frontlines of a battle over resources and governance in the AI era, and is working to upskill them to make better decisions for their constituents. No degree in computer science required.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://drive.google.com/file/d/1liGqB7v7tob2TG9KQwPPN3tZet6OaOJF/view">Local Leadership in the Era of Artificial Intelligence and the Tech Oligarchy</a> by Hillary Ronen</li><li><a href="https://sfbos.org/supervisor-ronen-district-9">More on Hillary’s work as a Supervisor for SF</a></li><li><a href="https://48hills.org/2025/01/progressives-messaging-hard-choices-and-justice-the-48hills-interview/">Hillary Ronen on progressives, messaging, hard choices, and justice</a> — interview in 48Hills from January 2025</li><li>More about <a href="https://localprogress.org/">Local Progress</a></li><li><a href="https://localprogress.org/wp-content/uploads/2019/01/Confronting-Preemption.pdf">Confronting Preemption</a> — a short briefing by Local Progress</li><li><a href="https://statecourtreport.org/our-work/analysis-opinion/what-happens-when-state-and-local-laws-conflict">What Happens When State and Local Laws Conflict</a> — article on state-level preemption by State Court Report</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </content:encoded>
      <pubDate>Fri, 17 Oct 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/99826c90/e5750d93.mp3" length="84876840" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/Fp7tbSEtmCO8L5DhcliYsbv-1i--8jk9XPUG8S4ZYUo/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81Mzhk/Y2ViMDA2MDk1MTE2/MDgyODVhNmZiOGNk/OTE2YS5wbmc.jpg"/>
      <itunes:duration>3534</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>What’s it like working as a local representative when you live next door to Silicon Valley?</p><p><strong>More like this: </strong><a href="https://saysmaybe.com/podcast/chasing-away-sidewalk-labs-w-bianca-wylie"><strong>Chasing Away Sidewalk Labs w/ Bianca Wylie</strong></a><strong><br></strong><br></p><p>When Hillary Ronen was on the Board of Supervisors for San Francisco, she had to make lots of decisions about technology. She felt unprepared. Now she sees local policymakers on the frontlines of a battle over resources and governance in the AI era, and is working to upskill them to make better decisions for their constituents. No degree in computer science required.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://drive.google.com/file/d/1liGqB7v7tob2TG9KQwPPN3tZet6OaOJF/view">Local Leadership in the Era of Artificial Intelligence and the Tech Oligarchy</a> by Hillary Ronen</li><li><a href="https://sfbos.org/supervisor-ronen-district-9">More on Hillary’s work as a Supervisor for SF</a></li><li><a href="https://48hills.org/2025/01/progressives-messaging-hard-choices-and-justice-the-48hills-interview/">Hillary Ronen on progressives, messaging, hard choices, and justice</a> — interview in 48Hills from January 2025</li><li>More about <a href="https://localprogress.org/">Local Progress</a></li><li><a href="https://localprogress.org/wp-content/uploads/2019/01/Confronting-Preemption.pdf">Confronting Preemption</a> — a short briefing by Local Progress</li><li><a href="https://statecourtreport.org/our-work/analysis-opinion/what-happens-when-state-and-local-laws-conflict">What Happens When State and Local Laws Conflict</a> — article on state-level preemption by State Court Report</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/99826c90/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Gotcha! Enshittification w/ Cory Doctorow</title>
      <itunes:episode>79</itunes:episode>
      <podcast:episode>79</podcast:episode>
      <itunes:title>Gotcha! Enshittification w/ Cory Doctorow</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">23f7d3c2-0159-467f-8bf1-ffe4bc2ff042</guid>
      <link>https://share.transistor.fm/s/5759f1f8</link>
      <description>
        <![CDATA[<p>Welcome to the final boss of scams in the age of technology: Enshittification</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/nodestar-the-eternal-september-w-mike-masnick"><strong>Nodestar: The Eternal September w/ Mike Masnick</strong></a><strong><br></strong><br></p><p>This is our final episode of Gotcha! — our series on scams, how they work, and how technology both amplifies and obscures them. For this final instalment we have Cory Doctorow on to chat about his new book Enshittification.</p><p>Is platformisation essentially just an industrial-scale scam? We deep-dive into the enshittification playbook to understand how companies lock users into decaying platforms, and get away with it. Cory shares ideas on what we can do differently to turn the tide. Listen to learn what a ‘chickenised reverse centaur’ is…</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Buy <a href="https://www.versobooks.com/en-gb/products/3341-enshittification?srsltid=AfmBOopKQUbNe8WPrNP7cI-JyC82xTf4w17uuP0c_ope0-hQ8vca9uHH">Enshittification</a> now from Verso Books!</li><li><a href="https://www.goodreads.com/book/show/211004856-picks-and-shovels">Picks and Shovels</a> by Cory Doctorow</li><li><a href="https://www.wnycstudios.org/podcasts/otm/projects/enshitification">On The Media</a> series on Enshittification</li><li><a href="https://pluralistic.net/">Pluralistic</a> — Daily Links and essays by Cory Doctorow</li><li><a href="https://pluralistic.net/2025/07/22/all-day-suckers/#i-love-the-poorly-educated">Conservatism Considered as a Movement of Bitter Rubes</a> — Cory on why conservatism creates a friendly environment for scams</li><li><a href="https://pluralistic.net/2024/02/05/cyber-dunning-kruger/#swiss-cheese-security">How I Got Scammed</a> — Cory on his personal experiences of being scammed</li><li><a href="https://craphound.com/">All of Cory’s books</a></li><li><a href="https://pluralistic.net/2025/09/02/act-locally/">All (Antitrust) Politics Are Local</a> — the Pluralistic entry Cory wrote on the day of recording</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Welcome to the final boss of scams in the age of technology: Enshittification</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/nodestar-the-eternal-september-w-mike-masnick"><strong>Nodestar: The Eternal September w/ Mike Masnick</strong></a><strong><br></strong><br></p><p>This is our final episode of Gotcha! — our series on scams, how they work, and how technology both amplifies and obscures them. For this final instalment we have Cory Doctorow on to chat about his new book Enshittification.</p><p>Is platformisation essentially just an industrial-scale scam? We deep-dive into the enshittification playbook to understand how companies lock users into decaying platforms, and get away with it. Cory shares ideas on what we can do differently to turn the tide. Listen to learn what a ‘chickenised reverse centaur’ is…</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Buy <a href="https://www.versobooks.com/en-gb/products/3341-enshittification?srsltid=AfmBOopKQUbNe8WPrNP7cI-JyC82xTf4w17uuP0c_ope0-hQ8vca9uHH">Enshittification</a> now from Verso Books!</li><li><a href="https://www.goodreads.com/book/show/211004856-picks-and-shovels">Picks and Shovels</a> by Cory Doctorow</li><li><a href="https://www.wnycstudios.org/podcasts/otm/projects/enshitification">On The Media</a> series on Enshittification</li><li><a href="https://pluralistic.net/">Pluralistic</a> — Daily Links and essays by Cory Doctorow</li><li><a href="https://pluralistic.net/2025/07/22/all-day-suckers/#i-love-the-poorly-educated">Conservatism Considered as a Movement of Bitter Rubes</a> — Cory on why conservatism creates a friendly environment for scams</li><li><a href="https://pluralistic.net/2024/02/05/cyber-dunning-kruger/#swiss-cheese-security">How I Got Scammed</a> — Cory on his personal experiences of being scammed</li><li><a href="https://craphound.com/">All of Cory’s books</a></li><li><a href="https://pluralistic.net/2025/09/02/act-locally/">All (Antitrust) Politics Are Local</a> — the Pluralistic entry Cory wrote on the day of recording</li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 10 Oct 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/5759f1f8/98a9705f.mp3" length="80470306" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/yNoHZCdmqr9LEqBQnzQ5QMq_RfVE9pj-spHQrihj6Tg/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYzQx/MzJlYjhiMDk4OTY4/OGI1YmViYWJlMjRj/NTQ0Ny5wbmc.jpg"/>
      <itunes:duration>3350</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Welcome to the final boss of scams in the age of technology: Enshittification</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/nodestar-the-eternal-september-w-mike-masnick"><strong>Nodestar: The Eternal September w/ Mike Masnick</strong></a><strong><br></strong><br></p><p>This is our final episode of Gotcha! — our series on scams, how they work, and how technology both amplifies and obscures them. For this final instalment we have Cory Doctorow on to chat about his new book Enshittification.</p><p>Is platformisation essentially just an industrial-scale scam? We deep-dive into the enshittification playbook to understand how companies lock users into decaying platforms, and get away with it. Cory shares ideas on what we can do differently to turn the tide. Listen to learn what a ‘chickenised reverse centaur’ is…</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Buy <a href="https://www.versobooks.com/en-gb/products/3341-enshittification?srsltid=AfmBOopKQUbNe8WPrNP7cI-JyC82xTf4w17uuP0c_ope0-hQ8vca9uHH">Enshittification</a> now from Verso Books!</li><li><a href="https://www.goodreads.com/book/show/211004856-picks-and-shovels">Picks and Shovels</a> by Cory Doctorow</li><li><a href="https://www.wnycstudios.org/podcasts/otm/projects/enshitification">On The Media</a> series on Enshittification</li><li><a href="https://pluralistic.net/">Pluralistic</a> — Daily Links and essays by Cory Doctorow</li><li><a href="https://pluralistic.net/2025/07/22/all-day-suckers/#i-love-the-poorly-educated">Conservatism Considered as a Movement of Bitter Rubes</a> — Cory on why conservatism creates a friendly environment for scams</li><li><a href="https://pluralistic.net/2024/02/05/cyber-dunning-kruger/#swiss-cheese-security">How I Got Scammed</a> — Cory on his personal experiences of being scammed</li><li><a href="https://craphound.com/">All of Cory’s books</a></li><li><a href="https://pluralistic.net/2025/09/02/act-locally/">All (Antitrust) Politics Are Local</a> — the Pluralistic entry Cory wrote on the day of recording</li></ul>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/5759f1f8/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Gotcha! ScamGPT w/ Lana Swartz &amp; Alice Marwick</title>
      <itunes:episode>78</itunes:episode>
      <podcast:episode>78</podcast:episode>
      <itunes:title>Gotcha! ScamGPT w/ Lana Swartz &amp; Alice Marwick</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">25b96292-6c60-4cb7-8c9e-f8969a4e4cb5</guid>
      <link>https://share.transistor.fm/s/fd447fc7</link>
      <description>
        <![CDATA[<p>Thought we were at peak scam? Well, ScamGPT just entered the chat.</p><p><strong>More like this: </strong><a href="https://themaybe.org/podcast/gotcha-the-crypto-grift-w-mark-hays"><strong>Gotcha! The Crypto Grift w/ Mark Hays</strong></a><strong><br></strong><br></p><p>This is part three of Gotcha! — our series on scams, how they work, and how technology is supercharging them. This week Lana Swartz and Alice Marwick join Alix to discuss their primer on how generative AI is automating fraud.</p><p>We dig into the very human, very dark world of the scam industry, where the scammers themselves are often exploited in highly sophisticated human trafficking operations — and are now using generative AI to scale up and speed up.</p><p>We talk about how you probably aren’t going to get a deepfake call from a family member demanding a ransom, but the threats are still evolving in ways that are scary and, until now, largely unregulated. And, as ever, even though technology makes these problems worse, we explore the limitations of technology and law in stemming the tide.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://datasociety.net/wp-content/uploads/2025/05/ScamGPT-GenAI-and-the-Automation-of-Fraud_final.pdf">Read the primer here</a>!</li><li>More about <a href="https://llaannaa.com/index.html">Lana Swartz</a></li><li>More about <a href="https://datasociety.net/people/marwick-alice/">Alice Marwick</a></li><li><a href="https://yalebooks.co.uk/book/9780300233223/new-money/">New Money</a> by Lana Swartz</li><li><a href="https://www.versobooks.com/en-gb/products/3234-scam">Scam: Inside Southeast Asia's Cybercrime Compounds</a> by Mark Bo, Ivan Franceschini, and Ling Li</li><li><a href="https://www.theguardian.com/global-development/2025/sep/08/myanmar-military-junta-scam-centres-trafficking-crime-syndicates-kk-park?CMP=Share_iOSApp_Other">Revealed: the huge growth of Myanmar scam centres that may hold 100,000 trafficked people</a></li><li><a href="https://www.aljazeera.com/video/true-crime-reports/2025/2/9/slave-scammers-true-crime-reports">Al Jazeera True Crime Report on scam farms in Southeast Asia</a></li><li><a href="https://www.occrp.org/en/project/scam-empire">Scam Empire</a> project by the Organised Crime and Corruption Reporting Project</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Thought we were at peak scam? Well, ScamGPT just entered the chat.</p><p><strong>More like this: </strong><a href="https://themaybe.org/podcast/gotcha-the-crypto-grift-w-mark-hays"><strong>Gotcha! The Crypto Grift w/ Mark Hays</strong></a><strong><br></strong><br></p><p>This is part three of Gotcha! — our series on scams, how they work, and how technology is supercharging them. This week Lana Swartz and Alice Marwick join Alix to discuss their primer on how generative AI is automating fraud.</p><p>We dig into the very human, very dark world of the scam industry, where the scammers themselves are often exploited in highly sophisticated human trafficking operations — and are now using generative AI to scale up and speed up.</p><p>We talk about how you probably aren’t going to get a deepfake call from a family member demanding a ransom, but the threats are still evolving in ways that are scary and, until now, largely unregulated. And, as ever, even though technology makes these problems worse, we explore the limitations of technology and law in stemming the tide.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://datasociety.net/wp-content/uploads/2025/05/ScamGPT-GenAI-and-the-Automation-of-Fraud_final.pdf">Read the primer here</a>!</li><li>More about <a href="https://llaannaa.com/index.html">Lana Swartz</a></li><li>More about <a href="https://datasociety.net/people/marwick-alice/">Alice Marwick</a></li><li><a href="https://yalebooks.co.uk/book/9780300233223/new-money/">New Money</a> by Lana Swartz</li><li><a href="https://www.versobooks.com/en-gb/products/3234-scam">Scam: Inside Southeast Asia's Cybercrime Compounds</a> by Mark Bo, Ivan Franceschini, and Ling Li</li><li><a href="https://www.theguardian.com/global-development/2025/sep/08/myanmar-military-junta-scam-centres-trafficking-crime-syndicates-kk-park?CMP=Share_iOSApp_Other">Revealed: the huge growth of Myanmar scam centres that may hold 100,000 trafficked people</a></li><li><a href="https://www.aljazeera.com/video/true-crime-reports/2025/2/9/slave-scammers-true-crime-reports">Al Jazeera True Crime Report on scam farms in Southeast Asia</a></li><li><a href="https://www.occrp.org/en/project/scam-empire">Scam Empire</a> project by the Organised Crime and Corruption Reporting Project</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </content:encoded>
      <pubDate>Fri, 03 Oct 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/fd447fc7/d75164c8.mp3" length="78544408" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/pVntW2NXo8_ua6tuQnc46QchsdhXBUQD3-uKFTx4Cck/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84Yjhh/N2ZhNzdkZTBlODBm/NWI5YTUxMDYwYzlm/ZGExMy5wbmc.jpg"/>
      <itunes:duration>3271</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Thought we were at peak scam? Well, ScamGPT just entered the chat.</p><p><strong>More like this: </strong><a href="https://themaybe.org/podcast/gotcha-the-crypto-grift-w-mark-hays"><strong>Gotcha! The Crypto Grift w/ Mark Hays</strong></a><strong><br></strong><br></p><p>This is part three of Gotcha! — our series on scams, how they work, and how technology is supercharging them. This week Lana Swartz and Alice Marwick join Alix to discuss their primer on how generative AI is automating fraud.</p><p>We dig into the very human, very dark world of the scam industry, where the scammers themselves are often exploited in highly sophisticated human trafficking operations — and are now using generative AI to scale up and speed up.</p><p>We talk about why you probably aren’t going to get a deepfake ransom call that sounds like a family member, but also how the threats are still evolving in ways that are scary and, until now, largely unregulated. And, as ever, even though technology makes these problems worse, we explore the limits of technology and law in stemming the tide.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://datasociety.net/wp-content/uploads/2025/05/ScamGPT-GenAI-and-the-Automation-of-Fraud_final.pdf">Read the primer here</a>!</li><li>More about <a href="https://llaannaa.com/index.html">Lana Swartz</a></li><li>More about <a href="https://datasociety.net/people/marwick-alice/">Alice Marwick</a></li><li><a href="https://yalebooks.co.uk/book/9780300233223/new-money/">New Money</a> by Lana Swartz</li><li><a href="https://www.versobooks.com/en-gb/products/3234-scam">Scam: Inside Southeast Asia's Cybercrime Compounds</a> by Mark Bo, Ivan Franceschini, and Ling Li</li><li><a href="https://www.theguardian.com/global-development/2025/sep/08/myanmar-military-junta-scam-centres-trafficking-crime-syndicates-kk-park?CMP=Share_iOSApp_Other">Revealed: the huge growth of Myanmar scam centres that may hold 100,000 trafficked people</a></li><li><a href="https://www.aljazeera.com/video/true-crime-reports/2025/2/9/slave-scammers-true-crime-reports">Al Jazeera True Crime Reports on scam farms in Southeast Asia</a></li><li><a href="https://www.occrp.org/en/project/scam-empire">Scam Empire</a>, a project by the Organized Crime and Corruption Reporting Project</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/fd447fc7/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>NYC Live: Let Them Eat Compute</title>
      <itunes:episode>77</itunes:episode>
      <podcast:episode>77</podcast:episode>
      <itunes:title>NYC Live: Let Them Eat Compute</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">618bd926-e1b0-40ee-ab70-85d73821e7dd</guid>
      <link>https://share.transistor.fm/s/3b7643a9</link>
      <description>
        <![CDATA[<p>This just in on data centers: Energy grids are strained, water is scarce, utility costs are through the roof — ah well, let them eat compute, I guess!</p><p><strong>More like this:</strong> <a href="https://www.themaybe.org/podcast/net-0-ai-thirst-in-a-water-scarce-world-w-julie-mccarthy">AI Thirst in a Water-Scarce World w/ Julie McCarthy</a></p><p>It was just Climate Week in NYC, and we did a live show on data centers with four amazing guests from around the US…</p><p><em>Thank you to the Luminate Foundation for sponsoring this live show, and to all of our NY-based friends and network from around the world who made it to Brooklyn for a magical evening. You can also </em><a href="https://www.youtube.com/live/n1KS9yqbKfA?si=GC6lftXFgk53HZtj&amp;t=1200"><em>watch the live recording on YouTube</em></a><em>.</em></p><ul><li><strong>KeShaun Pearson</strong> (<a href="https://www.memphiscap.org/">Memphis Community Against Pollution</a>) breaks down how Elon Musk’s xAI supercomputer is polluting the air of historically Black neighborhoods in Memphis, and how organizers are fighting back against yet another chapter of corporate extraction in their communities.</li><li><strong>KD Minor</strong> (<a href="https://www.all4energy.org/">Alliance for Affordable Energy</a>) demystifies the energy impacts of data centers in Louisiana and shares organizing strategies to mobilize community opposition to Big Tech and Big Oil infrastructure.</li><li><strong>Marisol</strong> (<a href="https://www.nodesertdatacenter.com/">No Desert Data Center</a>) talks about their grassroots coalition’s recent win in Tucson to stop Amazon’s Project Blue data center proposal, which threatened the city’s scarce water supply, and how they’re organizing for future protections.</li><li><strong>Amba Kak</strong> (<a href="https://ainowinstitute.org/">AI Now Institute</a>) talks us through the bigger picture: what’s behind Big Tech’s AI data center expansion, who stands to benefit from this boom, and what we sacrifice in return.</li></ul><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://azluminaria.org/2025/07/21/amazon-web-services-is-company-behind-tucsons-project-blue-according-to-2023-county-memo/">Amazon Web Services is company behind Tucson’s Project Blue, according to 2023 county memo</a> — from Luminaria</li><li><a href="https://azluminaria.org/2025/09/09/following-pima-county-tucson-moves-to-create-new-policy-for-ndas/">Tucson to create new NDA policies following the council’s regret at not knowing more about Project Blue</a> — from Luminaria</li><li><a href="https://www.kgun9.com/news/community-inspired-journalism/marana/maranas-data-center-ordinance">How Marana, also in the Tucson area, employed an ordinance to regulate water usage</a> after learning about data center interest in the area.</li><li><a href="https://www.mlgw.com/images/content/files/pdf/new/5-5-25%20xAI%20Update.pdf">xAI has requested an additional 150 MW of power for Colossus in Memphis</a>, bringing it to a total of 300 MW</li><li><a href="https://time.com/7308925/elon-musk-memphis-ai-data-center/">Time reports on increased nitrogen dioxide pollution around Memphis due to xAI turbines</a></li><li><a href="https://www.democracynow.org/2025/4/25/elon_musk_xai_memphis_tennessee">KeShaun and Justin Pearson on Democracy Now</a> discussing xAI’s human rights violations</li><li><a href="https://www.all4energy.org/watchdog/metas-mega-data-center-could-strain-la-grid/">Meta’s Mega Data Center Could Strain Louisiana’s Grid — and Entergy Isn’t Prepared</a> — report by the Alliance for Affordable Energy</li><li><a href="https://www.404media.co/a-black-hole-of-energy-use-metas-massive-ai-data-center-is-stressing-out-a-louisiana-community/?ref=daily-stories-newsletter">'A Black Hole of Energy Use': Meta's Massive AI Data Center Is Stressing Out a Louisiana Community</a> — 404 Media</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This just in on data centers: Energy grids are strained, water is scarce, utility costs are through the roof — ah well, let them eat compute, I guess!</p><p><strong>More like this:</strong> <a href="https://www.themaybe.org/podcast/net-0-ai-thirst-in-a-water-scarce-world-w-julie-mccarthy">AI Thirst in a Water-Scarce World w/ Julie McCarthy</a></p><p>It was just Climate Week in NYC, and we did a live show on data centers with four amazing guests from around the US…</p><p><em>Thank you to the Luminate Foundation for sponsoring this live show, and to all of our NY-based friends and network from around the world who made it to Brooklyn for a magical evening. You can also </em><a href="https://www.youtube.com/live/n1KS9yqbKfA?si=GC6lftXFgk53HZtj&amp;t=1200"><em>watch the live recording on YouTube</em></a><em>.</em></p><ul><li><strong>KeShaun Pearson</strong> (<a href="https://www.memphiscap.org/">Memphis Community Against Pollution</a>) breaks down how Elon Musk’s xAI supercomputer is polluting the air of historically Black neighborhoods in Memphis, and how organizers are fighting back against yet another chapter of corporate extraction in their communities.</li><li><strong>KD Minor</strong> (<a href="https://www.all4energy.org/">Alliance for Affordable Energy</a>) demystifies the energy impacts of data centers in Louisiana and shares organizing strategies to mobilize community opposition to Big Tech and Big Oil infrastructure.</li><li><strong>Marisol</strong> (<a href="https://www.nodesertdatacenter.com/">No Desert Data Center</a>) talks about their grassroots coalition’s recent win in Tucson to stop Amazon’s Project Blue data center proposal, which threatened the city’s scarce water supply, and how they’re organizing for future protections.</li><li><strong>Amba Kak</strong> (<a href="https://ainowinstitute.org/">AI Now Institute</a>) talks us through the bigger picture: what’s behind Big Tech’s AI data center expansion, who stands to benefit from this boom, and what we sacrifice in return.</li></ul><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://azluminaria.org/2025/07/21/amazon-web-services-is-company-behind-tucsons-project-blue-according-to-2023-county-memo/">Amazon Web Services is company behind Tucson’s Project Blue, according to 2023 county memo</a> — from Luminaria</li><li><a href="https://azluminaria.org/2025/09/09/following-pima-county-tucson-moves-to-create-new-policy-for-ndas/">Tucson to create new NDA policies following the council’s regret at not knowing more about Project Blue</a> — from Luminaria</li><li><a href="https://www.kgun9.com/news/community-inspired-journalism/marana/maranas-data-center-ordinance">How Marana, also in the Tucson area, employed an ordinance to regulate water usage</a> after learning about data center interest in the area.</li><li><a href="https://www.mlgw.com/images/content/files/pdf/new/5-5-25%20xAI%20Update.pdf">xAI has requested an additional 150 MW of power for Colossus in Memphis</a>, bringing it to a total of 300 MW</li><li><a href="https://time.com/7308925/elon-musk-memphis-ai-data-center/">Time reports on increased nitrogen dioxide pollution around Memphis due to xAI turbines</a></li><li><a href="https://www.democracynow.org/2025/4/25/elon_musk_xai_memphis_tennessee">KeShaun and Justin Pearson on Democracy Now</a> discussing xAI’s human rights violations</li><li><a href="https://www.all4energy.org/watchdog/metas-mega-data-center-could-strain-la-grid/">Meta’s Mega Data Center Could Strain Louisiana’s Grid — and Entergy Isn’t Prepared</a> — report by the Alliance for Affordable Energy</li><li><a href="https://www.404media.co/a-black-hole-of-energy-use-metas-massive-ai-data-center-is-stressing-out-a-louisiana-community/?ref=daily-stories-newsletter">'A Black Hole of Energy Use': Meta's Massive AI Data Center Is Stressing Out a Louisiana Community</a> — 404 Media</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </content:encoded>
      <pubDate>Thu, 02 Oct 2025 10:21:38 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/3b7643a9/bfccdcda.mp3" length="75762066" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/yFkzBteL76aXsxr2B8Vm__AFXd2iV0U_cXeBCd5kIo0/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS83NWIz/MmYyYjhiN2UzMGFj/NzQ5NjkzYmMzM2Fi/YzljNC5wbmc.jpg"/>
      <itunes:duration>3154</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This just in on data centers: Energy grids are strained, water is scarce, utility costs are through the roof — ah well, let them eat compute, I guess!</p><p><strong>More like this:</strong> <a href="https://www.themaybe.org/podcast/net-0-ai-thirst-in-a-water-scarce-world-w-julie-mccarthy">AI Thirst in a Water-Scarce World w/ Julie McCarthy</a></p><p>It was just Climate Week in NYC, and we did a live show on data centers with four amazing guests from around the US…</p><p><em>Thank you to the Luminate Foundation for sponsoring this live show, and to all of our NY-based friends and network from around the world who made it to Brooklyn for a magical evening. You can also </em><a href="https://www.youtube.com/live/n1KS9yqbKfA?si=GC6lftXFgk53HZtj&amp;t=1200"><em>watch the live recording on YouTube</em></a><em>.</em></p><ul><li><strong>KeShaun Pearson</strong> (<a href="https://www.memphiscap.org/">Memphis Community Against Pollution</a>) breaks down how Elon Musk’s xAI supercomputer is polluting the air of historically Black neighborhoods in Memphis, and how organizers are fighting back against yet another chapter of corporate extraction in their communities.</li><li><strong>KD Minor</strong> (<a href="https://www.all4energy.org/">Alliance for Affordable Energy</a>) demystifies the energy impacts of data centers in Louisiana and shares organizing strategies to mobilize community opposition to Big Tech and Big Oil infrastructure.</li><li><strong>Marisol</strong> (<a href="https://www.nodesertdatacenter.com/">No Desert Data Center</a>) talks about their grassroots coalition’s recent win in Tucson to stop Amazon’s Project Blue data center proposal, which threatened the city’s scarce water supply, and how they’re organizing for future protections.</li><li><strong>Amba Kak</strong> (<a href="https://ainowinstitute.org/">AI Now Institute</a>) talks us through the bigger picture: what’s behind Big Tech’s AI data center expansion, who stands to benefit from this boom, and what we sacrifice in return.</li></ul><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://azluminaria.org/2025/07/21/amazon-web-services-is-company-behind-tucsons-project-blue-according-to-2023-county-memo/">Amazon Web Services is company behind Tucson’s Project Blue, according to 2023 county memo</a> — from Luminaria</li><li><a href="https://azluminaria.org/2025/09/09/following-pima-county-tucson-moves-to-create-new-policy-for-ndas/">Tucson to create new NDA policies following the council’s regret at not knowing more about Project Blue</a> — from Luminaria</li><li><a href="https://www.kgun9.com/news/community-inspired-journalism/marana/maranas-data-center-ordinance">How Marana, also in the Tucson area, employed an ordinance to regulate water usage</a> after learning about data center interest in the area.</li><li><a href="https://www.mlgw.com/images/content/files/pdf/new/5-5-25%20xAI%20Update.pdf">xAI has requested an additional 150 MW of power for Colossus in Memphis</a>, bringing it to a total of 300 MW</li><li><a href="https://time.com/7308925/elon-musk-memphis-ai-data-center/">Time reports on increased nitrogen dioxide pollution around Memphis due to xAI turbines</a></li><li><a href="https://www.democracynow.org/2025/4/25/elon_musk_xai_memphis_tennessee">KeShaun and Justin Pearson on Democracy Now</a> discussing xAI’s human rights violations</li><li><a href="https://www.all4energy.org/watchdog/metas-mega-data-center-could-strain-la-grid/">Meta’s Mega Data Center Could Strain Louisiana’s Grid — and Entergy Isn’t Prepared</a> — report by the Alliance for Affordable Energy</li><li><a href="https://www.404media.co/a-black-hole-of-energy-use-metas-massive-ai-data-center-is-stressing-out-a-louisiana-community/?ref=daily-stories-newsletter">'A Black Hole of Energy Use': Meta's Massive AI Data Center Is Stressing Out a Louisiana Community</a> — 404 Media</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/3b7643a9/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Are AI Companies Cooking the Books? w/ Sarah Myers West</title>
      <itunes:episode>76</itunes:episode>
      <podcast:episode>76</podcast:episode>
      <itunes:title>Are AI Companies Cooking the Books? w/ Sarah Myers West</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">43f731be-2c9a-40ce-b65a-30123466a038</guid>
      <link>https://share.transistor.fm/s/080499c0</link>
      <description>
        <![CDATA[<p>OpenAI just secured a bizarre financial deal with Nvidia — but the math is not mathing. Is the AI sector an actual market, or a series of high-profile announcements of circular relationships between a tiny number of companies?</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/making-myths-to-make-money-w-ai-now"><strong>Making Myths to Make Money w/ AI Now</strong></a><strong><br></strong><br></p><p>Alix sat down with Sarah Myers West to go through the particulars of this deal, and other similar deals that are propping up AI’s industry of vapour. This is not your traditional bubble that’s about to burst — there is no bubble, it’s just that The New Normal is to pour debt into an industry that cannot promise any returns…</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.cnbc.com/2025/09/26/nvidias-investment-portfolio.html">More on the Nvidia OpenAI deal</a> — CNBC</li><li><a href="https://www.insidermonkey.com/blog/analyst-says-market-concerns-after-nvidia-nvda-openai-deal-are-wrong-1618504/">Analysts refer to the deal as ‘vendor financing’</a> — Insider Monkey</li><li><a href="https://www.wsj.com/tech/ai/ai-bubble-building-spree-55ee6128?st=efV1EF&amp;reflink=article_email_share">Spending on AI is at Epic Levels. Will it Ever Pay Off?</a> — WSJ</li><li><a href="https://www.reuters.com/business/media-telecom/openai-oracle-softbank-plan-five-new-ai-data-centers-500-billion-stargate-2025-09-23/">OpenAI, SoftBank, and Oracle spending $500bn on data centre expansion in Abilene</a> — Reuters</li><li>How Larry Ellison used the <a href="https://www.ft.com/content/dfb2b9b7-b229-4f65-9611-2d879bd326b9">AI boom</a> and the <a href="https://www.lighthousereports.com/investigation/blair-and-the-billionaire/">Tony Blair Institute</a> to bolster his wealth</li><li><a href="https://www.theregister.com/2025/09/25/oracle_18_billion_debt/">Oracle is funding OpenAI data centers with heaps of debt</a> and <a href="https://www.theregister.com/2025/09/25/oracle_18_billion_debt/">will have to borrow at least $25bn a year</a> — The Register</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>OpenAI just secured a bizarre financial deal with Nvidia — but the math is not mathing. Is the AI sector an actual market, or a series of high-profile announcements of circular relationships between a tiny number of companies?</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/making-myths-to-make-money-w-ai-now"><strong>Making Myths to Make Money w/ AI Now</strong></a><strong><br></strong><br></p><p>Alix sat down with Sarah Myers West to go through the particulars of this deal, and other similar deals that are propping up AI’s industry of vapour. This is not your traditional bubble that’s about to burst — there is no bubble, it’s just that The New Normal is to pour debt into an industry that cannot promise any returns…</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.cnbc.com/2025/09/26/nvidias-investment-portfolio.html">More on the Nvidia OpenAI deal</a> — CNBC</li><li><a href="https://www.insidermonkey.com/blog/analyst-says-market-concerns-after-nvidia-nvda-openai-deal-are-wrong-1618504/">Analysts refer to the deal as ‘vendor financing’</a> — Insider Monkey</li><li><a href="https://www.wsj.com/tech/ai/ai-bubble-building-spree-55ee6128?st=efV1EF&amp;reflink=article_email_share">Spending on AI is at Epic Levels. Will it Ever Pay Off?</a> — WSJ</li><li><a href="https://www.reuters.com/business/media-telecom/openai-oracle-softbank-plan-five-new-ai-data-centers-500-billion-stargate-2025-09-23/">OpenAI, SoftBank, and Oracle spending $500bn on data centre expansion in Abilene</a> — Reuters</li><li>How Larry Ellison used the <a href="https://www.ft.com/content/dfb2b9b7-b229-4f65-9611-2d879bd326b9">AI boom</a> and the <a href="https://www.lighthousereports.com/investigation/blair-and-the-billionaire/">Tony Blair Institute</a> to bolster his wealth</li><li><a href="https://www.theregister.com/2025/09/25/oracle_18_billion_debt/">Oracle is funding OpenAI data centers with heaps of debt</a> and <a href="https://www.theregister.com/2025/09/25/oracle_18_billion_debt/">will have to borrow at least $25bn a year</a> — The Register</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </content:encoded>
      <pubDate>Tue, 30 Sep 2025 19:30:04 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/080499c0/3c2ca8a0.mp3" length="57506325" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/8O9sFzfFrAI03noio_GEmALqfsX82yJHPyYttBdTx-Y/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kYzk4/MzY1NjlmZGYyYTdi/MDRmOTgzNDJkOWEw/ZGYxZi5wbmc.jpg"/>
      <itunes:duration>2394</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>OpenAI just secured a bizarre financial deal with Nvidia — but the math is not mathing. Is the AI sector an actual market, or a series of high-profile announcements of circular relationships between a tiny number of companies?</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/making-myths-to-make-money-w-ai-now"><strong>Making Myths to Make Money w/ AI Now</strong></a><strong><br></strong><br></p><p>Alix sat down with Sarah Myers West to go through the particulars of this deal, and other similar deals that are propping up AI’s industry of vapour. This is not your traditional bubble that’s about to burst — there is no bubble, it’s just that The New Normal is to pour debt into an industry that cannot promise any returns…</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.cnbc.com/2025/09/26/nvidias-investment-portfolio.html">More on the Nvidia OpenAI deal</a> — CNBC</li><li><a href="https://www.insidermonkey.com/blog/analyst-says-market-concerns-after-nvidia-nvda-openai-deal-are-wrong-1618504/">Analysts refer to the deal as ‘vendor financing’</a> — Insider Monkey</li><li><a href="https://www.wsj.com/tech/ai/ai-bubble-building-spree-55ee6128?st=efV1EF&amp;reflink=article_email_share">Spending on AI is at Epic Levels. Will it Ever Pay Off?</a> — WSJ</li><li><a href="https://www.reuters.com/business/media-telecom/openai-oracle-softbank-plan-five-new-ai-data-centers-500-billion-stargate-2025-09-23/">OpenAI, SoftBank, and Oracle spending $500bn on data centre expansion in Abilene</a> — Reuters</li><li>How Larry Ellison used the <a href="https://www.ft.com/content/dfb2b9b7-b229-4f65-9611-2d879bd326b9">AI boom</a> and the <a href="https://www.lighthousereports.com/investigation/blair-and-the-billionaire/">Tony Blair Institute</a> to bolster his wealth</li><li><a href="https://www.theregister.com/2025/09/25/oracle_18_billion_debt/">Oracle is funding OpenAI data centers with heaps of debt</a> and <a href="https://www.theregister.com/2025/09/25/oracle_18_billion_debt/">will have to borrow at least $25bn a year</a> — The Register</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/080499c0/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Gotcha! How MLMs Ate the Economy w/ Bridget Read</title>
      <itunes:episode>75</itunes:episode>
      <podcast:episode>75</podcast:episode>
      <itunes:title>Gotcha! How MLMs Ate the Economy w/ Bridget Read</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">3b4e2406-8935-485e-9f49-a8e1bd2b7a25</guid>
      <link>https://share.transistor.fm/s/bedaf166</link>
      <description>
        <![CDATA[<p>Multi-level marketing schemes have built an empire by enticing people with promises of self-realisation and economic freedom. The cost is simple: exploit and be exploited.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/worker-power-big-tech-bossmen-w-david-seligman"><strong>Worker Power &amp; Big Tech Bossmen w/ David Seligman</strong></a><strong><br></strong><br></p><p>This is part two of <em>Gotcha!</em> — our series on scams, how they work, and how technology is supercharging them. This week Bridget Read came to Alix with a very exciting business opportunity. Bridget authored <em>Little Bosses Everywhere</em> — a book on the history of MLMs.</p><p>We explore how door-to-door sales in the mid-20th-century US took on the business model of a Ponzi scheme, and transformed the sweaty salesman into an entrepreneurial recruiter with a downline.</p><p>MLM originators were part of a coordinated plan to challenge the New Deal in favor of radical free enterprise, where the only thing holding you back is yourself, and the economy consists solely of consumers selling to each other in a market of speculation. The secret is, no one is selling a product — they’re selling a way of life.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Buy Bridget’s book: <a href="https://www.penguinrandomhouse.com/books/715421/little-bosses-everywhere-by-bridget-read/">Little Bosses Everywhere: How the Pyramid Scheme Shaped America</a></li><li><a href="https://press.princeton.edu/books/paperback/9781935408345/family-values">Family Values</a> by Melinda Cooper</li><li><a href="https://www.bbc.co.uk/programmes/p07nkd84">The Missing Crypto Queen</a>: a BBC Sounds podcast about a large-scale crypto scam where there wasn’t even any crypto</li><li><a href="https://www.forbes.com/sites/risasarachan/2021/12/13/the-rise-and-fall-of-lularoe-investigates-scandal-behind--marketing-company/">LuLaRoe</a> — the pyramid scheme that tricked American mums into selling cheap clothes to their friends and family with the promise of financial independence.</li><li><a href="https://www.linkedin.com/pulse/my-experience-being-pyramid-scheme-amway-darren-mudd-ggf8c/">My Experience of Being in a Pyramid Scheme (Amway)</a> — a personal account by Darren Mudd on LinkedIn</li><li><a href="https://www.youtube.com/live/n1KS9yqbKfA?si=GC6lftXFgk53HZtj&amp;t=1200">Watch our recent live show at NYC Climate Week</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Multi-level marketing schemes have built an empire by enticing people with promises of self-realisation and economic freedom. The cost is simple: exploit and be exploited.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/worker-power-big-tech-bossmen-w-david-seligman"><strong>Worker Power &amp; Big Tech Bossmen w/ David Seligman</strong></a><strong><br></strong><br></p><p>This is part two of <em>Gotcha!</em> — our series on scams, how they work, and how technology is supercharging them. This week Bridget Read came to Alix with a very exciting business opportunity. Bridget authored <em>Little Bosses Everywhere</em> — a book on the history of MLMs.</p><p>We explore how door-to-door sales in the mid-20th-century US took on the business model of a Ponzi scheme, and transformed the sweaty salesman into an entrepreneurial recruiter with a downline.</p><p>MLM originators were part of a coordinated plan to challenge the New Deal in favor of radical free enterprise, where the only thing holding you back is yourself, and the economy consists solely of consumers selling to each other in a market of speculation. The secret is, no one is selling a product — they’re selling a way of life.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Buy Bridget’s book: <a href="https://www.penguinrandomhouse.com/books/715421/little-bosses-everywhere-by-bridget-read/">Little Bosses Everywhere: How the Pyramid Scheme Shaped America</a></li><li><a href="https://press.princeton.edu/books/paperback/9781935408345/family-values">Family Values</a> by Melinda Cooper</li><li><a href="https://www.bbc.co.uk/programmes/p07nkd84">The Missing Crypto Queen</a>: a BBC Sounds podcast about a large-scale crypto scam where there wasn’t even any crypto</li><li><a href="https://www.forbes.com/sites/risasarachan/2021/12/13/the-rise-and-fall-of-lularoe-investigates-scandal-behind--marketing-company/">LuLaRoe</a> — the pyramid scheme that tricked American mums into selling cheap clothes to their friends and family with the promise of financial independence.</li><li><a href="https://www.linkedin.com/pulse/my-experience-being-pyramid-scheme-amway-darren-mudd-ggf8c/">My Experience of Being in a Pyramid Scheme (Amway)</a> — a personal account by Darren Mudd on LinkedIn</li><li><a href="https://www.youtube.com/live/n1KS9yqbKfA?si=GC6lftXFgk53HZtj&amp;t=1200">Watch our recent live show at NYC Climate Week</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </content:encoded>
      <pubDate>Fri, 26 Sep 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/bedaf166/58289000.mp3" length="72300863" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/HRj-8oRVI-CTzaFOQkgonoOxV2j1AJlscCoHe028NRI/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lNzYz/ZmFmNjlhYTlhZDlh/YzQ0YzA1NmIzZmU1/NjQ1Ny5wbmc.jpg"/>
      <itunes:duration>3011</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Multi-level marketing schemes have built an empire by enticing people with promises of self-realisation and economic freedom. The cost is simple: exploit and be exploited.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/worker-power-big-tech-bossmen-w-david-seligman"><strong>Worker Power &amp; Big Tech Bossmen w/ David Seligman</strong></a><strong><br></strong><br></p><p>This is part two of <em>Gotcha!</em> Our series on scams, how they work, and how technology is super-charging them. This week Bridget Read came to Alix with a very exciting business opportunity. Bridget authored <em>Little Bosses Everywhere</em> — a book on the history of MLM.</p><p>We explore how door-to-door sales in the mid-20th-century US took on the business model of a Ponzi scheme, and transformed the sweaty salesman into an entrepreneurial recruiter with a downline.</p><p>MLM originators were part of a coordinated plan to replace the New Deal with radical free enterprise, where the only thing holding you back is yourself, and the economy consists solely of consumers selling to each other in a market of speculation. The secret is that no one is selling a product — they’re selling a way of life.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Buy Bridget’s book: <a href="https://www.penguinrandomhouse.com/books/715421/little-bosses-everywhere-by-bridget-read/">Little Bosses Everywhere: How the Pyramid Scheme Shaped America</a></li><li><a href="https://press.princeton.edu/books/paperback/9781935408345/family-values">Family Values</a> by Melinda Cooper</li><li><a href="https://www.bbc.co.uk/programmes/p07nkd84">The Missing Crypto Queen</a>: a podcast by BBC Sounds, about a large-scale crypto scam, where there wasn’t even any crypto</li><li><a href="https://www.forbes.com/sites/risasarachan/2021/12/13/the-rise-and-fall-of-lularoe-investigates-scandal-behind--marketing-company/">LuLaRoe</a> — the pyramid scheme that tricked American mums into selling cheap clothes to their friends and family with the promise of financial independence</li><li><a href="https://www.linkedin.com/pulse/my-experience-being-pyramid-scheme-amway-darren-mudd-ggf8c/">My Experience of Being in a Pyramid Scheme (Amway)</a> — a personal account by Darren Mudd on LinkedIn</li><li><a href="https://www.youtube.com/live/n1KS9yqbKfA?si=GC6lftXFgk53HZtj&amp;t=1200">Watch our recent live show at NYC Climate Week</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/bedaf166/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Gotcha! The Crypto Grift w/ Mark Hays</title>
      <itunes:episode>74</itunes:episode>
      <podcast:episode>74</podcast:episode>
      <itunes:title>Gotcha! The Crypto Grift w/ Mark Hays</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">5b9b4bf1-fc06-4a48-bff1-e5c502ba0f73</guid>
      <link>https://share.transistor.fm/s/3808184f</link>
      <description>
        <![CDATA[<p>Hey you! Do you want some free internet money? If this sounds too good to be true, that’s because it is!</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/making-myths-to-make-money-w-ai-now"><strong>Making Myths to Make Money w/ AI Now</strong></a><strong><br></strong><br></p><p>This is <em>Gotcha!</em> A four-part series on scams, how they work, and how technology is supercharging them. We start with Mark Hays from <a href="https://ourfinancialsecurity.org/">Americans for Financial Reform</a> (AFR), and get into one of the biggest tech-fuelled financial scams out there: cryptocurrencies.</p><p>Like many things that require mass buy-in, crypto started with an ideology (libertarianism, people hating on Wall Street post-2008). But where does that leave us now? What has crypto morphed into since then, and how does it deceive both consumers and regulators into thinking it’s something that it’s not?</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://files.libcom.org/files/Seeing%20Like%20a%20State%20-%20James%20C.%20Scott.pdf">Seeing Like a State</a> by James C. Scott</li><li><a href="https://www.hup.harvard.edu/books/9780674244771">Capital Without Borders</a> by Brooke Harrington</li><li><a href="https://www.upress.umn.edu/9781517901806/the-politics-of-bitcoin/">The Politics of Bitcoin</a> by David Golumbia</li><li>Learn more about <a href="https://ourfinancialsecurity.org/">Americans for Financial Reform</a></li><li>Check out <a href="https://www.web3isgoinggreat.com/">Web3 Is Going Great</a> by Molly White</li><li><a href="https://www.youtube.com/watch?v=YQ_xWvX1n9g&amp;t=5880s">Line Goes Up</a> by Folding Ideas — an excellent survey of all the tactics and rug-pulls during the height of the NFT boom</li><li><a href="https://www.bbc.co.uk/programmes/p07nkd84">The Missing Crypto Queen</a>: a podcast by BBC Sounds, about a large-scale crypto scam, where there wasn’t even any crypto</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Hey you! Do you want some free internet money? If this sounds too good to be true, that’s because it is!</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/making-myths-to-make-money-w-ai-now"><strong>Making Myths to Make Money w/ AI Now</strong></a><strong><br></strong><br></p><p>This is <em>Gotcha!</em> A four-part series on scams, how they work, and how technology is supercharging them. We start with Mark Hays from <a href="https://ourfinancialsecurity.org/">Americans for Financial Reform</a> (AFR), and get into one of the biggest tech-fuelled financial scams out there: cryptocurrencies.</p><p>Like many things that require mass buy-in, crypto started with an ideology (libertarianism, people hating on Wall Street post-2008). But where does that leave us now? What has crypto morphed into since then, and how does it deceive both consumers and regulators into thinking it’s something that it’s not?</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://files.libcom.org/files/Seeing%20Like%20a%20State%20-%20James%20C.%20Scott.pdf">Seeing Like a State</a> by James C. Scott</li><li><a href="https://www.hup.harvard.edu/books/9780674244771">Capital Without Borders</a> by Brooke Harrington</li><li><a href="https://www.upress.umn.edu/9781517901806/the-politics-of-bitcoin/">The Politics of Bitcoin</a> by David Golumbia</li><li>Learn more about <a href="https://ourfinancialsecurity.org/">Americans for Financial Reform</a></li><li>Check out <a href="https://www.web3isgoinggreat.com/">Web3 Is Going Great</a> by Molly White</li><li><a href="https://www.youtube.com/watch?v=YQ_xWvX1n9g&amp;t=5880s">Line Goes Up</a> by Folding Ideas — an excellent survey of all the tactics and rug-pulls during the height of the NFT boom</li><li><a href="https://www.bbc.co.uk/programmes/p07nkd84">The Missing Crypto Queen</a>: a podcast by BBC Sounds, about a large-scale crypto scam, where there wasn’t even any crypto</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </content:encoded>
      <pubDate>Fri, 19 Sep 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/3808184f/b8d8e1e8.mp3" length="80030288" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/ORr9QlkL8hJ_frMzCkent5AH3gEGa4A8buSXfIJRENQ/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lN2Mx/MjI1MmMyNjUzNDU3/ZTEyYzkwMjE4ZTll/Yjc5My5wbmc.jpg"/>
      <itunes:duration>3333</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Hey you! Do you want some free internet money? If this sounds too good to be true, that’s because it is!</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/making-myths-to-make-money-w-ai-now"><strong>Making Myths to Make Money w/ AI Now</strong></a><strong><br></strong><br></p><p>This is <em>Gotcha!</em> A four-part series on scams, how they work, and how technology is supercharging them. We start with Mark Hays from <a href="https://ourfinancialsecurity.org/">Americans for Financial Reform</a> (AFR), and get into one of the biggest tech-fuelled financial scams out there: cryptocurrencies.</p><p>Like many things that require mass buy-in, crypto started with an ideology (libertarianism, people hating on Wall Street post-2008). But where does that leave us now? What has crypto morphed into since then, and how does it deceive both consumers and regulators into thinking it’s something that it’s not?</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://files.libcom.org/files/Seeing%20Like%20a%20State%20-%20James%20C.%20Scott.pdf">Seeing Like a State</a> by James C. Scott</li><li><a href="https://www.hup.harvard.edu/books/9780674244771">Capital Without Borders</a> by Brooke Harrington</li><li><a href="https://www.upress.umn.edu/9781517901806/the-politics-of-bitcoin/">The Politics of Bitcoin</a> by David Golumbia</li><li>Learn more about <a href="https://ourfinancialsecurity.org/">Americans for Financial Reform</a></li><li>Check out <a href="https://www.web3isgoinggreat.com/">Web3 Is Going Great</a> by Molly White</li><li><a href="https://www.youtube.com/watch?v=YQ_xWvX1n9g&amp;t=5880s">Line Goes Up</a> by Folding Ideas — an excellent survey of all the tactics and rug-pulls during the height of the NFT boom</li><li><a href="https://www.bbc.co.uk/programmes/p07nkd84">The Missing Crypto Queen</a>: a podcast by BBC Sounds, about a large-scale crypto scam, where there wasn’t even any crypto</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/3808184f/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Gotcha!</title>
      <itunes:episode>73</itunes:episode>
      <podcast:episode>73</podcast:episode>
      <itunes:title>Gotcha!</itunes:title>
      <itunes:episodeType>trailer</itunes:episodeType>
      <guid isPermaLink="false">cb0af4fd-1e28-49a5-8afc-7a08a8e7a3d3</guid>
      <link>https://share.transistor.fm/s/81ca8d5b</link>
      <description>
        <![CDATA[<p><em>Gotcha!</em> is a four-part series on scams, how they work, and how technology is supercharging them — running through to October 10.</p><p>In the series we look at:</p><ol><li>Crypto: Mark Hays on how a thing touting financial freedom ended up being a kind of fin-cult, rife with scamming</li><li>Multi-Level Marketing: Bridget Read on the history of the biggest and most successful type of scam that still plagues us today</li><li>Generative AI: Data &amp; Society’s primer on how generative AI is juicing the scam industrial complex</li><li>Enshittification: Cory Doctorow on his upcoming book, and how the process of enshittification captures user-hostile practices that scam people into paying more and lock them into ecosystems</li></ol>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><em>Gotcha!</em> is a four-part series on scams, how they work, and how technology is supercharging them — running through to October 10.</p><p>In the series we look at:</p><ol><li>Crypto: Mark Hays on how a thing touting financial freedom ended up being a kind of fin-cult, rife with scamming</li><li>Multi-Level Marketing: Bridget Read on the history of the biggest and most successful type of scam that still plagues us today</li><li>Generative AI: Data &amp; Society’s primer on how generative AI is juicing the scam industrial complex</li><li>Enshittification: Cory Doctorow on his upcoming book, and how the process of enshittification captures user-hostile practices that scam people into paying more and lock them into ecosystems</li></ol>]]>
      </content:encoded>
      <pubDate>Thu, 18 Sep 2025 14:18:48 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/81ca8d5b/2613a4aa.mp3" length="2171017" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/r2DEokR5VwCsAshlXlH_jpnU7a5jcI0J3ds8wOP72Tw/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xOTcz/YTkzNmU1ZjYyZTA1/MTc1ZDA5ZjkyNTI5/MjllYS5wbmc.jpg"/>
      <itunes:duration>88</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><em>Gotcha!</em> is a four-part series on scams, how they work, and how technology is supercharging them — running through to October 10.</p><p>In the series we look at:</p><ol><li>Crypto: Mark Hays on how a thing touting financial freedom ended up being a kind of fin-cult, rife with scamming</li><li>Multi-Level Marketing: Bridget Read on the history of the biggest and most successful type of scam that still plagues us today</li><li>Generative AI: Data &amp; Society’s primer on how generative AI is juicing the scam industrial complex</li><li>Enshittification: Cory Doctorow on his upcoming book, and how the process of enshittification captures user-hostile practices that scam people into paying more and lock them into ecosystems</li></ol>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/81ca8d5b/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Nodestar: Turning Networks into Knowledge w/ Andrew Trask</title>
      <itunes:episode>72</itunes:episode>
      <podcast:episode>72</podcast:episode>
      <itunes:title>Nodestar: Turning Networks into Knowledge w/ Andrew Trask</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">5351bcc4-670a-4faa-9011-349777cfe5b4</guid>
      <link>https://share.transistor.fm/s/a350ffdc</link>
      <description>
        <![CDATA[<p>What if you could listen to multiple people at once, and actually understand them?</p><p><strong>More like this:</strong> <a href="https://www.themaybe.org/podcast/the-age-of-noise-w-eryk-salvaggio"><strong>The Age of Noise w/ Eryk Salvaggio</strong></a></p><p>In our final instalment (for now!) of Nodestar, Andrew Trask shares his vision for a world where we can assemble understanding from data everywhere. But not in a way that requires corporate control of our world.</p><p>If broadcasting is the act of talking to multiple people at once — what about broad <em>listening</em>? Where you listen to multiple sources of information, and actually learn something, without trampling over the control that individuals have over who sees what, when.</p><p>Andrew says that broad listening is difficult to achieve because of three huge problems: information overload, privacy, and veracity — and we are outsourcing these problems to central authorities, who abuse their power in deciding how to relay information to the public. What is Andrew doing at OpenMined to remedy this? Building protocols that decentralise access to training data for model development, obviously.</p><p><strong>Further Reading &amp; Resources</strong></p><ul><li><a href="https://internetat50.com/references/Licklider_Taylor_The-Computer-As-A-Communications-Device.pdf">The Computer as a Communication Device</a> by J.C.R. Licklider and Robert W. Taylor, 1968</li><li><a href="https://www.bibliophilebooks.com/worldbrain">World Brain</a> by H.G. Wells</li><li>Learn more about <a href="https://openmined.org/">OpenMined</a></li><li>We’re gonna be streaming LIVE at Climate Week — <a href="https://www.youtube.com/@TheMaybeMedia">subscribe to our YouTube</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>What if you could listen to multiple people at once, and actually understand them?</p><p><strong>More like this:</strong> <a href="https://www.themaybe.org/podcast/the-age-of-noise-w-eryk-salvaggio"><strong>The Age of Noise w/ Eryk Salvaggio</strong></a></p><p>In our final instalment (for now!) of Nodestar, Andrew Trask shares his vision for a world where we can assemble understanding from data everywhere. But not in a way that requires corporate control of our world.</p><p>If broadcasting is the act of talking to multiple people at once — what about broad <em>listening</em>? Where you listen to multiple sources of information, and actually learn something, without trampling over the control that individuals have over who sees what, when.</p><p>Andrew says that broad listening is difficult to achieve because of three huge problems: information overload, privacy, and veracity — and we are outsourcing these problems to central authorities, who abuse their power in deciding how to relay information to the public. What is Andrew doing at OpenMined to remedy this? Building protocols that decentralise access to training data for model development, obviously.</p><p><strong>Further Reading &amp; Resources</strong></p><ul><li><a href="https://internetat50.com/references/Licklider_Taylor_The-Computer-As-A-Communications-Device.pdf">The Computer as a Communication Device</a> by J.C.R. Licklider and Robert W. Taylor, 1968</li><li><a href="https://www.bibliophilebooks.com/worldbrain">World Brain</a> by H.G. Wells</li><li>Learn more about <a href="https://openmined.org/">OpenMined</a></li><li>We’re gonna be streaming LIVE at Climate Week — <a href="https://www.youtube.com/@TheMaybeMedia">subscribe to our YouTube</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </content:encoded>
      <pubDate>Fri, 12 Sep 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/a350ffdc/638ace63.mp3" length="62360942" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/AAIifRG-gQPTddCx3LPJ5DDtfGvFjIttytrGlKdrA5I/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yN2Rl/YWExMjgwMWMwOGU4/MDdlNGVlY2Y2OTFl/OTQwZC5wbmc.jpg"/>
      <itunes:duration>2595</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>What if you could listen to multiple people at once, and actually understand them?</p><p><strong>More like this:</strong> <a href="https://www.themaybe.org/podcast/the-age-of-noise-w-eryk-salvaggio"><strong>The Age of Noise w/ Eryk Salvaggio</strong></a></p><p>In our final instalment (for now!) of Nodestar, Andrew Trask shares his vision for a world where we can assemble understanding from data everywhere. But not in a way that requires corporate control of our world.</p><p>If broadcasting is the act of talking to multiple people at once — what about broad <em>listening</em>? Where you listen to multiple sources of information, and actually learn something, without trampling over the control that individuals have over who sees what, when.</p><p>Andrew says that broad listening is difficult to achieve because of three huge problems: information overload, privacy, and veracity — and we are outsourcing these problems to central authorities, who abuse their power in deciding how to relay information to the public. What is Andrew doing at OpenMined to remedy this? Building protocols that decentralise access to training data for model development, obviously.</p><p><strong>Further Reading &amp; Resources</strong></p><ul><li><a href="https://internetat50.com/references/Licklider_Taylor_The-Computer-As-A-Communications-Device.pdf">The Computer as a Communication Device</a> by J.C.R. Licklider and Robert W. Taylor, 1968</li><li><a href="https://www.bibliophilebooks.com/worldbrain">World Brain</a> by H.G. Wells</li><li>Learn more about <a href="https://openmined.org/">OpenMined</a></li><li>We’re gonna be streaming LIVE at Climate Week — <a href="https://www.youtube.com/@TheMaybeMedia">subscribe to our YouTube</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/a350ffdc/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Nodestar: Building Blacksky w/ Rudy Fraser</title>
      <itunes:episode>71</itunes:episode>
      <podcast:episode>71</podcast:episode>
      <itunes:title>Nodestar: Building Blacksky w/ Rudy Fraser</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">178d7da2-fd9a-46c1-b229-8101c95f5992</guid>
      <link>https://share.transistor.fm/s/b90e79e7</link>
      <description>
        <![CDATA[<p>Social media isn’t really social anymore. But that might be changing. Rudy Fraser over at Blacksky Algorithms has built something new. He has built the infrastructure to provide a safe online space for the Black community, and in the process challenges the ideas of hierarchical, centralised networks. His work — even outside the very cool development of Blacksky — is an amazing, concrete example of how the abstract ambitions of decentralisation can provide real value for people, and sets us up for a new kind of tech politics.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/how-to-actually-keep-kids-safe-online-w-kate-sim"><strong>How to (actually) Keep Kids Safe Online w/ Kate Sim</strong></a><strong><br></strong><br></p><p>This is part two of Nodestar, our three-part series on decentralisation. Blacksky is a community built using the AT Protocol by Rudy Fraser. Rudy built this both out of a creative drive to make something new using protocol thinking, and out of frustration over a lack of safe community spaces for Black folks where they could be themselves, and not have to experience anti-Black racism or misogynoir as a price of entry.</p><p>Rudy and Alix discuss curation as moderation, the future of community stewardship, freeing ourselves from centralised content decision-making, how technology might connect with mutual aid, and the beauty of what he refers to as ‘dotted-line communities’.</p><p><strong>Further reading:</strong></p><ul><li><a href="https://www.blackskyweb.xyz/">Blacksky Algorithms</a></li><li><a href="https://blacksky.community/">Blacksky the app</a> — if you want an alternative to Bluesky</li><li>More about <a href="https://linkat.blue/rudyfraser.com">Rudy Fraser</a></li><li><a href="https://opencollective.com/">Open Collective</a> — a fiscal host for communities and non-profits</li><li><a href="https://www.papertree.earth/roots">Paper Tree</a> — community food bank</li><li><a href="https://osf.io/preprints/mediarxiv/sf432_v1">The Implicit Feudalism of Online Communities</a> by Nathan Schneider</li><li><a href="https://bsky.app/profile/did:plc:24kqkpfy6z7avtgu3qg57vvl">Flashes</a> — a third-party Bluesky app for viewing photos</li><li><a href="https://www.jofreeman.com/joreen/tyranny.htm">The Tyranny of Structurelessness</a> by Joreen</li></ul><p><em>Rudy is a technologist, community organizer, and founder of Blacksky Algorithms, where he builds decentralized social media infrastructure that prioritizes community-driven safety, data ownership, and interoperability. As a Fellow at the Applied Social Media Lab at Harvard’s Berkman Klein Center for Internet &amp; Society, he advances research and development on technology that empowers marginalized communities, particularly Black users.</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Social media isn’t really social anymore. But that might be changing. Rudy Fraser over at Blacksky Algorithms has built something new. He has built the infrastructure to provide a safe online space for the Black community, and in the process challenges the ideas of hierarchical, centralised networks. His work — even outside the very cool development of Blacksky — is an amazing, concrete example of how the abstract ambitions of decentralisation can provide real value for people, and sets us up for a new kind of tech politics.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/how-to-actually-keep-kids-safe-online-w-kate-sim"><strong>How to (actually) Keep Kids Safe Online w/ Kate Sim</strong></a><strong><br></strong><br></p><p>This is part two of Nodestar, our three-part series on decentralisation. Blacksky is a community built using the AT Protocol by Rudy Fraser. Rudy built this both out of a creative drive to make something new using protocol thinking, and out of frustration over a lack of safe community spaces for Black folks where they could be themselves, and not have to experience anti-Black racism or misogynoir as a price of entry.</p><p>Rudy and Alix discuss curation as moderation, the future of community stewardship, freeing ourselves from centralised content decision-making, how technology might connect with mutual aid, and the beauty of what he refers to as ‘dotted-line communities’.</p><p><strong>Further reading:</strong></p><ul><li><a href="https://www.blackskyweb.xyz/">Blacksky Algorithms</a></li><li><a href="https://blacksky.community/">Blacksky the app</a> — if you want an alternative to Bluesky</li><li>More about <a href="https://linkat.blue/rudyfraser.com">Rudy Fraser</a></li><li><a href="https://opencollective.com/">Open Collective</a> — a fiscal host for communities and non-profits</li><li><a href="https://www.papertree.earth/roots">Paper Tree</a> — community food bank</li><li><a href="https://osf.io/preprints/mediarxiv/sf432_v1">The Implicit Feudalism of Online Communities</a> by Nathan Schneider</li><li><a href="https://bsky.app/profile/did:plc:24kqkpfy6z7avtgu3qg57vvl">Flashes</a> — a third-party Bluesky app for viewing photos</li><li><a href="https://www.jofreeman.com/joreen/tyranny.htm">The Tyranny of Structurelessness</a> by Joreen</li></ul><p><em>Rudy is a technologist, community organizer, and founder of Blacksky Algorithms, where he builds decentralized social media infrastructure that prioritizes community-driven safety, data ownership, and interoperability. As a Fellow at the Applied Social Media Lab at Harvard’s Berkman Klein Center for Internet &amp; Society, he advances research and development on technology that empowers marginalized communities, particularly Black users.</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 05 Sep 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/b90e79e7/40dabc86.mp3" length="60230502" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/ixlG_76_e5E3Eh6NZmq9Mr7QCaEctFGXg-y4H8cAm0I/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81Mjdk/NzgzZTZlMmRhNTY4/OGI4M2ZiYWE5MmQ0/Y2QwMS5wbmc.jpg"/>
      <itunes:duration>2506</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Social media isn’t really social anymore. But that might be changing. Rudy Fraser over at Blacksky Algorithms has built something new. He has built the infrastructure to provide a safe online space for the Black community, and in the process challenges the ideas of hierarchical, centralised networks. His work — even outside the very cool development of Blacksky — is an amazing, concrete example of how the abstract ambitions of decentralisation can provide real value for people, and sets us up for a new kind of tech politics.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/how-to-actually-keep-kids-safe-online-w-kate-sim"><strong>How to (actually) Keep Kids Safe Online w/ Kate Sim</strong></a><strong><br></strong><br></p><p>This is part two of Nodestar, our three-part series on decentralisation. Blacksky is a community built using the AT Protocol by Rudy Fraser. Rudy built this both out of a creative drive to make something new using protocol thinking, and out of frustration over a lack of safe community spaces for Black folks where they could be themselves, and not have to experience anti-Black racism or misogynoir as a price of entry.</p><p>Rudy and Alix discuss curation as moderation, the future of community stewardship, freeing ourselves from centralised content decision-making, how technology might connect with mutual aid, and the beauty of what he refers to as ‘dotted-line communities’.</p><p><strong>Further reading:</strong></p><ul><li><a href="https://www.blackskyweb.xyz/">Blacksky Algorithms</a></li><li><a href="https://blacksky.community/">Blacksky the app</a> — if you want an alternative to Bluesky</li><li>More about <a href="https://linkat.blue/rudyfraser.com">Rudy Fraser</a></li><li><a href="https://opencollective.com/">Open Collective</a> — a fiscal host for communities and non-profits</li><li><a href="https://www.papertree.earth/roots">Paper Tree</a> — community food bank</li><li><a href="https://osf.io/preprints/mediarxiv/sf432_v1">The Implicit Feudalism of Online Communities</a> by Nathan Schneider</li><li><a href="https://bsky.app/profile/did:plc:24kqkpfy6z7avtgu3qg57vvl">Flashes</a> — a third-party Bluesky app for viewing photos</li><li><a href="https://www.jofreeman.com/joreen/tyranny.htm">The Tyranny of Structurelessness</a> by Joreen</li></ul><p><em>Rudy is a technologist, community organizer, and founder of Blacksky Algorithms, where he builds decentralized social media infrastructure that prioritizes community-driven safety, data ownership, and interoperability. As a Fellow at the Applied Social Media Lab at Harvard’s Berkman Klein Center for Internet &amp; Society, he advances research and development on technology that empowers marginalized communities, particularly Black users.</em></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/b90e79e7/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Nodestar: The Eternal September w/ Mike Masnick</title>
      <itunes:episode>70</itunes:episode>
      <podcast:episode>70</podcast:episode>
      <itunes:title>Nodestar: The Eternal September w/ Mike Masnick</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">1688081f-fd28-4ef5-98af-f3a4cfc95314</guid>
      <link>https://share.transistor.fm/s/ea6209ba</link>
      <description>
        <![CDATA[<p>How did the internet become three companies in a trenchcoat? It wasn’t always that way! It used to be fun, and weird, and full of opportunity. To set the scene for the series, we spoke to a stalwart advocate of decentralisation, Mike Masnick.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/big-techs-bogus-vision-for-the-future-w-paris-marx"><strong>Big Tech’s Bogus Vision for the Future w/ Paris Marx</strong></a><strong><br></strong><br></p><p>This is part one of Nodestar, a three-part series on decentralisation: how the internet started as a wild west of decentralised exploration, got centralised into the hands of a small number of companies, and how the pendulum has begun its swing in the other direction.</p><p>In this episode Mike Masnick gives us a history of the early internet — starting with what was called the Eternal September, when millions of AOL users flooded the scene, creating a messy, unpredictable, exciting ecosystem of open protocols and terrible UIs.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.techdirt.com/2019/08/28/protocols-not-platforms-technological-approach-to-free-speech/">Protocols, Not Platforms</a> by Mike Masnick</li><li><a href="https://techcrunch.com/2025/06/13/beyond-bluesky-these-are-the-apps-building-social-experiences-on-the-at-protocol/">List of apps being built on AT Protocol</a></li><li><a href="https://www.graze.social/">Graze</a> — a service to help you make custom feeds with ads on AT Proto</li><li><a href="https://www.techdirt.com/2025/03/12/my-new-podcast-otherwise-objectionable-explains-why-everyones-wrong-about-section-230/">Otherwise Objectionable</a> — an eight-part podcast series on the history of Section 230</li><li><a href="https://www.techdirt.com/edition/podcast/">Techdirt</a> podcast</li><li><a href="https://ctrlaltspeech.com/">CTRL-ALT-SPEECH</a> podcast</li></ul><p><a 
href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>How did the internet become three companies in a trenchcoat? It wasn’t always that way! It used to be fun, and weird, and full of opportunity. To set the scene for the series, we spoke to a stalwart advocate of decentralisation, Mike Masnick.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/big-techs-bogus-vision-for-the-future-w-paris-marx"><strong>Big Tech’s Bogus Vision for the Future w/ Paris Marx</strong></a><strong><br></strong><br></p><p>This is part one of Nodestar, a three-part series on decentralisation: how the internet started as a wild west of decentralised exploration, got centralised into the hands of a small number of companies, and how the pendulum has begun its swing in the other direction.</p><p>In this episode Mike Masnick gives us a history of the early internet — starting with what was called the Eternal September, when millions of AOL users flooded the scene, creating a messy, unpredictable, exciting ecosystem of open protocols and terrible UIs.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.techdirt.com/2019/08/28/protocols-not-platforms-technological-approach-to-free-speech/">Protocols, Not Platforms</a> by Mike Masnick</li><li><a href="https://techcrunch.com/2025/06/13/beyond-bluesky-these-are-the-apps-building-social-experiences-on-the-at-protocol/">List of apps being built on AT Protocol</a></li><li><a href="https://www.graze.social/">Graze</a> — a service to help you make custom feeds with ads on AT Proto</li><li><a href="https://www.techdirt.com/2025/03/12/my-new-podcast-otherwise-objectionable-explains-why-everyones-wrong-about-section-230/">Otherwise Objectionable</a> — an eight-part podcast series on the history of Section 230</li><li><a href="https://www.techdirt.com/edition/podcast/">Techdirt</a> podcast</li><li><a href="https://ctrlaltspeech.com/">CTRL-ALT-SPEECH</a> podcast</li></ul><p><a 
href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </content:encoded>
      <pubDate>Fri, 29 Aug 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/ea6209ba/9023a46f.mp3" length="75971726" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/x-b352D_YYxA_fVhHNh9fSbdeey7Ll_Y13loJt8bOMg/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kMTU2/YTdmYTFjYjFiZmVh/ZjU0M2Y2YWEyNGVi/M2ExYy5wbmc.jpg"/>
      <itunes:duration>3163</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>How did the internet become three companies in a trenchcoat? It wasn’t always that way! It used to be fun, and weird, and full of opportunity. To set the scene for the series, we spoke to a stalwart advocate of decentralisation, Mike Masnick.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/big-techs-bogus-vision-for-the-future-w-paris-marx"><strong>Big Tech’s Bogus Vision for the Future w/ Paris Marx</strong></a><strong><br></strong><br></p><p>This is part one of Nodestar, a three-part series on decentralisation: how the internet started as a wild west of decentralised exploration, got centralised into the hands of a small number of companies, and how the pendulum has begun its swing in the other direction.</p><p>In this episode Mike Masnick gives us a history of the early internet — starting with what was called the Eternal September, when millions of AOL users flooded the scene, creating a messy, unpredictable, exciting ecosystem of open protocols and terrible UIs.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.techdirt.com/2019/08/28/protocols-not-platforms-technological-approach-to-free-speech/">Protocols, Not Platforms</a> by Mike Masnick</li><li><a href="https://techcrunch.com/2025/06/13/beyond-bluesky-these-are-the-apps-building-social-experiences-on-the-at-protocol/">List of apps being built on AT Protocol</a></li><li><a href="https://www.graze.social/">Graze</a> — a service to help you make custom feeds with ads on AT Proto</li><li><a href="https://www.techdirt.com/2025/03/12/my-new-podcast-otherwise-objectionable-explains-why-everyones-wrong-about-section-230/">Otherwise Objectionable</a> — an eight-part podcast series on the history of Section 230</li><li><a href="https://www.techdirt.com/edition/podcast/">Techdirt</a> podcast</li><li><a href="https://ctrlaltspeech.com/">CTRL-ALT-SPEECH</a> podcast</li></ul><p><a 
href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/ea6209ba/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Short: UK Groups Sue To Block Data Center Expansion</title>
      <itunes:episode>69</itunes:episode>
      <podcast:episode>69</podcast:episode>
      <itunes:title>Short: UK Groups Sue To Block Data Center Expansion</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">792889b4-f7bc-4843-b84d-695f62116151</guid>
      <link>https://share.transistor.fm/s/641c9260</link>
      <description>
        <![CDATA[<p>Foxglove and Global Action Plan have just sued the UK government over its YOLO hyperscale data center plans.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/net0-data-center-sprawl-new-research-from-the-maybe"><strong>Net0++: Data Centre Sprawl</strong></a><strong><br></strong><br></p><p>Local government rejected the data center. But Starmer’s administration overruled them. They want to force the development of a water-guzzling, energy-draining data center on a local community that has said no. And all of this is on the green belt. The lawsuit filed this week might put a stop to those plans.</p><p>Alix sat down with Ollie Hayes from Global Action Plan and Martha Dark from Foxglove to discuss the legal challenge filed this week. Why now? Isn’t the UK aiming for Net 0? And how does this relate to the UK government’s wider approach to AI?</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.theguardian.com/environment/2025/aug/21/angela-rayner-hit-with-legal-challenge-over-datacentre-on-green-belt-land?link_id=3&amp;can_id=2f0b8f566723a602663d81f9f0d78d1d&amp;source=email-data-centres-were-taking-the-government-to-court-2&amp;email_referrer=email_2856631&amp;email_subject=uk-govt-in-court-over-data-centres&amp;&amp;">Read the Guardian article about the suit</a></li><li><a href="https://www.telegraph.co.uk/business/2025/08/21/angela-rayner-net-zero-legal-challenge-over-1bn-data-centre/">Read the Telegraph piece about the suit</a></li><li><a href="https://www.crowdjustice.com/case/datacentre-challenge/?link_id=2&amp;can_id=2f0b8f566723a602663d81f9f0d78d1d&amp;source=email-data-centres-were-taking-the-government-to-court-2&amp;email_referrer=email_2856631&amp;email_subject=uk-govt-in-court-over-data-centres&amp;&amp;">Donate to the campaign</a></li><li><a href="https://www.globalactionplan.org.uk/online-climate/climate-vs-big-tech/data-centres">Data Centre Finder</a> on Global Action 
Plan</li></ul><p><strong>Computer Says Maybe Shorts bring in experts to give their ten-minute take on recent news. If there’s ever a news story you think we should bring in expertise on for the show, please email </strong><a href="mailto:pod@themaybe.org"><strong>pod@themaybe.org</strong></a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Foxglove and Global Action Plan have just sued the UK government over its YOLO hyperscale data center plans.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/net0-data-center-sprawl-new-research-from-the-maybe"><strong>Net0++: Data Centre Sprawl</strong></a><strong><br></strong><br></p><p>Local government rejected the data center. But Starmer’s administration overruled them. They want to force the development of a water-guzzling, energy-draining data center on a local community that has said no. And all of this is on the green belt. The lawsuit filed this week might put a stop to those plans.</p><p>Alix sat down with Ollie Hayes from Global Action Plan and Martha Dark from Foxglove to discuss the legal challenge filed this week. Why now? Isn’t the UK aiming for Net 0? And how does this relate to the UK government’s wider approach to AI?</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.theguardian.com/environment/2025/aug/21/angela-rayner-hit-with-legal-challenge-over-datacentre-on-green-belt-land?link_id=3&amp;can_id=2f0b8f566723a602663d81f9f0d78d1d&amp;source=email-data-centres-were-taking-the-government-to-court-2&amp;email_referrer=email_2856631&amp;email_subject=uk-govt-in-court-over-data-centres&amp;&amp;">Read the Guardian article about the suit</a></li><li><a href="https://www.telegraph.co.uk/business/2025/08/21/angela-rayner-net-zero-legal-challenge-over-1bn-data-centre/">Read the Telegraph piece about the suit</a></li><li><a href="https://www.crowdjustice.com/case/datacentre-challenge/?link_id=2&amp;can_id=2f0b8f566723a602663d81f9f0d78d1d&amp;source=email-data-centres-were-taking-the-government-to-court-2&amp;email_referrer=email_2856631&amp;email_subject=uk-govt-in-court-over-data-centres&amp;&amp;">Donate to the campaign</a></li><li><a href="https://www.globalactionplan.org.uk/online-climate/climate-vs-big-tech/data-centres">Data Centre Finder</a> on Global Action 
Plan</li></ul><p><strong>Computer Says Maybe Shorts bring in experts to give their ten-minute take on recent news. If there’s ever a news story you think we should bring in expertise on for the show, please email </strong><a href="mailto:pod@themaybe.org"><strong>pod@themaybe.org</strong></a></p>]]>
      </content:encoded>
      <pubDate>Fri, 22 Aug 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/641c9260/f174ac07.mp3" length="20110339" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/K9nckEojGSRsbqJqVTcKIukS7MxKSIaUa-id8b7T60c/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zODBk/NmQ0NzViNGQ3NGFi/MGJmYWVjNGYzNjEx/NDQ3ZC5wbmc.jpg"/>
      <itunes:duration>836</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Foxglove and Global Action Plan have just sued the UK government over its YOLO hyperscale data center plans.</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/net0-data-center-sprawl-new-research-from-the-maybe"><strong>Net0++: Data Centre Sprawl</strong></a><strong><br></strong><br></p><p>Local government rejected the data center. But Starmer’s administration overruled them. They want to force the development of a water-guzzling, energy-draining data center on a local community that has said no. And all of this is on the green belt. The lawsuit filed this week might put a stop to those plans.</p><p>Alix sat down with Ollie Hayes from Global Action Plan and Martha Dark from Foxglove to discuss the legal challenge filed this week. Why now? Isn’t the UK aiming for Net 0? And how does this relate to the UK government’s wider approach to AI?</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.theguardian.com/environment/2025/aug/21/angela-rayner-hit-with-legal-challenge-over-datacentre-on-green-belt-land?link_id=3&amp;can_id=2f0b8f566723a602663d81f9f0d78d1d&amp;source=email-data-centres-were-taking-the-government-to-court-2&amp;email_referrer=email_2856631&amp;email_subject=uk-govt-in-court-over-data-centres&amp;&amp;">Read the Guardian article about the suit</a></li><li><a href="https://www.telegraph.co.uk/business/2025/08/21/angela-rayner-net-zero-legal-challenge-over-1bn-data-centre/">Read the Telegraph piece about the suit</a></li><li><a href="https://www.crowdjustice.com/case/datacentre-challenge/?link_id=2&amp;can_id=2f0b8f566723a602663d81f9f0d78d1d&amp;source=email-data-centres-were-taking-the-government-to-court-2&amp;email_referrer=email_2856631&amp;email_subject=uk-govt-in-court-over-data-centres&amp;&amp;">Donate to the campaign</a></li><li><a href="https://www.globalactionplan.org.uk/online-climate/climate-vs-big-tech/data-centres">Data Centre Finder</a> on Global Action 
Plan</li></ul><p><strong>Computer Says Maybe Shorts bring in experts to give their ten-minute take on recent news. If there’s ever a news story you think we should bring in expertise on for the show, please email </strong><a href="mailto:pod@themaybe.org"><strong>pod@themaybe.org</strong></a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/641c9260/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Big Tech’s Bogus Vision for the Future w/ Paris Marx</title>
      <itunes:episode>68</itunes:episode>
      <podcast:episode>68</podcast:episode>
      <itunes:title>Big Tech’s Bogus Vision for the Future w/ Paris Marx</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">29c00039-dcf2-4e77-be3a-d30c684c4b36</guid>
      <link>https://share.transistor.fm/s/861a86b3</link>
      <description>
        <![CDATA[<p>What’s the deal with Silicon Valley selling imagined futures and never delivering on them? What are the consequences of an industry all-in on AI? What if we thought more deeply than just ‘more compute’?</p><p><strong>More like this: </strong><a href="https://saysmaybe.com/podcast/net-0-big-dirty-data-centres"><strong>Big Dirty Data Centres</strong></a><strong> with Boxi Wu and Jenna Ruddock<br></strong><br></p><p>This week, Paris Marx (host of Tech Won’t Save Us) joined Alix to chat about his recent work on hyperscale data centres, and his upcoming book on the subject.</p><p>We discuss everything from the US shooting itself in the foot with its lack of meaningful industrial policy to how decades of lackluster political vision from governments created a vacuum that has now been filled with Silicon Valley's garbage ideas. And of course, how the US’s outsourcing of manufacturing to China has catalysed China’s domestic technological progress.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Buy <a href="https://bookshop.org/p/books/road-to-nowhere-what-silicon-valley-gets-wrong-about-the-future-of-transportation-paris-marx/17416701?aid=18331&amp;ean=9781839765889&amp;listref=cities-and-transport">Road To Nowhere: What Silicon Valley Gets Wrong About the Future of Transportation</a> by Paris Marx</li><li><a href="https://techwontsave.us/data-vampires">Data Vampires</a> — limited series on data centres by Tech Won’t Save Us</li><li><a href="https://uk.bookshop.org/p/books/untitled-business-book-patrick-mcgee/7729161">Apple in China</a> by Patrick McGee</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>What’s the deal with Silicon Valley selling imagined futures and never delivering on them? What are the consequences of an industry all-in on AI? What if we thought more deeply than just ‘more compute’?</p><p><strong>More like this: </strong><a href="https://saysmaybe.com/podcast/net-0-big-dirty-data-centres"><strong>Big Dirty Data Centres</strong></a><strong> with Boxi Wu and Jenna Ruddock<br></strong><br></p><p>This week, Paris Marx (host of Tech Won’t Save Us) joined Alix to chat about his recent work on hyperscale data centres, and his upcoming book on the subject.</p><p>We discuss everything from the US shooting itself in the foot with its lack of meaningful industrial policy to how decades of lackluster political vision from governments created a vacuum that has now been filled with Silicon Valley's garbage ideas. And of course, how the US’s outsourcing of manufacturing to China has catalysed China’s domestic technological progress.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Buy <a href="https://bookshop.org/p/books/road-to-nowhere-what-silicon-valley-gets-wrong-about-the-future-of-transportation-paris-marx/17416701?aid=18331&amp;ean=9781839765889&amp;listref=cities-and-transport">Road To Nowhere: What Silicon Valley Gets Wrong About the Future of Transportation</a> by Paris Marx</li><li><a href="https://techwontsave.us/data-vampires">Data Vampires</a> — limited series on data centres by Tech Won’t Save Us</li><li><a href="https://uk.bookshop.org/p/books/untitled-business-book-patrick-mcgee/7729161">Apple in China</a> by Patrick McGee</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </content:encoded>
      <pubDate>Fri, 22 Aug 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/861a86b3/9431e0ec.mp3" length="59390022" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/q2sedoO9uVPBTK2LVpFvH7T567J-pRqq6AVq7RMMI6M/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9hYzQ0/ZWJhOTNmZGM1Nzc5/ZmE2YWQ1OWVmOTI0/MTRmMi5wbmc.jpg"/>
      <itunes:duration>2472</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>What’s the deal with Silicon Valley selling imagined futures and never delivering on them? What are the consequences of an industry all-in on AI? What if we thought more deeply than just ‘more compute’?</p><p><strong>More like this: </strong><a href="https://saysmaybe.com/podcast/net-0-big-dirty-data-centres"><strong>Big Dirty Data Centres</strong></a><strong> with Boxi Wu and Jenna Ruddock<br></strong><br></p><p>This week, Paris Marx (host of Tech Won’t Save Us) joined Alix to chat about his recent work on hyperscale data centres, and his upcoming book on the subject.</p><p>We discuss everything from the US shooting itself in the foot with its lack of meaningful industrial policy to how decades of lackluster political vision from governments created a vacuum that has now been filled with Silicon Valley's garbage ideas. And of course, how the US’s outsourcing of manufacturing to China has catalysed China’s domestic technological progress.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Buy <a href="https://bookshop.org/p/books/road-to-nowhere-what-silicon-valley-gets-wrong-about-the-future-of-transportation-paris-marx/17416701?aid=18331&amp;ean=9781839765889&amp;listref=cities-and-transport">Road To Nowhere: What Silicon Valley Gets Wrong About the Future of Transportation</a> by Paris Marx</li><li><a href="https://techwontsave.us/data-vampires">Data Vampires</a> — limited series on data centres by Tech Won’t Save Us</li><li><a href="https://uk.bookshop.org/p/books/untitled-business-book-patrick-mcgee/7729161">Apple in China</a> by Patrick McGee</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/861a86b3/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Consciously Uncoupling from Silicon Valley w/ Cori Crider</title>
      <itunes:episode>67</itunes:episode>
      <podcast:episode>67</podcast:episode>
      <itunes:title>Consciously Uncoupling from Silicon Valley w/ Cori Crider</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a7f5e6bf-95fa-4ace-85d2-09bfdb4b8318</guid>
      <link>https://share.transistor.fm/s/7174ddfa</link>
      <description>
        <![CDATA[<p>How do we yank power out of tech oligarch hands without handing it over to someone else?</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/is-digitisation-killing-democracy-w-marietje-schaake"><strong>Is Digitisation Killing Democracy? w/ Marietje Schaake</strong></a><strong><br></strong><br></p><p>Cori Crider is a fearless litigator turned market-shaping advocate. She spent many years litigating at leading human rights organisation Reprieve, and then co-founded Foxglove so she could sue big tech. Now she’s set her sights on market concentration.</p><p>Cori’s analysis concludes with a hopeful message: we are not stuck in place with eight dudes running the show. In fact, we’ve been here before. The computer age never would have happened the way it did if thousands of patents hadn’t been liberated from Bell Labs in 1956. How can we use similar tactics to dethrone monopolies and think about how Europe and other large jurisdictions can decouple themselves from Silicon Valley infrastructure?</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.ftc.gov/system/files/ftc_gov/pdf/antitrust-policy-for-the-conservative-meador.pdf">Antitrust Policy for the Conservative</a> by Mark Meador of the FTC</li><li><a href="https://www.openmarketsinstitute.org/">The Open Markets Institute</a></li><li><a href="https://futureinstitute.tech/">The Future of Tech Institute</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p>Do you have an idea for the show? Email <a href="mailto:pod@themaybe.org">pod@themaybe.org</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>How do we yank power out of tech oligarch hands without handing it over to someone else?</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/is-digitisation-killing-democracy-w-marietje-schaake"><strong>Is Digitisation Killing Democracy? w/ Marietje Schaake</strong></a><strong><br></strong><br></p><p>Cori Crider is a fearless litigator turned market-shaping advocate. She spent many years litigating at leading human rights organisation Reprieve, and then co-founded Foxglove so she could sue big tech. Now she’s set her sights on market concentration.</p><p>Cori’s analysis concludes with a hopeful message: we are not stuck in place with eight dudes running the show. In fact, we’ve been here before. The computer age never would have happened the way it did if thousands of patents hadn’t been liberated from Bell Labs in 1956. How can we use similar tactics to dethrone monopolies and think about how Europe and other large jurisdictions can decouple themselves from Silicon Valley infrastructure?</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.ftc.gov/system/files/ftc_gov/pdf/antitrust-policy-for-the-conservative-meador.pdf">Antitrust Policy for the Conservative</a> by Mark Meador of the FTC</li><li><a href="https://www.openmarketsinstitute.org/">The Open Markets Institute</a></li><li><a href="https://futureinstitute.tech/">The Future of Tech Institute</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p>Do you have an idea for the show? Email <a href="mailto:pod@themaybe.org">pod@themaybe.org</a></p>]]>
      </content:encoded>
      <pubDate>Fri, 15 Aug 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/7174ddfa/7c2d48bd.mp3" length="77444651" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>3225</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>How do we yank power out of tech oligarch hands without handing it over to someone else?</p><p><strong>More like this: </strong><a href="https://www.themaybe.org/podcast/is-digitisation-killing-democracy-w-marietje-schaake"><strong>Is Digitisation Killing Democracy? w/ Marietje Schaake</strong></a><strong><br></strong><br></p><p>Cori Crider is a fearless litigator turned market-shaping advocate. She spent many years litigating at leading human rights organisation Reprieve, and then co-founded Foxglove so she could sue big tech. Now she’s set her sights on market concentration.</p><p>Cori’s analysis concludes with a hopeful message: we are not stuck in place with eight dudes running the show. In fact, we’ve been here before. The computer age never would have happened the way it did if thousands of patents hadn’t been liberated from Bell Labs in 1956. How can we use similar tactics to dethrone monopolies and think about how Europe and other large jurisdictions can decouple themselves from Silicon Valley infrastructure?</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.ftc.gov/system/files/ftc_gov/pdf/antitrust-policy-for-the-conservative-meador.pdf">Antitrust Policy for the Conservative</a> by Mark Meador of the FTC</li><li><a href="https://www.openmarketsinstitute.org/">The Open Markets Institute</a></li><li><a href="https://futureinstitute.tech/">The Future of Tech Institute</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p>Do you have an idea for the show? Email <a href="mailto:pod@themaybe.org">pod@themaybe.org</a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/7174ddfa/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>After the FAccT: Labour and Misrepresentation</title>
      <itunes:episode>66</itunes:episode>
      <podcast:episode>66</podcast:episode>
      <itunes:title>After the FAccT: Labour and Misrepresentation</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4ae930e4-ee93-4926-b8ce-c5cb1ed76f7d</guid>
      <link>https://share.transistor.fm/s/9da9068d</link>
      <description>
        <![CDATA[<p>Did you miss FAccT? We interviewed some of our favourite session organisers!</p><p><strong>More like this: Part One of our FAccT roundup: </strong><a href="https://www.themaybe.org/podcast/after-the-facct-materiality-and-militarisation"><strong>Materiality and Militarisation</strong></a><strong>.<br></strong><br></p><p>Georgia, Soizic, and Hanna from The Maybe team just went to FAccT. Georgia and Soizic interviewed a bunch of amazing researchers, practitioners, and artists to give you a taste of what the conference was like if you didn’t get to go. Alix missed it too — you’ll learn along with her!</p><p>In part two we look into how AI is used to misrepresent people through things like image generation, and even care labour. These are conversations about AI misrepresenting hidden identities, care work becoming data work, how pride and identity are tied to labour — and how labour organisers are building solidarity and movement around this.</p><p>Who features in this episode:</p><ul><li><a href="https://priyagoswami.net/">Priya Goswami</a> brought a multimedia exhibition to FAccT: <a href="https://programs.sigchi.org/facct/2025/program/content/198141">Digital Bharat</a>. This explores the invisibilised care work and manual labour by women in India, and how their day-to-day has become mediated by digital public infrastructures.</li><li><a href="https://kimiwenzel.com/">Kimi Wenzel</a> organised <a href="https://programs.sigchi.org/facct/2025/program/content/198140">Invisible by Design? 
Generative AI and Mirrors of Misrepresentation</a>, which invited users to confront generated images of themselves and discuss issues of representation within these systems.</li><li><a href="https://alex-hanna.com/">Alex Hanna</a> and <a href="https://techworkerhandbook.org/stories/clarissa-redwine/">Clarissa Redwine</a> ran the <a href="https://programs.sigchi.org/facct/2025/program/content/198130">AI Workers Inquiry</a>, which brought people together to share how AI has transformed their work, identify common ground, and potentially begin building resistance.</li></ul><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://techworkerscoalition.org/circuit-breakers/">Circuit Breakers</a> — tech worker conference organised by Clarissa Redwine</li><li><a href="https://scholar.google.com/citations?user=z-9xGHgAAAAJ&amp;hl=en">Kimi Wenzel’s research</a></li><li>Buy <a href="https://thecon.ai/">The AI Con</a> by Alex Hanna and Emily Bender</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Did you miss FAccT? We interviewed some of our favourite session organisers!</p><p><strong>More like this: Part One of our FAccT roundup: </strong><a href="https://www.themaybe.org/podcast/after-the-facct-materiality-and-militarisation"><strong>Materiality and Militarisation</strong></a><strong>.<br></strong><br></p><p>Georgia, Soizic, and Hanna from The Maybe team just went to FAccT. Georgia and Soizic interviewed a bunch of amazing researchers, practitioners, and artists to give you a taste of what the conference was like if you didn’t get to go. Alix missed it too — you’ll learn along with her!</p><p>In part two we look into how AI is used to misrepresent people through things like image generation, and even care labour. These are conversations about AI misrepresenting hidden identities, care work becoming data work, how pride and identity are tied to labour — and how labour organisers are building solidarity and movement around this.</p><p>Who features in this episode:</p><ul><li><a href="https://priyagoswami.net/">Priya Goswami</a> brought a multimedia exhibition to FAccT: <a href="https://programs.sigchi.org/facct/2025/program/content/198141">Digital Bharat</a>. This explores the invisibilised care work and manual labour by women in India, and how their day-to-day has become mediated by digital public infrastructures.</li><li><a href="https://kimiwenzel.com/">Kimi Wenzel</a> organised <a href="https://programs.sigchi.org/facct/2025/program/content/198140">Invisible by Design? Generative AI and Mirrors of Misrepresentation</a>, which invited users to confront generated images of themselves and discuss issues of representation within these systems.</li><li><a href="https://alex-hanna.com/">Alex Hanna</a> and <a href="https://techworkerhandbook.org/stories/clarissa-redwine/">Clarissa Redwine</a> ran the <a href="https://programs.sigchi.org/facct/2025/program/content/198130">AI Workers Inquiry</a>, which brought people together to share how AI has transformed their work, identify common ground, and potentially begin building resistance.</li></ul><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://techworkerscoalition.org/circuit-breakers/">Circuit Breakers</a> — tech worker conference organised by Clarissa Redwine</li><li><a href="https://scholar.google.com/citations?user=z-9xGHgAAAAJ&amp;hl=en">Kimi Wenzel’s research</a></li><li>Buy <a href="https://thecon.ai/">The AI Con</a> by Alex Hanna and Emily Bender</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </content:encoded>
      <pubDate>Fri, 25 Jul 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/9da9068d/904dcfef.mp3" length="73334882" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>3054</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Did you miss FAccT? We interviewed some of our favourite session organisers!</p><p><strong>More like this: Part One of our FAccT roundup: </strong><a href="https://www.themaybe.org/podcast/after-the-facct-materiality-and-militarisation"><strong>Materiality and Militarisation</strong></a><strong>.<br></strong><br></p><p>Georgia, Soizic, and Hanna from The Maybe team just went to FAccT. Georgia and Soizic interviewed a bunch of amazing researchers, practitioners, and artists to give you a taste of what the conference was like if you didn’t get to go. Alix missed it too — you’ll learn along with her!</p><p>In part two we look into how AI is used to misrepresent people through things like image generation, and even care labour. These are conversations about AI misrepresenting hidden identities, care work becoming data work, how pride and identity are tied to labour — and how labour organisers are building solidarity and movement around this.</p><p>Who features in this episode:</p><ul><li><a href="https://priyagoswami.net/">Priya Goswami</a> brought a multimedia exhibition to FAccT: <a href="https://programs.sigchi.org/facct/2025/program/content/198141">Digital Bharat</a>. This explores the invisibilised care work and manual labour by women in India, and how their day-to-day has become mediated by digital public infrastructures.</li><li><a href="https://kimiwenzel.com/">Kimi Wenzel</a> organised <a href="https://programs.sigchi.org/facct/2025/program/content/198140">Invisible by Design? Generative AI and Mirrors of Misrepresentation</a>, which invited users to confront generated images of themselves and discuss issues of representation within these systems.</li><li><a href="https://alex-hanna.com/">Alex Hanna</a> and <a href="https://techworkerhandbook.org/stories/clarissa-redwine/">Clarissa Redwine</a> ran the <a href="https://programs.sigchi.org/facct/2025/program/content/198130">AI Workers Inquiry</a>, which brought people together to share how AI has transformed their work, identify common ground, and potentially begin building resistance.</li></ul><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://techworkerscoalition.org/circuit-breakers/">Circuit Breakers</a> — tech worker conference organised by Clarissa Redwine</li><li><a href="https://scholar.google.com/citations?user=z-9xGHgAAAAJ&amp;hl=en">Kimi Wenzel’s research</a></li><li>Buy <a href="https://thecon.ai/">The AI Con</a> by Alex Hanna and Emily Bender</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/9da9068d/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Short: Musk: Reanimating Apartheid w/ Nic Dawes</title>
      <itunes:episode>65</itunes:episode>
      <podcast:episode>65</podcast:episode>
      <itunes:title>Short: Musk: Reanimating Apartheid w/ Nic Dawes</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">75adeec0-92cd-4dfc-9044-e14d69bad994</guid>
      <link>https://share.transistor.fm/s/c0a2c1e3</link>
      <description>
        <![CDATA[<p>In May, Grok couldn’t stop talking about white genocide. This injection of right-wing South African politics triggered a conversation with a Musk contemporary, Nic Dawes.</p><p>In this short, Nic shares his perspective on how post-apartheid white communities have dealt with apartheid’s end, and how Musk is seeking out an information environment that can recreate the apartheid information system: Grok is just an extension of a media ecosystem designed to soothe guilt and stoke resentment.</p><p><strong>Computer Says Maybe Shorts cover recent news with an expert in our network. If there is a news story you want us to cover, please email </strong><a href="mailto:pod@themaybe.org"><strong>pod@themaybe.org</strong></a><strong><br></strong><br></p><p><em>Nic is Executive Director at THE CITY, a news outlet serving the people of New York through independent journalism that holds the powerful to account, deepens democratic participation, and helps make sense of the greatest city in the world. He has led news and human rights organizations on three continents, and was previously Deputy Executive Director of Human Rights Watch, Chief Content Officer of Hindustan Times in Delhi, and Editor-in-Chief of South Africa's Mail &amp; Guardian newspaper.</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In May, Grok couldn’t stop talking about white genocide. This injection of right-wing South African politics triggered a conversation with a Musk contemporary, Nic Dawes.</p><p>In this short, Nic shares his perspective on how post-apartheid white communities have dealt with apartheid’s end, and how Musk is seeking out an information environment that can recreate the apartheid information system: Grok is just an extension of a media ecosystem designed to soothe guilt and stoke resentment.</p><p><strong>Computer Says Maybe Shorts cover recent news with an expert in our network. If there is a news story you want us to cover, please email </strong><a href="mailto:pod@themaybe.org"><strong>pod@themaybe.org</strong></a><strong><br></strong><br></p><p><em>Nic is Executive Director at THE CITY, a news outlet serving the people of New York through independent journalism that holds the powerful to account, deepens democratic participation, and helps make sense of the greatest city in the world. He has led news and human rights organizations on three continents, and was previously Deputy Executive Director of Human Rights Watch, Chief Content Officer of Hindustan Times in Delhi, and Editor-in-Chief of South Africa's Mail &amp; Guardian newspaper.</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 23 Jul 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/c0a2c1e3/389ee690.mp3" length="20694185" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>860</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In May, Grok couldn’t stop talking about white genocide. This injection of right-wing South African politics triggered a conversation with a Musk contemporary, Nic Dawes.</p><p>In this short, Nic shares his perspective on how post-apartheid white communities have dealt with apartheid’s end, and how Musk is seeking out an information environment that can recreate the apartheid information system: Grok is just an extension of a media ecosystem designed to soothe guilt and stoke resentment.</p><p><strong>Computer Says Maybe Shorts cover recent news with an expert in our network. If there is a news story you want us to cover, please email </strong><a href="mailto:pod@themaybe.org"><strong>pod@themaybe.org</strong></a><strong><br></strong><br></p><p><em>Nic is Executive Director at THE CITY, a news outlet serving the people of New York through independent journalism that holds the powerful to account, deepens democratic participation, and helps make sense of the greatest city in the world. He has led news and human rights organizations on three continents, and was previously Deputy Executive Director of Human Rights Watch, Chief Content Officer of Hindustan Times in Delhi, and Editor-in-Chief of South Africa's Mail &amp; Guardian newspaper.</em></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/c0a2c1e3/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>After the FAccT: Materiality and Militarisation</title>
      <itunes:episode>64</itunes:episode>
      <podcast:episode>64</podcast:episode>
      <itunes:title>After the FAccT: Materiality and Militarisation</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">84a4440e-1712-412f-bd13-e0fab21af5d0</guid>
      <link>https://share.transistor.fm/s/1cedd59f</link>
      <description>
        <![CDATA[<p>Georgia, Soizic, and Hanna from The Maybe team just went to FAccT. Georgia and Soizic interviewed a bunch of amazing researchers, practitioners, and artists to give you a taste of what the conference was like if you didn’t get to go. Alix missed it too — you’ll learn along with her!</p><p>In part one we explore the depth of AI’s hidden material impacts, including its use in military applications and to aid genocide. One of our interviewees talked about why they spoke up at the town hall — questioning why FAccT, the biggest AI ethics conference there is, accepts sponsorship from those same military contractors.</p><p>Who we interviewed for Part One:</p><ul><li><a href="https://www.charispapaevangelou.eu/">Charis Papaevangelou</a>, who co-organised a CRAFT session called <a href="https://programs.sigchi.org/facct/2025/program/content/198967">The Hidden Costs of Digital Sovereignty</a>. Greece is trying to position itself as a central digital hub by building data centres and participating in the ‘fourth industrial revolution’ — but what does this actually mean for the people and infrastructure of Greece?</li><li><a href="https://fourcoffees.github.io/georgiapanagiotidou/">Georgia Panagiotidou</a> ran a session on <a href="https://programs.sigchi.org/facct/2025/program/content/198128">The Tools and Tactics for Supporting Agency in AI Environmental Action</a> — offering some ideas on how the community can get together and meaningfully resist extractive practices.</li><li><a href="https://davidwidder.me/">David Widder</a> discussed his workshop on <a href="https://programs.sigchi.org/facct/2025/program/content/198395">Silicon Valley and The Pentagon</a>, and his research on the recent history of the DoD funding academic papers — is it ever worth taking military money, even for basic research?</li><li><a href="https://weandai.org/team/tania-duarte/">Tania Duarte</a> offered something very different: a <a href="https://programs.sigchi.org/facct/2025/program/content/198134">demonstration of two workshops</a> she runs for marginalised groups, to better explain the true materiality of AI, and build knowledge that gives people more agency over the dominant narratives and framings in the industry.</li></ul><p><strong>Further reading &amp; resources:</strong></p><ul><li>Recording of Charis’s CRAFT session: <a href="https://www.youtube.com/watch?v=Xb2UhtBsr2w">The Hidden Cost of Digital Sovereignty</a></li><li><a href="https://enainstitute.org/en/publication/cloud-hiding-undersea-cables-data-centers-in-the-mediterranean-crossroads/">Cloud hiding undersea: Cables &amp; Data Centers in the Mediterranean crossroads</a> by Theodora Kostaka</li><li><a href="https://davidwidder.me/dod.pdf">Basic Research, Lethal Effects: Military AI Research Funding as Enlistment</a> and <a href="https://www.nature.com/articles/s41586-024-08141-1">Why ‘open’ AI systems are actually closed and why this matters</a> by David Widder</li><li><a href="https://www.youtube.com/watch?v=LFroWd0L5as&amp;t=2376s">The video</a> that David quoted the Carnegie Mellon professor from — David was paraphrasing in the episode!</li><li><a href="https://weandai.org/">We and AI</a> &amp; <a href="https://betterimagesofai.org/">Better Images of AI</a></li><li>More on <a href="https://fourcoffees.github.io/georgiapanagiotidou/">Georgia Panagiotidou’s</a> work and <a href="https://fourcoffees.github.io/environmentaltactics/">resources from her session</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Georgia, Soizic, and Hanna from The Maybe team just went to FAccT. Georgia and Soizic interviewed a bunch of amazing researchers, practitioners, and artists to give you a taste of what the conference was like if you didn’t get to go. Alix missed it too — you’ll learn along with her!</p><p>In part one we explore the depth of AI’s hidden material impacts, including its use in military applications and to aid genocide. One of our interviewees talked about why they spoke up at the town hall — questioning why FAccT, the biggest AI ethics conference there is, accepts sponsorship from those same military contractors.</p><p>Who we interviewed for Part One:</p><ul><li><a href="https://www.charispapaevangelou.eu/">Charis Papaevangelou</a>, who co-organised a CRAFT session called <a href="https://programs.sigchi.org/facct/2025/program/content/198967">The Hidden Costs of Digital Sovereignty</a>. Greece is trying to position itself as a central digital hub by building data centres and participating in the ‘fourth industrial revolution’ — but what does this actually mean for the people and infrastructure of Greece?</li><li><a href="https://fourcoffees.github.io/georgiapanagiotidou/">Georgia Panagiotidou</a> ran a session on <a href="https://programs.sigchi.org/facct/2025/program/content/198128">The Tools and Tactics for Supporting Agency in AI Environmental Action</a> — offering some ideas on how the community can get together and meaningfully resist extractive practices.</li><li><a href="https://davidwidder.me/">David Widder</a> discussed his workshop on <a href="https://programs.sigchi.org/facct/2025/program/content/198395">Silicon Valley and The Pentagon</a>, and his research on the recent history of the DoD funding academic papers — is it ever worth taking military money, even for basic research?</li><li><a href="https://weandai.org/team/tania-duarte/">Tania Duarte</a> offered something very different: a <a href="https://programs.sigchi.org/facct/2025/program/content/198134">demonstration of two workshops</a> she runs for marginalised groups, to better explain the true materiality of AI, and build knowledge that gives people more agency over the dominant narratives and framings in the industry.</li></ul><p><strong>Further reading &amp; resources:</strong></p><ul><li>Recording of Charis’s CRAFT session: <a href="https://www.youtube.com/watch?v=Xb2UhtBsr2w">The Hidden Cost of Digital Sovereignty</a></li><li><a href="https://enainstitute.org/en/publication/cloud-hiding-undersea-cables-data-centers-in-the-mediterranean-crossroads/">Cloud hiding undersea: Cables &amp; Data Centers in the Mediterranean crossroads</a> by Theodora Kostaka</li><li><a href="https://davidwidder.me/dod.pdf">Basic Research, Lethal Effects: Military AI Research Funding as Enlistment</a> and <a href="https://www.nature.com/articles/s41586-024-08141-1">Why ‘open’ AI systems are actually closed and why this matters</a> by David Widder</li><li><a href="https://www.youtube.com/watch?v=LFroWd0L5as&amp;t=2376s">The video</a> that David quoted the Carnegie Mellon professor from — David was paraphrasing in the episode!</li><li><a href="https://weandai.org/">We and AI</a> &amp; <a href="https://betterimagesofai.org/">Better Images of AI</a></li><li>More on <a href="https://fourcoffees.github.io/georgiapanagiotidou/">Georgia Panagiotidou’s</a> work and <a href="https://fourcoffees.github.io/environmentaltactics/">resources from her session</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </content:encoded>
      <pubDate>Fri, 18 Jul 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/1cedd59f/5e167811.mp3" length="92680608" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>3860</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Georgia, Soizic, and Hanna from The Maybe team just went to FAccT. Georgia and Soizic interviewed a bunch of amazing researchers, practitioners, and artists to give you a taste of what the conference was like if you didn’t get to go. Alix missed it too — you’ll learn along with her!</p><p>In part one we explore the depth of AI’s hidden material impacts, including its use in military applications and to aid genocide. One of our interviewees talked about why they spoke up at the town hall — questioning why FAccT, the biggest AI ethics conference there is, accepts sponsorship from those same military contractors.</p><p>Who we interviewed for Part One:</p><ul><li><a href="https://www.charispapaevangelou.eu/">Charis Papaevangelou</a>, who co-organised a CRAFT session called <a href="https://programs.sigchi.org/facct/2025/program/content/198967">The Hidden Costs of Digital Sovereignty</a>. Greece is trying to position itself as a central digital hub by building data centres and participating in the ‘fourth industrial revolution’ — but what does this actually mean for the people and infrastructure of Greece?</li><li><a href="https://fourcoffees.github.io/georgiapanagiotidou/">Georgia Panagiotidou</a> ran a session on <a href="https://programs.sigchi.org/facct/2025/program/content/198128">The Tools and Tactics for Supporting Agency in AI Environmental Action</a> — offering some ideas on how the community can get together and meaningfully resist extractive practices.</li><li><a href="https://davidwidder.me/">David Widder</a> discussed his workshop on <a href="https://programs.sigchi.org/facct/2025/program/content/198395">Silicon Valley and The Pentagon</a>, and his research on the recent history of the DoD funding academic papers — is it ever worth taking military money, even for basic research?</li><li><a href="https://weandai.org/team/tania-duarte/">Tania Duarte</a> offered something very different: a <a href="https://programs.sigchi.org/facct/2025/program/content/198134">demonstration of two workshops</a> she runs for marginalised groups, to better explain the true materiality of AI, and build knowledge that gives people more agency over the dominant narratives and framings in the industry.</li></ul><p><strong>Further reading &amp; resources:</strong></p><ul><li>Recording of Charis’s CRAFT session: <a href="https://www.youtube.com/watch?v=Xb2UhtBsr2w">The Hidden Cost of Digital Sovereignty</a></li><li><a href="https://enainstitute.org/en/publication/cloud-hiding-undersea-cables-data-centers-in-the-mediterranean-crossroads/">Cloud hiding undersea: Cables &amp; Data Centers in the Mediterranean crossroads</a> by Theodora Kostaka</li><li><a href="https://davidwidder.me/dod.pdf">Basic Research, Lethal Effects: Military AI Research Funding as Enlistment</a> and <a href="https://www.nature.com/articles/s41586-024-08141-1">Why ‘open’ AI systems are actually closed and why this matters</a> by David Widder</li><li><a href="https://www.youtube.com/watch?v=LFroWd0L5as&amp;t=2376s">The video</a> that David quoted the Carnegie Mellon professor from — David was paraphrasing in the episode!</li><li><a href="https://weandai.org/">We and AI</a> &amp; <a href="https://betterimagesofai.org/">Better Images of AI</a></li><li>More on <a href="https://fourcoffees.github.io/georgiapanagiotidou/">Georgia Panagiotidou’s</a> work and <a href="https://fourcoffees.github.io/environmentaltactics/">resources from her session</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/1cedd59f/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Making Myths to Make Money w/ AI Now</title>
      <itunes:episode>63</itunes:episode>
      <podcast:episode>63</podcast:episode>
      <itunes:title>Making Myths to Make Money w/ AI Now</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7425e36e-8510-4aed-b586-38d57395bab9</guid>
      <link>https://share.transistor.fm/s/3b5d8638</link>
      <description>
        <![CDATA[<p>AI Now have just released their 2025 AI Landscape report — <a href="https://ainowinstitute.org/wp-content/uploads/2025/06/FINAL-20250609_AINowLandscapeReport_Full.pdf">Artificial Power</a>. Alix sat down with two of its authors, Amba Kak and Sarah Myers-West, for a light unpacking of the themes within.</p><p>This report isn’t a boring survey of what AI Now have been doing this year; it’s a comprehensive view of the state of AI, and the concentrated powers that prop it up. What are the latest AI-shaped solutions that the hype guys are trying to convince us are real? And how can we reclaim a positive agenda for innovation — and unstick ourselves from a path towards pseudo-religious AGI?</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Read the AI Now 2025 Landscape Report: <a href="https://ainowinstitute.org/wp-content/uploads/2025/06/FINAL-20250609_AINowLandscapeReport_Full.pdf">Artificial Power</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em><a href="https://www.linkedin.com/in/amba-kak-48569b108/">Amba Kak</a> has spent the last fifteen years designing and advocating for technology policy in the public interest, across government, industry, and civil society roles – and in many parts of the world. Amba brings this experience to her current role co-directing AI Now, a New York-based research institute where she leads on advancing diagnosis and actionable policy to tackle concerns with artificial intelligence and concentrated power. She has served as Senior Advisor on AI to the Federal Trade Commission and was recognized as one of TIME’s 100 Most Influential People in AI in 2024.</em></p><p><em><a href="https://www.sarahmyerswest.com/">Sarah Myers-West</a> has spent the last fifteen years interrogating the role of technology companies and their emergence as powerful political actors on the front lines of international governance. Sarah brings this depth of expertise to policymaking in her current role co-directing AI Now, with a focus on addressing the market incentives and infrastructures that shape tech’s role in society at large and ensuring it serves the interests of the public. Her forthcoming book, Tracing Code (University of California Press), draws on years of historical and social science research to examine the origins of data capitalism and commercial surveillance.</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>AI Now have just released their 2025 AI Landscape report — <a href="https://ainowinstitute.org/wp-content/uploads/2025/06/FINAL-20250609_AINowLandscapeReport_Full.pdf">Artificial Power</a>. Alix sat down with two of its authors, Amba Kak and Sarah Myers-West, for a light unpacking of the themes within.</p><p>This report isn’t a boring survey of what AI Now have been doing this year; it’s a comprehensive view of the state of AI, and the concentrated powers that prop it up. What are the latest AI-shaped solutions that the hype guys are trying to convince us are real? And how can we reclaim a positive agenda for innovation — and unstick ourselves from a path towards pseudo-religious AGI?</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Read the AI Now 2025 Landscape Report: <a href="https://ainowinstitute.org/wp-content/uploads/2025/06/FINAL-20250609_AINowLandscapeReport_Full.pdf">Artificial Power</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em><a href="https://www.linkedin.com/in/amba-kak-48569b108/">Amba Kak</a> has spent the last fifteen years designing and advocating for technology policy in the public interest, across government, industry, and civil society roles – and in many parts of the world. Amba brings this experience to her current role co-directing AI Now, a New York-based research institute where she leads on advancing diagnosis and actionable policy to tackle concerns with artificial intelligence and concentrated power. She has served as Senior Advisor on AI to the Federal Trade Commission and was recognized as one of TIME’s 100 Most Influential People in AI in 2024.</em></p><p><em><a href="https://www.sarahmyerswest.com/">Sarah Myers-West</a> has spent the last fifteen years interrogating the role of technology companies and their emergence as powerful political actors on the front lines of international governance. Sarah brings this depth of expertise to policymaking in her current role co-directing AI Now, with a focus on addressing the market incentives and infrastructures that shape tech’s role in society at large and ensuring it serves the interests of the public. Her forthcoming book, Tracing Code (University of California Press), draws on years of historical and social science research to examine the origins of data capitalism and commercial surveillance.</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 11 Jul 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/3b5d8638/69f433b3.mp3" length="54970638" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2288</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>AI Now have just released their 2025 AI Landscape report — <a href="https://ainowinstitute.org/wp-content/uploads/2025/06/FINAL-20250609_AINowLandscapeReport_Full.pdf">Artificial Power</a>. Alix sat down with two of its authors, Amba Kak and Sarah Myers-West, for a light unpacking of the themes within.</p><p>This report isn’t a boring survey of what AI Now have been doing this year; it’s a comprehensive view of the state of AI, and the concentrated powers that prop it up. What are the latest AI-shaped solutions that the hype guys are trying to convince us are real? And how can we reclaim a positive agenda for innovation — and unstick ourselves from a path towards pseudo-religious AGI?</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Read the AI Now 2025 Landscape Report: <a href="https://ainowinstitute.org/wp-content/uploads/2025/06/FINAL-20250609_AINowLandscapeReport_Full.pdf">Artificial Power</a></li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em><a href="https://www.linkedin.com/in/amba-kak-48569b108/">Amba Kak</a> has spent the last fifteen years designing and advocating for technology policy in the public interest, across government, industry, and civil society roles – and in many parts of the world. Amba brings this experience to her current role co-directing AI Now, a New York-based research institute where she leads on advancing diagnosis and actionable policy to tackle concerns with artificial intelligence and concentrated power. She has served as Senior Advisor on AI to the Federal Trade Commission and was recognized as one of TIME’s 100 Most Influential People in AI in 2024.</em></p><p><em><a href="https://www.sarahmyerswest.com/">Sarah Myers-West</a> has spent the last fifteen years interrogating the role of technology companies and their emergence as powerful political actors on the front lines of international governance. Sarah brings this depth of expertise to policymaking in her current role co-directing AI Now, with a focus on addressing the market incentives and infrastructures that shape tech’s role in society at large and ensuring it serves the interests of the public. Her forthcoming book, Tracing Code (University of California Press), draws on years of historical and social science research to examine the origins of data capitalism and commercial surveillance.</em></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/3b5d8638/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Is Computer Science Made for Dudes? w/ Felienne Hermans</title>
      <itunes:episode>62</itunes:episode>
      <podcast:episode>62</podcast:episode>
      <itunes:title>Is Computer Science Made for Dudes? w/ Felienne Hermans</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">65866e02-1acb-4d1d-97d5-f09abda69e41</guid>
      <link>https://share.transistor.fm/s/8b386c4e</link>
      <description>
        <![CDATA[<p>Felienne Hermans calls herself an ‘involuntary ethnographer of computer science’. She studies the culture behind programming, and challenges the dominant idea that learning to program has to be painful. Alix and Felienne chat about the history of programming and how it went from multidisciplinary and inclusive, to masochistic and exclusive. They also dig into all the ways it excludes women and people who do not speak English.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://scratch.mit.edu/">Scratch</a> — a high level programming language aimed at kids</li><li><a href="https://www.hedy.org/">Hedy</a> — the programming language that Felienne designed</li><li><a href="https://www.hedy.org/join">Join in and help out with Hedy!</a></li><li><a href="https://gendermag.org/corporatetraining/">GenderMag</a> by Margaret Burnett — how to ensure more gender inclusiveness in your software</li><li><a href="https://elm-lang.org/">Elm</a> — an easy and kind browser-based programming language</li><li><a href="https://dl.acm.org/doi/10.1145/3689492.3689809">A Case for Feminism in Programming Language Design</a> by Felienne Hermans &amp; Ari Schlesinger</li><li><a href="https://hedy.org/research/A_Framework_for_the_Localization_of_Programming_Languages_2023.pdf">A Framework for the Localization of Programming Languages</a> by Felienne Hermans &amp; Alaaeddin Swidan</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Felienne is the creator of the </em><a href="http://www.hedy.org/"><em>Hedy programming language</em></a><em>, a gradual and multi-lingual programming language designed for teaching. 
She is the author of “</em><a href="https://www.felienne.com/book"><em>The Programmer’s Brain</em></a><em>”, a book that helps programmers understand how their brain works and how to use it more effectively. In 2021, Felienne was awarded the </em><a href="https://khmw.nl/nederlandse_prijs_ict-onderzoek/"><em>Dutch Prize for ICT research.</em></a><em> She also has a weekly column on BNR, a Dutch radio station.</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Felienne Hermans calls herself an ‘involuntary ethnographer of computer science’. She studies the culture behind programming, and challenges the dominant idea that learning to program has to be painful. Alix and Felienne chat about the history of programming and how it went from multidisciplinary and inclusive, to masochistic and exclusive. They also dig into all the ways it excludes women and people who do not speak English.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://scratch.mit.edu/">Scratch</a> — a high level programming language aimed at kids</li><li><a href="https://www.hedy.org/">Hedy</a> — the programming language that Felienne designed</li><li><a href="https://www.hedy.org/join">Join in and help out with Hedy!</a></li><li><a href="https://gendermag.org/corporatetraining/">GenderMag</a> by Margaret Burnett — how to ensure more gender inclusiveness in your software</li><li><a href="https://elm-lang.org/">Elm</a> — an easy and kind browser-based programming language</li><li><a href="https://dl.acm.org/doi/10.1145/3689492.3689809">A Case for Feminism in Programming Language Design</a> by Felienne Hermans &amp; Ari Schlesinger</li><li><a href="https://hedy.org/research/A_Framework_for_the_Localization_of_Programming_Languages_2023.pdf">A Framework for the Localization of Programming Languages</a> by Felienne Hermans &amp; Alaaeddin Swidan</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Felienne is the creator of the </em><a href="http://www.hedy.org/"><em>Hedy programming language</em></a><em>, a gradual and multi-lingual programming language designed for teaching. 
She is the author of “</em><a href="https://www.felienne.com/book"><em>The Programmer’s Brain</em></a><em>”, a book that helps programmers understand how their brain works and how to use it more effectively. In 2021, Felienne was awarded the </em><a href="https://khmw.nl/nederlandse_prijs_ict-onderzoek/"><em>Dutch Prize for ICT research.</em></a><em> She also has a weekly column on BNR, a Dutch radio station.</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 04 Jul 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/8b386c4e/197703aa.mp3" length="78806403" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>3281</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Felienne Hermans calls herself an ‘involuntary ethnographer of computer science’. She studies the culture behind programming, and challenges the dominant idea that learning to program has to be painful. Alix and Felienne chat about the history of programming and how it went from multidisciplinary and inclusive, to masochistic and exclusive. They also dig into all the ways it excludes women and people who do not speak English.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://scratch.mit.edu/">Scratch</a> — a high level programming language aimed at kids</li><li><a href="https://www.hedy.org/">Hedy</a> — the programming language that Felienne designed</li><li><a href="https://www.hedy.org/join">Join in and help out with Hedy!</a></li><li><a href="https://gendermag.org/corporatetraining/">GenderMag</a> by Margaret Burnett — how to ensure more gender inclusiveness in your software</li><li><a href="https://elm-lang.org/">Elm</a> — an easy and kind browser-based programming language</li><li><a href="https://dl.acm.org/doi/10.1145/3689492.3689809">A Case for Feminism in Programming Language Design</a> by Felienne Hermans &amp; Ari Schlesinger</li><li><a href="https://hedy.org/research/A_Framework_for_the_Localization_of_Programming_Languages_2023.pdf">A Framework for the Localization of Programming Languages</a> by Felienne Hermans &amp; Alaaeddin Swidan</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Felienne is the creator of the </em><a href="http://www.hedy.org/"><em>Hedy programming language</em></a><em>, a gradual and multi-lingual programming language designed for teaching. 
She is the author of “</em><a href="https://www.felienne.com/book"><em>The Programmer’s Brain</em></a><em>”, a book that helps programmers understand how their brain works and how to use it more effectively. In 2021, Felienne was awarded the </em><a href="https://khmw.nl/nederlandse_prijs_ict-onderzoek/"><em>Dutch Prize for ICT research.</em></a><em> She also has a weekly column on BNR, a Dutch radio station.</em></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/8b386c4e/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>The Elephant in the Algorithm: Live from ZEG Fest in Tbilisi</title>
      <itunes:episode>61</itunes:episode>
      <podcast:episode>61</podcast:episode>
      <itunes:title>The Elephant in the Algorithm: Live from ZEG Fest in Tbilisi</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">287692ef-22aa-42aa-917f-5a9e680c6e4c</guid>
      <link>https://share.transistor.fm/s/4cd5a594</link>
      <description>
        <![CDATA[<p>Smart people focused on technology politics get it. We trade helpful, high-level concepts like surveillance capitalism, automated inequality, and enshittification. And even as some of these ideas are making it into the mainstream, normies aren’t getting the message. We need stories for that. But how? How do we take the technical jargon and high-level concepts that dominate tech narratives and instead create stories that are personal, relatable, and powerful?</p><p>And how do we combat the amazing hero-god narratives of Silicon Valley without reinforcing them?</p><p>Alix went to storytelling festival ZEG Fest in Tbilisi to chat with three amazing storytellers about that challenge:</p><ul><li><strong>Armando Iannucci, creator of <em>Veep</em> and <em>The Thick of It</em>:</strong> discusses how to use humour and satire to keep things simple — and how stories are not ‘made up’, but rather a way to relay a series of facts and concepts that are complex and difficult to process.</li><li><strong>Chris Wylie, Cambridge Analytica whistleblower</strong>: on how the promise of superintelligence and transhumanism is basically like a religious prophecy. His new show <a href="https://www.audible.co.uk/pd/Captured-Audiobook/B0DZJ5GNT1?eac_link=rfR5PUwfRRgV&amp;ref=web_search_eac_asin_1&amp;eac_selected_type=asin&amp;eac_selected=B0DZJ5GNT1&amp;qid=pDbT6OaEW0&amp;eac_id=259-6202472-3675342_pDbT6OaEW0&amp;sr=1-2">Captured</a> explores the stories that tech elites are telling us about our utopian AI future.</li><li><strong>Adam Pincus, producer of <em>The Laundromat</em> and <em>Leave No Trace</em>:</strong> shares his frustrations with the perceived inevitability of AI in his day-to-day, and tells us more about his podcast series ‘<a href="https://www.audible.co.uk/pd/What-Could-Go-Wrong-Audiobook/B0F4GX4XF7?qid=1750160973&amp;sr=1-1&amp;ref_pageloadid=not_applicable&amp;pf_rd_p=c6e316b8-14da-418d-8f91-b3cad83c5183&amp;pf_rd_r=3Q0NZW282CD9TBF2Q5ND&amp;plink=qvNIoNj4qR7VMEDv&amp;pageLoadId=AI4PB3bEqvdd2xGt&amp;creativeId=41e85e98-10b8-40e2-907d-6b663f04a42d&amp;ref=a_search_c3_lProduct_1_1">What Could Go Wrong?</a>’, in which he explores writing a <em>Contagion</em> sequel with screenwriter Scott Z. Burns.</li></ul><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.audible.co.uk/pd/Captured-Audiobook/B0DZJ5GNT1?eac_link=rfR5PUwfRRgV&amp;ref=web_search_eac_asin_1&amp;eac_selected_type=asin&amp;eac_selected=B0DZJ5GNT1&amp;qid=pDbT6OaEW0&amp;eac_id=259-6202472-3675342_pDbT6OaEW0&amp;sr=1-2"><strong>Captured: The Secret Behind Silicon Valley’s AI Takeover</strong></a> — limited podcast series featuring Chris Wylie</li><li><a href="https://variety.com/2025/digital/news/contagion-sequel-scott-z-burns-audible-what-could-go-wrong-1236369638/"><strong>‘Contagion’ Screenwriter Scott Z. Burns Asks AI to Write a Sequel to Pandemic Film in Audible Original Series ‘What Could Go Wrong?’</strong></a> — Variety article</li><li><a href="https://www.audible.co.uk/pd/What-Could-Go-Wrong-Audiobook/B0F4GX4XF7?eac_link=jrPGHtJlrjwQ&amp;ref=web_search_eac_asin_1&amp;eac_selected_type=asin&amp;eac_selected=B0F4GX4XF7&amp;qid=0u12KemBA3&amp;eac_id=259-6202472-3675342_0u12KemBA3&amp;sr=1-1"><strong>What Could Go Wrong?</strong></a> — limited podcast series by Scott Burns</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Smart people focused on technology politics get it. We trade helpful, high-level concepts like surveillance capitalism, automated inequality, and enshittification. And even as some of these ideas are making it into the mainstream, normies aren’t getting the message. We need stories for that. But how? How do we take the technical jargon and high-level concepts that dominate tech narratives and instead create stories that are personal, relatable, and powerful?</p><p>And how do we combat the amazing hero-god narratives of Silicon Valley without reinforcing them?</p><p>Alix went to storytelling festival ZEG Fest in Tbilisi to chat with three amazing storytellers about that challenge:</p><ul><li><strong>Armando Iannucci, creator of <em>Veep</em> and <em>The Thick of It</em>:</strong> discusses how to use humour and satire to keep things simple — and how stories are not ‘made up’, but rather a way to relay a series of facts and concepts that are complex and difficult to process.</li><li><strong>Chris Wylie, Cambridge Analytica whistleblower</strong>: on how the promise of superintelligence and transhumanism is basically like a religious prophecy. His new show <a href="https://www.audible.co.uk/pd/Captured-Audiobook/B0DZJ5GNT1?eac_link=rfR5PUwfRRgV&amp;ref=web_search_eac_asin_1&amp;eac_selected_type=asin&amp;eac_selected=B0DZJ5GNT1&amp;qid=pDbT6OaEW0&amp;eac_id=259-6202472-3675342_pDbT6OaEW0&amp;sr=1-2">Captured</a> explores the stories that tech elites are telling us about our utopian AI future.</li><li><strong>Adam Pincus, producer of <em>The Laundromat</em> and <em>Leave No Trace</em>:</strong> shares his frustrations with the perceived inevitability of AI in his day-to-day, and tells us more about his podcast series ‘<a href="https://www.audible.co.uk/pd/What-Could-Go-Wrong-Audiobook/B0F4GX4XF7?qid=1750160973&amp;sr=1-1&amp;ref_pageloadid=not_applicable&amp;pf_rd_p=c6e316b8-14da-418d-8f91-b3cad83c5183&amp;pf_rd_r=3Q0NZW282CD9TBF2Q5ND&amp;plink=qvNIoNj4qR7VMEDv&amp;pageLoadId=AI4PB3bEqvdd2xGt&amp;creativeId=41e85e98-10b8-40e2-907d-6b663f04a42d&amp;ref=a_search_c3_lProduct_1_1">What Could Go Wrong?</a>’, in which he explores writing a <em>Contagion</em> sequel with screenwriter Scott Z. Burns.</li></ul><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.audible.co.uk/pd/Captured-Audiobook/B0DZJ5GNT1?eac_link=rfR5PUwfRRgV&amp;ref=web_search_eac_asin_1&amp;eac_selected_type=asin&amp;eac_selected=B0DZJ5GNT1&amp;qid=pDbT6OaEW0&amp;eac_id=259-6202472-3675342_pDbT6OaEW0&amp;sr=1-2"><strong>Captured: The Secret Behind Silicon Valley’s AI Takeover</strong></a> — limited podcast series featuring Chris Wylie</li><li><a href="https://variety.com/2025/digital/news/contagion-sequel-scott-z-burns-audible-what-could-go-wrong-1236369638/"><strong>‘Contagion’ Screenwriter Scott Z. Burns Asks AI to Write a Sequel to Pandemic Film in Audible Original Series ‘What Could Go Wrong?’</strong></a> — Variety article</li><li><a href="https://www.audible.co.uk/pd/What-Could-Go-Wrong-Audiobook/B0F4GX4XF7?eac_link=jrPGHtJlrjwQ&amp;ref=web_search_eac_asin_1&amp;eac_selected_type=asin&amp;eac_selected=B0F4GX4XF7&amp;qid=0u12KemBA3&amp;eac_id=259-6202472-3675342_0u12KemBA3&amp;sr=1-1"><strong>What Could Go Wrong?</strong></a> — limited podcast series by Scott Burns</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </content:encoded>
      <pubDate>Fri, 27 Jun 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/4cd5a594/124ac5f1.mp3" length="66650413" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2775</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Smart people focused on technology politics get it. We trade helpful, high-level concepts like surveillance capitalism, automated inequality, and enshittification. And even as some of these ideas are making it into the mainstream, normies aren’t getting the message. We need stories for that. But how? How do we take the technical jargon and high-level concepts that dominate tech narratives and instead create stories that are personal, relatable, and powerful?</p><p>And how do we combat the amazing hero-god narratives of Silicon Valley without reinforcing them?</p><p>Alix went to storytelling festival ZEG Fest in Tbilisi to chat with three amazing storytellers about that challenge:</p><ul><li><strong>Armando Iannucci, creator of <em>Veep</em> and <em>The Thick of It</em>:</strong> discusses how to use humour and satire to keep things simple — and how stories are not ‘made up’, but rather a way to relay a series of facts and concepts that are complex and difficult to process.</li><li><strong>Chris Wylie, Cambridge Analytica whistleblower</strong>: on how the promise of superintelligence and transhumanism is basically like a religious prophecy. His new show <a href="https://www.audible.co.uk/pd/Captured-Audiobook/B0DZJ5GNT1?eac_link=rfR5PUwfRRgV&amp;ref=web_search_eac_asin_1&amp;eac_selected_type=asin&amp;eac_selected=B0DZJ5GNT1&amp;qid=pDbT6OaEW0&amp;eac_id=259-6202472-3675342_pDbT6OaEW0&amp;sr=1-2">Captured</a> explores the stories that tech elites are telling us about our utopian AI future.</li><li><strong>Adam Pincus, producer of <em>The Laundromat</em> and <em>Leave No Trace</em>:</strong> shares his frustrations with the perceived inevitability of AI in his day-to-day, and tells us more about his podcast series ‘<a href="https://www.audible.co.uk/pd/What-Could-Go-Wrong-Audiobook/B0F4GX4XF7?qid=1750160973&amp;sr=1-1&amp;ref_pageloadid=not_applicable&amp;pf_rd_p=c6e316b8-14da-418d-8f91-b3cad83c5183&amp;pf_rd_r=3Q0NZW282CD9TBF2Q5ND&amp;plink=qvNIoNj4qR7VMEDv&amp;pageLoadId=AI4PB3bEqvdd2xGt&amp;creativeId=41e85e98-10b8-40e2-907d-6b663f04a42d&amp;ref=a_search_c3_lProduct_1_1">What Could Go Wrong?</a>’, in which he explores writing a <em>Contagion</em> sequel with screenwriter Scott Z. Burns.</li></ul><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.audible.co.uk/pd/Captured-Audiobook/B0DZJ5GNT1?eac_link=rfR5PUwfRRgV&amp;ref=web_search_eac_asin_1&amp;eac_selected_type=asin&amp;eac_selected=B0DZJ5GNT1&amp;qid=pDbT6OaEW0&amp;eac_id=259-6202472-3675342_pDbT6OaEW0&amp;sr=1-2"><strong>Captured: The Secret Behind Silicon Valley’s AI Takeover</strong></a> — limited podcast series featuring Chris Wylie</li><li><a href="https://variety.com/2025/digital/news/contagion-sequel-scott-z-burns-audible-what-could-go-wrong-1236369638/"><strong>‘Contagion’ Screenwriter Scott Z. Burns Asks AI to Write a Sequel to Pandemic Film in Audible Original Series ‘What Could Go Wrong?’</strong></a> — Variety article</li><li><a href="https://www.audible.co.uk/pd/What-Could-Go-Wrong-Audiobook/B0F4GX4XF7?eac_link=jrPGHtJlrjwQ&amp;ref=web_search_eac_asin_1&amp;eac_selected_type=asin&amp;eac_selected=B0F4GX4XF7&amp;qid=0u12KemBA3&amp;eac_id=259-6202472-3675342_0u12KemBA3&amp;sr=1-1"><strong>What Could Go Wrong?</strong></a> — limited podcast series by Scott Burns</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/4cd5a594/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Is Digitisation Killing Democracy? w/ Marietje Schaake</title>
      <itunes:episode>60</itunes:episode>
      <podcast:episode>60</podcast:episode>
      <itunes:title>Is Digitisation Killing Democracy? w/ Marietje Schaake</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b339fbfd-beb8-48fe-a58d-0cb54149061a</guid>
      <link>https://share.transistor.fm/s/2b382ec7</link>
      <description>
        <![CDATA[<p>There has been an intentional and systematic narrative push that tells governments they are not good enough to provide their own public infrastructure or regulate the tech companies that provide it for them.</p><p>Shocking: these narratives stem from large tech companies, and this represents what Marietje Schaake refers to as a Tech Coup — which is the title of her book (which you should buy!).</p><p>The Tech Coup refers to the inability of democratic policymakers to provide oversight, regulation, and even visibility into the structural systems that big tech is building, managing, and selling. Marietje and Alix discuss what happens when you have a system of states whose knowledge and confidence have been gutted over decades — hindering them from providing good services, and from understanding how to meaningfully regulate the tech space.</p><p><strong>Further Reading &amp; Resources:</strong></p><ul><li>Buy <a href="https://thetechcoup.com/">The Tech Coup</a> by Marietje Schaake</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Marietje Schaake is a non-resident Fellow at Stanford’s Cyber Policy Center and at the Institute for Human-Centered AI. She is a columnist for the Financial Times and serves on a number of not-for-profit boards as well as the UN's High-Level Advisory Body on AI. From 2009 to 2019 she served as a Member of the European Parliament, where she worked on trade, foreign, and tech policy. She is the author of </em><a href="https://thetechcoup.com/"><em>The Tech Coup</em></a><em>.</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>There has been an intentional and systematic narrative push that tells governments they are not good enough to provide their own public infrastructure or regulate the tech companies that provide it for them.</p><p>Shocking: these narratives stem from large tech companies, and this represents what Marietje Schaake refers to as a Tech Coup — which is the title of her book (which you should buy!).</p><p>The Tech Coup refers to the inability of democratic policymakers to provide oversight, regulation, and even visibility into the structural systems that big tech is building, managing, and selling. Marietje and Alix discuss what happens when you have a system of states whose knowledge and confidence have been gutted over decades — hindering them from providing good services, and from understanding how to meaningfully regulate the tech space.</p><p><strong>Further Reading &amp; Resources:</strong></p><ul><li>Buy <a href="https://thetechcoup.com/">The Tech Coup</a> by Marietje Schaake</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Marietje Schaake is a non-resident Fellow at Stanford’s Cyber Policy Center and at the Institute for Human-Centered AI. She is a columnist for the Financial Times and serves on a number of not-for-profit boards as well as the UN's High-Level Advisory Body on AI. From 2009 to 2019 she served as a Member of the European Parliament, where she worked on trade, foreign, and tech policy. She is the author of </em><a href="https://thetechcoup.com/"><em>The Tech Coup</em></a><em>.</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 20 Jun 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/2b382ec7/471d8ad0.mp3" length="54667479" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2276</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>There has been an intentional and systematic narrative push that tells governments they are not good enough to provide their own public infrastructure or regulate the tech companies that provide it for them.</p><p>Shocking: these narratives stem from large tech companies, and this represents what Marietje Schaake refers to as a Tech Coup — which is the title of her book (which you should buy!).</p><p>The Tech Coup refers to the inability of democratic policymakers to provide oversight, regulation, and even visibility into the structural systems that big tech is building, managing, and selling. Marietje and Alix discuss what happens when you have a system of states whose knowledge and confidence have been gutted over decades — hindering them from providing good services, and from understanding how to meaningfully regulate the tech space.</p><p><strong>Further Reading &amp; Resources:</strong></p><ul><li>Buy <a href="https://thetechcoup.com/">The Tech Coup</a> by Marietje Schaake</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Marietje Schaake is a non-resident Fellow at Stanford’s Cyber Policy Center and at the Institute for Human-Centered AI. She is a columnist for the Financial Times and serves on a number of not-for-profit boards as well as the UN's High-Level Advisory Body on AI. From 2009 to 2019 she served as a Member of the European Parliament, where she worked on trade, foreign, and tech policy. She is the author of </em><a href="https://thetechcoup.com/"><em>The Tech Coup</em></a><em>.</em></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/2b382ec7/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>AI in Gaza: Live from Mexico City</title>
      <itunes:episode>59</itunes:episode>
      <podcast:episode>59</podcast:episode>
      <itunes:title>AI in Gaza: Live from Mexico City</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c8ff4980-146b-4e83-90e3-0946d06c3a94</guid>
      <link>https://share.transistor.fm/s/5d16ca9c</link>
      <description>
        <![CDATA[<p><strong><em>This episode contains descriptions of torture methods, automated human targeting by machines, and psychological warfare throughout<br></em></strong><br></p><p>Last week Alix hosted a live show in Mexico City right after <a href="https://realml.org/">REAL ML</a>. Four panellists discussed a hugely important topic that other conferences have wrongly deemed taboo: the use of AI and other technologies to support the ongoing genocide in Palestine.</p><p>Here’s a preview of what the four speakers shared:</p><ul><li><strong>Karen Palacio AKA kardaver</strong> gave us an overview of Operation Condor — a program of psychological warfare that ran in the late 20th century in South America to suppress activist voices.</li><li><strong>Marwa Fatafta</strong> explains how these methods are still used today against Palestinians; coordinated surveillance projects make Palestinian citizens feel they are living in a panopticon, and the granular data storage and processing is facilitated by AWS, Google, and Azure.</li><li><strong>Matt Mahmoudi</strong> goes on to describe how these surveillance projects have crystallised into sophisticated CCTV and facial recognition networks through which Palestinians are continuously dehumanised via face-scanning and arbitrary checks that restrict their movements.</li><li><strong>Wanda Muñoz</strong> discusses how fully autonomous weapons obviously violate human rights in all kinds of ways — yet ‘AI ethics’ frameworks never account for machines that make life-or-death decisions.</li></ul><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://files.libcom.org/files/keith-breckenridge-biometric-state-the-global-politics-of-identification-and-surveillance-in-south-africa-1850-to-the-present.pdf">The Biometric State by Keith Breckenridge</a> — where the phrase ‘automated apartheid’ was conceived</li><li><a href="https://karen-pal.github.io/COGWAR_REPORT/realml.html">COGWAR Report</a> by Karen Palacio, AKA kardaver</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><strong><em>Wanda Muñoz</em></strong><em> is an international consultant with twenty years of experience in the design, implementation and evaluation of programs and policies on human rights, gender equality, inclusion and the rights of people with disabilities. Wanda has worked for international NGOs and UN organizations in Asia, Africa, Europe and Latin America. She became involved in the field of artificial intelligence in 2017, initially through the analysis of its intersection with International Humanitarian Law on the issue of autonomous weapons systems, and later focusing on the intersection between human rights and AI. In 2020, she was nominated by the Ministry of Foreign Affairs of Mexico as an independent expert at the Global Partnership on Artificial Intelligence (GPAI), where she contributed to various publications and panels, and led the design of the research “Towards true gender equality and diversity in AI” that is currently being implemented. In 2020, Wanda Muñoz was recognized by the Nobel Women's Initiative as "a peacebuilder working for peace, justice and equality" and by UNLIREC as one of Latin America's "forces of change, working for humanitarian disarmament, non-proliferation and arms control." Wanda also recently won the </em><a href="https://www.linkedin.com/posts/womeninai_womeninai-deichampion-wai2025-activity-7336860776034377729-E-E1?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAAvnKXYBjUxg9zyMxrqJbx1Jk2KrRg03L3w"><em>DEI Champion of the Year</em></a><em> Award from </em><a href="https://www.womeninai.co/"><em>Women in AI</em></a><em>.<br></em><br></p><p><strong><em>Karen Palacio, aka kardaver</em></strong><em>, is an interdisciplinary digital artist, industrial programmer specialized in AI, and data scientist from Córdoba, Argentina. She researches and creates through iterative loops of implementation and reflection, aiming to understand what it means to articulate artistic-technological discourses from the Global South. Her performances, installations, and audiovisual works engage critically and rootedly with the depths of computation, the histories of computing and archives, freedom of knowledge, feminisms, and the pursuit of technological sovereignty. She develops and works with Free Software in her processes, resemanticizing technologies she knows from her background as an industrial programmer.<br></em><br></p><p><strong><em>Dr Matt Mahmoudi</em></strong><em> is Assistant Professor in Digital Humanities at the University of Cambridge, and a Researcher/Advisor on Artificial Intelligence and Human Rights at Amnesty International. Matt’s work has looked at AI-driven surveillance from the NYPD’s surveillance machine to Automated Apartheid in the occupied Palestinian territory. Matt is the author of Migrants in the Digital Periphery: New Urban Frontiers of Controls (University of California Press, February 2025), and co-editor of Resisting Borders &amp; Technologies of Violence (Haymarket, 2024) together with Mizue Aizeki and Coline Schupfer.<br></em><br></p><p><strong><em>Marwa Fatafta</em></strong><em> leads Access Now’s policy and advocacy work on digital rights in the Middle East and North Africa (MENA) region. Her work spans a number of issues at the nexus of human rights and technology, including content governance and platform accountability, online censorship, digital surveillance, and transnational repression. She has written extensively on the digital occupation in Palestine and focuses on the role of new technologies in armed conflicts and humanitarian contexts and their impact on historically marginalized and oppressed communities. Marwa is a Policy Analyst at Al-Shabaka: The Palestinian Policy Network, an advisory board member of the Tahrir Institute for Middle East Policy, and an advisory committee member for Bread&amp;Net. Marwa was a Fulbright scholar in the US and holds an MA in International Relations from the Maxwell School of Citizenship and Public Affairs, Syracuse University, and a second MA in Development and Governance from the University of Duisburg-Essen.</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><strong><em>This episode contains some descriptions of torture methods, automated human targeting by machines, and psychological warfare throughout<br></em></strong><br></p><p>Last week Alix hosted a live show in Mexico City right after <a href="https://realml.org/">REAL ML</a>. Four panellists discussed a hugely important topic, one wrongly deemed taboo by other conferences: the use of AI and other technologies to support the ongoing genocide in Palestine.</p><p>Here’s a preview of what the four speakers shared:</p><ul><li><strong>Karen Palacio AKA kardaver</strong> gives us an overview of Operation Condor — a program of psychological warfare that ran in the late 20th century in South America to suppress activist voices.</li><li><strong>Marwa Fatafta</strong> explains how these methods are still used today against Palestinians; there are coordinated surveillance projects that make Palestinian citizens feel they are living in a panopticon, and the granular data storage and processing is facilitated by AWS, Google, and Azure.</li><li><strong>Matt Mahmoudi</strong> goes on to describe how these surveillance projects have crystallised into sophisticated CCTV and facial recognition networks through which Palestinians are continuously dehumanised via face-scanning and arbitrary checks that restrict movements.</li><li><strong>Wanda Muñoz</strong> discusses how fully autonomous weapons obviously violate human rights in all kinds of ways — but ‘AI ethics’ frameworks never account for machines that make life-or-death decisions.</li></ul><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://files.libcom.org/files/keith-breckenridge-biometric-state-the-global-politics-of-identification-and-surveillance-in-south-africa-1850-to-the-present.pdf">The Biometric State by Keith Breckenridge</a> — where the phrase ‘automated apartheid’ was conceived</li><li><a 
href="https://karen-pal.github.io/COGWAR_REPORT/realml.html">COGWAR Report</a> by Karen Palacio, AKA Kardaver</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><strong><em>Wanda Muñoz</em></strong><em> is an international consultant with twenty years of experience in the design, implementation and evaluation of programs and policies on human rights, gender equality, inclusion and the rights of people with disabilities. Wanda has worked for international NGOs and UN organizations in Asia, Africa, Europe and Latin America. She became involved in the field of artificial intelligence in 2017, initially through the analysis of its intersection with International Humanitarian Law on the issue of autonomous weapons systems; and later focusing on the intersection between human rights and AI. In 2020, she was nominated by the Ministry of Foreign Affairs of Mexico as an independent expert at the Global Partnership on Artificial Intelligence (GPAI), where she contributed to various publications and panels, and led the design of the research “Towards true gender equality and diversity in AI” that is currently being implemented. In 2020, Wanda Muñoz was recognized by the Nobel Women's Initiative as "a peacebuilder working for peace, justice and equality" and by UNLIREC as one of Latin America's "forces of change, working for humanitarian disarmament, non-proliferation and arms control." 
Wanda also recently won </em><a href="https://www.linkedin.com/posts/womeninai_womeninai-deichampion-wai2025-activity-7336860776034377729-E-E1?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAAvnKXYBjUxg9zyMxrqJbx1Jk2KrRg03L3w"><em>DEI Champion of the Year</em></a><em> Award from </em><a href="https://www.womeninai.co/"><em>Women in AI</em></a><em>.<br></em><br></p><p><strong><em>Karen Palacio, aka kardaver</em></strong><em>, is an interdisciplinary digital artist, industrial programmer specialized in AI, and data scientist from Córdoba, Argentina. She researches and creates through iterative loops of implementation and reflection, aiming to understand what it means to articulate artistic-technological discourses from the Global South. Her performances, installations, and audiovisual works engage critically and rootedly with the depths of computation, the histories of computing and archives, freedom of knowledge, feminisms, and the pursuit of technological sovereignty. She develops and works with Free Software in her processes, resemanticizing technologies she knows from her background as an industrial programmer.<br></em><br></p><p><strong><em>Dr Matt Mahmoudi</em></strong><em> is Assistant Professor in Digital Humanities at the University of Cambridge, and a Researcher/Advisor on Artificial Intelligence and Human Rights at Amnesty International. Matt’s work has looked at AI-driven surveillance from the NYPD’s surveillance machine to Automated Apartheid in the occupied Palestinian territory. Matt is author of Migrants in the Digital Periphery: New Urban Frontiers of Controls (University of California Press, February 2025), and co-editor of Resisting Borders &amp; Technologies of Violence (Haymarket, 2024) together with Mizue Aizeki and Coline Schupfer.<br></em><br></p><p><strong><em>Marwa Fatafta</em></strong><em> leads Access Now’s policy and advocacy work on digital rights in the Middle East and North Africa (MENA) region. 
Her work spans a number of issues at the nexus of human rights and technology including content governance and platform accountability, online censorship, digital surveillance, and transnational repression. She has written extensively on the digital occupation in Palestine and focuses on the role of new technologies in armed conflicts and humanitarian contexts and their impact on historically marginalized and oppressed communities. Marwa is a Policy Analyst at Al-Shabaka: The Palestinian Policy Network, an advisory board member of the Tahrir Institute for Middle East Policy, and an advisory committee member for Bread&amp;Net. Marwa was a Fulbright scholar in the US and holds an MA in International Relations from Maxwell School of Citizenship and Public Affairs, Syracuse University. She holds a second MA in Development and Governance from the University of Duisburg-Essen.</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 13 Jun 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/5d16ca9c/419332f4.mp3" length="86494411" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>3602</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><strong><em>This episode contains some descriptions of torture methods, automated human targeting by machines, and psychological warfare throughout<br></em></strong><br></p><p>Last week Alix hosted a live show in Mexico City right after <a href="https://realml.org/">REAL ML</a>. Four panellists discussed a hugely important topic, one wrongly deemed taboo by other conferences: the use of AI and other technologies to support the ongoing genocide in Palestine.</p><p>Here’s a preview of what the four speakers shared:</p><ul><li><strong>Karen Palacio AKA kardaver</strong> gives us an overview of Operation Condor — a program of psychological warfare that ran in the late 20th century in South America to suppress activist voices.</li><li><strong>Marwa Fatafta</strong> explains how these methods are still used today against Palestinians; there are coordinated surveillance projects that make Palestinian citizens feel they are living in a panopticon, and the granular data storage and processing is facilitated by AWS, Google, and Azure.</li><li><strong>Matt Mahmoudi</strong> goes on to describe how these surveillance projects have crystallised into sophisticated CCTV and facial recognition networks through which Palestinians are continuously dehumanised via face-scanning and arbitrary checks that restrict movements.</li><li><strong>Wanda Muñoz</strong> discusses how fully autonomous weapons obviously violate human rights in all kinds of ways — but ‘AI ethics’ frameworks never account for machines that make life-or-death decisions.</li></ul><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://files.libcom.org/files/keith-breckenridge-biometric-state-the-global-politics-of-identification-and-surveillance-in-south-africa-1850-to-the-present.pdf">The Biometric State by Keith Breckenridge</a> — where the phrase ‘automated apartheid’ was conceived</li><li><a 
href="https://karen-pal.github.io/COGWAR_REPORT/realml.html">COGWAR Report</a> by Karen Palacio, AKA Kardaver</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><strong><em>Wanda Muñoz</em></strong><em> is an international consultant with twenty years of experience in the design, implementation and evaluation of programs and policies on human rights, gender equality, inclusion and the rights of people with disabilities. Wanda has worked for international NGOs and UN organizations in Asia, Africa, Europe and Latin America. She became involved in the field of artificial intelligence in 2017, initially through the analysis of its intersection with International Humanitarian Law on the issue of autonomous weapons systems; and later focusing on the intersection between human rights and AI. In 2020, she was nominated by the Ministry of Foreign Affairs of Mexico as an independent expert at the Global Partnership on Artificial Intelligence (GPAI), where she contributed to various publications and panels, and led the design of the research “Towards true gender equality and diversity in AI” that is currently being implemented. In 2020, Wanda Muñoz was recognized by the Nobel Women's Initiative as "a peacebuilder working for peace, justice and equality" and by UNLIREC as one of Latin America's "forces of change, working for humanitarian disarmament, non-proliferation and arms control." 
Wanda also recently won </em><a href="https://www.linkedin.com/posts/womeninai_womeninai-deichampion-wai2025-activity-7336860776034377729-E-E1?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAAvnKXYBjUxg9zyMxrqJbx1Jk2KrRg03L3w"><em>DEI Champion of the Year</em></a><em> Award from </em><a href="https://www.womeninai.co/"><em>Women in AI</em></a><em>.<br></em><br></p><p><strong><em>Karen Palacio, aka kardaver</em></strong><em>, is an interdisciplinary digital artist, industrial programmer specialized in AI, and data scientist from Córdoba, Argentina. She researches and creates through iterative loops of implementation and reflection, aiming to understand what it means to articulate artistic-technological discourses from the Global South. Her performances, installations, and audiovisual works engage critically and rootedly with the depths of computation, the histories of computing and archives, freedom of knowledge, feminisms, and the pursuit of technological sovereignty. She develops and works with Free Software in her processes, resemanticizing technologies she knows from her background as an industrial programmer.<br></em><br></p><p><strong><em>Dr Matt Mahmoudi</em></strong><em> is Assistant Professor in Digital Humanities at the University of Cambridge, and a Researcher/Advisor on Artificial Intelligence and Human Rights at Amnesty International. Matt’s work has looked at AI-driven surveillance from the NYPD’s surveillance machine to Automated Apartheid in the occupied Palestinian territory. Matt is author of Migrants in the Digital Periphery: New Urban Frontiers of Controls (University of California Press, February 2025), and co-editor of Resisting Borders &amp; Technologies of Violence (Haymarket, 2024) together with Mizue Aizeki and Coline Schupfer.<br></em><br></p><p><strong><em>Marwa Fatafta</em></strong><em> leads Access Now’s policy and advocacy work on digital rights in the Middle East and North Africa (MENA) region. 
Her work spans a number of issues at the nexus of human rights and technology including content governance and platform accountability, online censorship, digital surveillance, and transnational repression. She has written extensively on the digital occupation in Palestine and focuses on the role of new technologies in armed conflicts and humanitarian contexts and their impact on historically marginalized and oppressed communities. Marwa is a Policy Analyst at Al-Shabaka: The Palestinian Policy Network, an advisory board member of the Tahrir Institute for Middle East Policy, and an advisory committee member for Bread&amp;Net. Marwa was a Fulbright scholar in the US and holds an MA in International Relations from Maxwell School of Citizenship and Public Affairs, Syracuse University. She holds a second MA in Development and Governance from the University of Duisburg-Essen.</em></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/5d16ca9c/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Logging Off w/ Adele Walton</title>
      <itunes:episode>58</itunes:episode>
      <podcast:episode>58</podcast:episode>
      <itunes:title>Logging Off w/ Adele Walton</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">3916ab76-46f5-4466-922e-d488c7f6cfac</guid>
      <link>https://share.transistor.fm/s/c59dcfcd</link>
      <description>
        <![CDATA[<p>Adele Walton’s new book <a href="https://blackwells.co.uk/bookshop/product/Logging-Off-by-Adele-Zeynep-Walton/9781398722927"><em>Logging Off: The Human Cost of our Digital World</em></a> is out NOW — for this week’s episode Alix sat down with her to discuss the book, and what pushed her to write it.</p><p>Adele shares her experiences of using social media from age ten, and growing up only ever feeling ‘understood’ by her followers. And now, the constant ‘how can I make content out of this??’ mindset has followed her into adult life.</p><p>Adele has been severely affected by online harms through the loss of her sister, and is working to use her lived experiences in her campaigning and advocacy work. The answer for Adele has never been to go full Luddite and reject social media — rather she wants to make online spaces safer for everyone.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Buy Adele’s book: <a href="https://blackwells.co.uk/bookshop/product/Logging-Off-by-Adele-Zeynep-Walton/9781398722927">Logging Off: The Human Cost of our Digital World</a></li><li><a href="https://www.theatlantic.com/technology/archive/2012/01/the-facebook-eye/251377/">The Facebook Eye</a> by Nathan Jurgenson — 2012 article from The Atlantic</li><li><a href="https://smartphonefreechildhood.co.uk/">Smartphone Free Childhood</a></li><li><a href="https://www.ripplesuicideprevention.com/information/install">Ripple</a> — a suicide prevention browser extension</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Adele Zeynep Walton is a British Turkish journalist, online safety campaigner and the author of Logging Off: The Human Cost of Our Digital World. 
She is a campaigner with Bereaved Families for Online Safety, youth ambassador for People Vs Big Tech, and a founding member of the EU youth movement Ctrl + Alt + Reclaim. She is the founder of Logging Off Club, a community which brings people together offline at phone-free events to reconnect with themselves and others across the UK. As a Gen Z who grew up on social media, Adele regularly speaks about digital wellbeing, social connection and rebuilding empathy in a polarised world. Adele has written for The Guardian, The Independent, the i, Dazed, i-D, VICE, Metro, Refinery 29, The Big Issue, Jacobin, Open Democracy, gal-dem, Computer Weekly and more. Her articles have been translated into Brazilian Portuguese, German, Italian, Swedish, Turkish and Spanish, and she has been interviewed on Times Radio, LBC Radio, Sky News, BBC Radio Scotland and Channel 4 News and more. Between 2023 and 2024 Adele was DAZED's first ever political book columnist, where she interviewed authors including Naomi Klein, Emma Dabiri, Vicky Spratt and more.</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Adele Walton’s new book <a href="https://blackwells.co.uk/bookshop/product/Logging-Off-by-Adele-Zeynep-Walton/9781398722927"><em>Logging Off: The Human Cost of our Digital World</em></a> is out NOW — for this week’s episode Alix sat down with her to discuss the book, and what pushed her to write it.</p><p>Adele shares her experiences of using social media from age ten, and growing up only ever feeling ‘understood’ by her followers. And now, the constant ‘how can I make content out of this??’ mindset has followed her into adult life.</p><p>Adele has been severely affected by online harms through the loss of her sister, and is working to use her lived experiences in her campaigning and advocacy work. The answer for Adele has never been to go full Luddite and reject social media — rather she wants to make online spaces safer for everyone.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Buy Adele’s book: <a href="https://blackwells.co.uk/bookshop/product/Logging-Off-by-Adele-Zeynep-Walton/9781398722927">Logging Off: The Human Cost of our Digital World</a></li><li><a href="https://www.theatlantic.com/technology/archive/2012/01/the-facebook-eye/251377/">The Facebook Eye</a> by Nathan Jurgenson — 2012 article from The Atlantic</li><li><a href="https://smartphonefreechildhood.co.uk/">Smartphone Free Childhood</a></li><li><a href="https://www.ripplesuicideprevention.com/information/install">Ripple</a> — a suicide prevention browser extension</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Adele Zeynep Walton is a British Turkish journalist, online safety campaigner and the author of Logging Off: The Human Cost of Our Digital World. 
She is a campaigner with Bereaved Families for Online Safety, youth ambassador for People Vs Big Tech, and a founding member of the EU youth movement Ctrl + Alt + Reclaim. She is the founder of Logging Off Club, a community which brings people together offline at phone-free events to reconnect with themselves and others across the UK. As a Gen Z who grew up on social media, Adele regularly speaks about digital wellbeing, social connection and rebuilding empathy in a polarised world. Adele has written for The Guardian, The Independent, the i, Dazed, i-D, VICE, Metro, Refinery 29, The Big Issue, Jacobin, Open Democracy, gal-dem, Computer Weekly and more. Her articles have been translated into Brazilian Portuguese, German, Italian, Swedish, Turkish and Spanish, and she has been interviewed on Times Radio, LBC Radio, Sky News, BBC Radio Scotland and Channel 4 News and more. Between 2023 and 2024 Adele was DAZED's first ever political book columnist, where she interviewed authors including Naomi Klein, Emma Dabiri, Vicky Spratt and more.</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 06 Jun 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/c59dcfcd/4b370548.mp3" length="63343910" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2637</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Adele Walton’s new book <a href="https://blackwells.co.uk/bookshop/product/Logging-Off-by-Adele-Zeynep-Walton/9781398722927"><em>Logging Off: The Human Cost of our Digital World</em></a> is out NOW — for this week’s episode Alix sat down with her to discuss the book, and what pushed her to write it.</p><p>Adele shares her experiences of using social media from age ten, and growing up only ever feeling ‘understood’ by her followers. And now, the constant ‘how can I make content out of this??’ mindset has followed her into adult life.</p><p>Adele has been severely affected by online harms through the loss of her sister, and is working to use her lived experiences in her campaigning and advocacy work. The answer for Adele has never been to go full Luddite and reject social media — rather she wants to make online spaces safer for everyone.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>Buy Adele’s book: <a href="https://blackwells.co.uk/bookshop/product/Logging-Off-by-Adele-Zeynep-Walton/9781398722927">Logging Off: The Human Cost of our Digital World</a></li><li><a href="https://www.theatlantic.com/technology/archive/2012/01/the-facebook-eye/251377/">The Facebook Eye</a> by Nathan Jurgenson — 2012 article from The Atlantic</li><li><a href="https://smartphonefreechildhood.co.uk/">Smartphone Free Childhood</a></li><li><a href="https://www.ripplesuicideprevention.com/information/install">Ripple</a> — a suicide prevention browser extension</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Adele Zeynep Walton is a British Turkish journalist, online safety campaigner and the author of Logging Off: The Human Cost of Our Digital World. 
She is a campaigner with Bereaved Families for Online Safety, youth ambassador for People Vs Big Tech, and a founding member of the EU youth movement Ctrl + Alt + Reclaim. She is the founder of Logging Off Club, a community which brings people together offline at phone-free events to reconnect with themselves and others across the UK. As a Gen Z who grew up on social media, Adele regularly speaks about digital wellbeing, social connection and rebuilding empathy in a polarised world. Adele has written for The Guardian, The Independent, the i, Dazed, i-D, VICE, Metro, Refinery 29, The Big Issue, Jacobin, Open Democracy, gal-dem, Computer Weekly and more. Her articles have been translated into Brazilian Portuguese, German, Italian, Swedish, Turkish and Spanish, and she has been interviewed on Times Radio, LBC Radio, Sky News, BBC Radio Scotland and Channel 4 News and more. Between 2023 and 2024 Adele was DAZED's first ever political book columnist, where she interviewed authors including Naomi Klein, Emma Dabiri, Vicky Spratt and more.</em></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/c59dcfcd/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Short: Sam Altman’s World w/ Billy Perrigo</title>
      <itunes:episode>57</itunes:episode>
      <podcast:episode>57</podcast:episode>
      <itunes:title>Short: Sam Altman’s World w/ Billy Perrigo</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">1b9f7c1f-7e59-41b4-97cb-8e638f7be478</guid>
      <link>https://share.transistor.fm/s/5d03a952</link>
      <description>
        <![CDATA[<p>Sam Altman is doing another big infrastructure push with World (previously Worldcoin) — the universal human verification system.</p><p>We had journalist Billy Perrigo on to chat about what’s what with World. Is Sam Altman just providing a solution to a problem that he himself caused with OpenAI? Do we really need human verification, or is this just a way to side-step the AI content watermarking issue?</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://time.com/7288387/sam-altman-orb-tools-for-humanity/">The Orb Will See You Now</a> by Billy Perrigo</li><li><a href="https://www.axios.com/2024/04/19/ai-agents-assistants-ethics-alignment-google">The ethical implications of AI agents</a> by DeepMind</li></ul><p><strong>Computer Says Maybe Shorts bring in experts to give their ten-minute take on recent news. If there’s ever a news story you think we should bring in expertise on for the show, please email </strong><a href="mailto:pod@saysmaybe.com"><strong>pod@saysmaybe.com</strong></a><strong><br></strong><br></p><p><em>Billy Perrigo is a correspondent at TIME, based in the London bureau. He covers the tech industry, focusing on the companies reshaping our world in strange and unexpected ways. His investigation ‘</em><a href="https://time.com/6147458/facebook-africa-content-moderation-employee-treatment/"><em>Inside Facebook’s African Sweatshop</em></a><em>’ was a finalist for the 2022 Orwell Prize.</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Sam Altman is doing another big infrastructure push with World (previously Worldcoin) — the universal human verification system.</p><p>We had journalist Billy Perrigo on to chat about what’s what with World. Is Sam Altman just providing a solution to a problem that he himself caused with OpenAI? Do we really need human verification, or is this just a way to side-step the AI content watermarking issue?</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://time.com/7288387/sam-altman-orb-tools-for-humanity/">The Orb Will See You Now</a> by Billy Perrigo</li><li><a href="https://www.axios.com/2024/04/19/ai-agents-assistants-ethics-alignment-google">The ethical implications of AI agents</a> by DeepMind</li></ul><p><strong>Computer Says Maybe Shorts bring in experts to give their ten-minute take on recent news. If there’s ever a news story you think we should bring in expertise on for the show, please email </strong><a href="mailto:pod@saysmaybe.com"><strong>pod@saysmaybe.com</strong></a><strong><br></strong><br></p><p><em>Billy Perrigo is a correspondent at TIME, based in the London bureau. He covers the tech industry, focusing on the companies reshaping our world in strange and unexpected ways. His investigation ‘</em><a href="https://time.com/6147458/facebook-africa-content-moderation-employee-treatment/"><em>Inside Facebook’s African Sweatshop</em></a><em>’ was a finalist for the 2022 Orwell Prize.</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 04 Jun 2025 16:42:55 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/5d03a952/03d3758d.mp3" length="30115192" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>1253</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Sam Altman is doing another big infrastructure push with World (previously Worldcoin) — the universal human verification system.</p><p>We had journalist Billy Perrigo on to chat about what’s what with World. Is Sam Altman just providing a solution to a problem that he himself caused with OpenAI? Do we really need human verification, or is this just a way to side-step the AI content watermarking issue?</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://time.com/7288387/sam-altman-orb-tools-for-humanity/">The Orb Will See You Now</a> by Billy Perrigo</li><li><a href="https://www.axios.com/2024/04/19/ai-agents-assistants-ethics-alignment-google">The ethical implications of AI agents</a> by DeepMind</li></ul><p><strong>Computer Says Maybe Shorts bring in experts to give their ten-minute take on recent news. If there’s ever a news story you think we should bring in expertise on for the show, please email </strong><a href="mailto:pod@saysmaybe.com"><strong>pod@saysmaybe.com</strong></a><strong><br></strong><br></p><p><em>Billy Perrigo is a correspondent at TIME, based in the London bureau. He covers the tech industry, focusing on the companies reshaping our world in strange and unexpected ways. His investigation ‘</em><a href="https://time.com/6147458/facebook-africa-content-moderation-employee-treatment/"><em>Inside Facebook’s African Sweatshop</em></a><em>’ was a finalist for the 2022 Orwell Prize.</em></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/5d03a952/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>The Collective Intelligence Project w/ Divya Siddarth and Zarinah Agnew</title>
      <itunes:episode>56</itunes:episode>
      <podcast:episode>56</podcast:episode>
      <itunes:title>The Collective Intelligence Project w/ Divya Siddarth and Zarinah Agnew</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">1591f457-699a-4645-a5e5-d08855ca586d</guid>
      <link>https://share.transistor.fm/s/bf0d5209</link>
      <description>
        <![CDATA[<p>Most of the time we interview people who say No to AI. In this interview, Georgia and Alix talk to two people who look at AI and ask How and For What. And lots of other questions too.</p><p>Divya Siddarth and Zarinah Agnew from the Collective Intelligence Project share CIP’s work using AI systems to explore more consultative democratic governance, reframe the social and relational nature of knowledge, and pull our thinking out of the individual frame and into collective and communal applications.</p><p>In Zarinah’s words, they are interested in what happens “between brains, not within brains”. A ‘community chatbot’ might sound cringe but Divya and Zarinah are doing work to make these valuable and useful, rather than addictive and sycophantic. If you’re skeptical of the utility of engaging in these toxic corporate towers of AI at all, this is an episode for you.</p><p><strong>Further reading &amp; resources</strong></p><ul><li><a href="https://www.thenewatlantis.com/publications/why-we-need-amistics-for-ai">Why We Need an Amistics for AI</a> by Brian Boyd</li><li><a href="https://www.cip.org/blog/ccai">Collective constitutional AI project</a> with CIP and Anthropic</li><li><a href="https://www.cip.org/globaldialogues">Global Dialogues</a> launch announcement</li><li><a href="https://www.404media.co/i-tested-the-ai-that-calls-your-elderly-parents-if-you-cant-bothered/">I Tested The AI That Calls Your Elderly Parents If You Can't Be Bothered</a> by Joseph Cox from 404 Media</li><li><a href="https://www.saysmaybe.com/podcast/worker-power-big-tech-bossmen-w-david-seligman">Worker Power &amp; Big Tech Bossmen w/ David Seligman</a></li><li><a href="https://billyperrigo.substack.com/p/the-orb-will-see-you-now?utm_source=post-email-title&amp;publication_id=2172989&amp;post_id=164545631&amp;utm_campaign=email-post-title&amp;isFreemail=true&amp;r=qikw&amp;triedRedirect=true&amp;utm_medium=email">The Orb Will See You Now</a> by Billy Perrigo</li><li><a 
href="https://shorensteincenter.org/commentary/intimacy-dividend-ai-might-transform-news-media-consumption/">The Intimacy Dividend</a> by Shuwei Fang</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Divya Siddarth is the executive director and co-founder of CIP. Previously, she was a political economist and social technologist in Microsoft’s Office of the CTO and the AI and Democracy lead at the U.K.’s AI Safety Institute, and has held positions at the Ethics in AI Institute at Oxford, the Ostrom Workshop, and the Harvard Safra Center. She graduated from Stanford with a B.S. in Computational Decision Analysis in 2018.<br></em><br></p><p><em>Zarinah is Research Director at the Collective Intelligence Project, where they work on transforming public input into impactful change in the AI ecosystem. Previously a neuroscientist, Zarinah now focuses on the science of collectivity and emerging related technologies. Zarinah is faculty at the London College of Political Technology where they teach on Future Crafting. In their spare time, some might argue, they run too many non-profits.</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Most of the time we interview people who say No to AI. In this interview, Georgia and Alix talk to two people who look at AI and ask How and For What. And lots of other questions too.</p><p>Divya Siddarth and Zarinah Agnew from the Collective Intelligence Project share CIP’s work using AI systems to explore more consultative democratic governance, and how to reframe the social and relational nature of knowledge, pulling our thinking out of the individual frame and into collective and communal applications.</p><p>In Zarinah’s words, they are interested in what happens “between brains, not within brains”. A ‘community chat bot’ might sound cringe, but Divya and Zarinah are doing work to make these valuable and useful, rather than addictive and sycophantic. If you’re skeptical of the utility of engaging with these toxic corporate towers of AI at all, this is an episode for you.</p><p><strong>Further reading &amp; resources</strong></p><ul><li><a href="https://www.thenewatlantis.com/publications/why-we-need-amistics-for-ai">Why We Need an Amistics for AI</a> by Brian Boyd</li><li><a href="https://www.cip.org/blog/ccai">Collective constitutional AI project</a> with CIP and Anthropic</li><li><a href="https://www.cip.org/globaldialogues">Global Dialogues</a> launch announcement</li><li><a href="https://www.404media.co/i-tested-the-ai-that-calls-your-elderly-parents-if-you-cant-bothered/">I Tested The AI That Calls Your Elderly Parents If You Can't Be Bothered</a> by Joseph Cox from 404 Media</li><li><a href="https://www.saysmaybe.com/podcast/worker-power-big-tech-bossmen-w-david-seligman">Worker Power &amp; Big Tech Bossmen w/ David Seligman</a></li><li><a href="https://billyperrigo.substack.com/p/the-orb-will-see-you-now?utm_source=post-email-title&amp;publication_id=2172989&amp;post_id=164545631&amp;utm_campaign=email-post-title&amp;isFreemail=true&amp;r=qikw&amp;triedRedirect=true&amp;utm_medium=email">The Orb Will See You Now</a> by Billy Perrigo</li><li><a href="https://shorensteincenter.org/commentary/intimacy-dividend-ai-might-transform-news-media-consumption/">The Intimacy Dividend</a> by Shuwei Fang</li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Divya Siddarth is the executive director and co-founder of CIP. Previously, she was a political economist and social technologist in Microsoft’s Office of the CTO and the AI and Democracy lead at the U.K.’s AI Safety Institute, and held positions at the Ethics in AI Institute at Oxford, the Ostrom Workshop, and the Harvard Safra Center. She graduated from Stanford with a B.S. in Computational Decision Analysis in 2018.</em></p><p><em>Zarinah is Research Director at the Collective Intelligence Project, where they work on transforming public input into impactful change in the AI ecosystem. Previously a neuroscientist, Zarinah now focuses on the science of collectivity and emerging related technologies. Zarinah is faculty at the London College of Political Technology, where they teach Future Crafting. In their spare time, some might argue, they run too many non-profits.</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 30 May 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/bf0d5209/cecb9334.mp3" length="78546492" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>3271</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Most of the time we interview people who say No to AI. In this interview, Georgia and Alix talk to two people who look at AI and ask How and For What. And lots of other questions too.</p><p>Divya Siddarth and Zarinah Agnew from the Collective Intelligence Project share CIP’s work using AI systems to explore more consultative democratic governance, and how to reframe the social and relational nature of knowledge, pulling our thinking out of the individual frame and into collective and communal applications.</p><p>In Zarinah’s words, they are interested in what happens “between brains, not within brains”. A ‘community chat bot’ might sound cringe, but Divya and Zarinah are doing work to make these valuable and useful, rather than addictive and sycophantic. If you’re skeptical of the utility of engaging with these toxic corporate towers of AI at all, this is an episode for you.</p><p><strong>Further reading &amp; resources</strong></p><ul><li><a href="https://www.thenewatlantis.com/publications/why-we-need-amistics-for-ai">Why We Need an Amistics for AI</a> by Brian Boyd</li><li><a href="https://www.cip.org/blog/ccai">Collective constitutional AI project</a> with CIP and Anthropic</li><li><a href="https://www.cip.org/globaldialogues">Global Dialogues</a> launch announcement</li><li><a href="https://www.404media.co/i-tested-the-ai-that-calls-your-elderly-parents-if-you-cant-bothered/">I Tested The AI That Calls Your Elderly Parents If You Can't Be Bothered</a> by Joseph Cox from 404 Media</li><li><a href="https://www.saysmaybe.com/podcast/worker-power-big-tech-bossmen-w-david-seligman">Worker Power &amp; Big Tech Bossmen w/ David Seligman</a></li><li><a href="https://billyperrigo.substack.com/p/the-orb-will-see-you-now?utm_source=post-email-title&amp;publication_id=2172989&amp;post_id=164545631&amp;utm_campaign=email-post-title&amp;isFreemail=true&amp;r=qikw&amp;triedRedirect=true&amp;utm_medium=email">The Orb Will See You Now</a> by Billy Perrigo</li><li><a href="https://shorensteincenter.org/commentary/intimacy-dividend-ai-might-transform-news-media-consumption/">The Intimacy Dividend</a> by Shuwei Fang</li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Divya Siddarth is the executive director and co-founder of CIP. Previously, she was a political economist and social technologist in Microsoft’s Office of the CTO and the AI and Democracy lead at the U.K.’s AI Safety Institute, and held positions at the Ethics in AI Institute at Oxford, the Ostrom Workshop, and the Harvard Safra Center. She graduated from Stanford with a B.S. in Computational Decision Analysis in 2018.</em></p><p><em>Zarinah is Research Director at the Collective Intelligence Project, where they work on transforming public input into impactful change in the AI ecosystem. Previously a neuroscientist, Zarinah now focuses on the science of collectivity and emerging related technologies. Zarinah is faculty at the London College of Political Technology, where they teach Future Crafting. In their spare time, some might argue, they run too many non-profits.</em></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/bf0d5209/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Net0++: Data Center Sprawl | NEW Research from The Maybe</title>
      <itunes:episode>55</itunes:episode>
      <podcast:episode>55</podcast:episode>
      <itunes:title>Net0++: Data Center Sprawl | NEW Research from The Maybe</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ac3be5a5-be07-403d-96c8-7aaaea00c6dc</guid>
      <link>https://share.transistor.fm/s/aef905bf</link>
      <description>
        <![CDATA[<p>We’re excited to finally share our report on data center expansion and resistance around the world. It’s been a labor of love, but it also showcases the amazing work of the many organisations, activists, and journalists around the world who are working to create space for meaningful consultation about hugely consequential decisions. <a href="https://themaybe.org/research/data-center-report-where-cloud-meets-cement">Download it here</a>.</p><p>In short, the report includes five case studies on data center development across the globe. We focused on understanding how companies approach policymakers, what information is made available to communities, how decisions are made to develop data centers, when communities decide to resist their development, and what the outcomes have been.</p><p>The ONE big similarity across all case studies is that information about data center development was consistently hard to find: environmental impacts, urban planning details, and even the identity of the companies proposing these projects have been almost impossible to uncover.</p><p>We end the report with some recommendations for how to increase transparency and crack open democratic consultation with communities on the front lines of this behemoth tech infrastructure.</p><p><strong>Further reading:</strong></p><ul><li><a href="https://themaybe.org/research/data-center-report-where-cloud-meets-cement">Read the report here!</a></li><li><a href="https://www.youtube.com/watch?v=DGjj7wDYaiI">A short More Perfect Union doc about living 400 yards from a data center</a></li><li><a href="https://www.datacenterdynamics.com/en/">Data Center Dynamics</a></li><li><a href="https://www.techpolicy.press/xais-memphis-neighbors-push-for-facts-and-fairness/">xAI's Memphis Neighbors Push for Facts and Fairness</a> from Tech Policy Press</li><li>If you have any thoughts or feedback about the report, please email <a href="mailto:research@themaybe.org">research@themaybe.org</a></li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get invites for community calls around data center resistance.</strong></p><p><em><a href="https://chrisscameron.com/">Chris Cameron</a> has been a scientist and researcher for over a decade and has been working in environmental justice policy since 2021. Her interest in investigating human rights violations related to environmental injustices has led to her current research into strategic litigation support for communities experiencing harm from data centers. Chris’s previous work has centered around co-designing projects with communities related to environmental rights advocacy and digital storytelling. She also hosts a radio show called Sound Ecology, a space for climate-oriented artists to share their sonic investigations as toolkits for the climate collapse. Contact Chris at <a href="mailto:cameroncscoop@gmail.com">cameroncscoop@gmail.com</a> to speak more about data center litigation strategies and the intersection of technology and environmental justice.</em></p><p><em><a href="https://prathm.com/">Prathm Juneja</a> is the Research Strategist at The Maybe and a PhD Candidate in Social Data Science at the University of Oxford, where his research examines, from a technical and ethical perspective, AI &amp; Elections. He works at the intersection of AI, research, industry, and politics, spending most of his time advising governments, civil society organizations, and companies on civic tech and tech policy.</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>We’re excited to finally share our report on data center expansion and resistance around the world. It’s been a labor of love, but it also showcases the amazing work of the many organisations, activists, and journalists around the world who are working to create space for meaningful consultation about hugely consequential decisions. <a href="https://themaybe.org/research/data-center-report-where-cloud-meets-cement">Download it here</a>.</p><p>In short, the report includes five case studies on data center development across the globe. We focused on understanding how companies approach policymakers, what information is made available to communities, how decisions are made to develop data centers, when communities decide to resist their development, and what the outcomes have been.</p><p>The ONE big similarity across all case studies is that information about data center development was consistently hard to find: environmental impacts, urban planning details, and even the identity of the companies proposing these projects have been almost impossible to uncover.</p><p>We end the report with some recommendations for how to increase transparency and crack open democratic consultation with communities on the front lines of this behemoth tech infrastructure.</p><p><strong>Further reading:</strong></p><ul><li><a href="https://themaybe.org/research/data-center-report-where-cloud-meets-cement">Read the report here!</a></li><li><a href="https://www.youtube.com/watch?v=DGjj7wDYaiI">A short More Perfect Union doc about living 400 yards from a data center</a></li><li><a href="https://www.datacenterdynamics.com/en/">Data Center Dynamics</a></li><li><a href="https://www.techpolicy.press/xais-memphis-neighbors-push-for-facts-and-fairness/">xAI's Memphis Neighbors Push for Facts and Fairness</a> from Tech Policy Press</li><li>If you have any thoughts or feedback about the report, please email <a href="mailto:research@themaybe.org">research@themaybe.org</a></li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get invites for community calls around data center resistance.</strong></p><p><em><a href="https://chrisscameron.com/">Chris Cameron</a> has been a scientist and researcher for over a decade and has been working in environmental justice policy since 2021. Her interest in investigating human rights violations related to environmental injustices has led to her current research into strategic litigation support for communities experiencing harm from data centers. Chris’s previous work has centered around co-designing projects with communities related to environmental rights advocacy and digital storytelling. She also hosts a radio show called Sound Ecology, a space for climate-oriented artists to share their sonic investigations as toolkits for the climate collapse. Contact Chris at <a href="mailto:cameroncscoop@gmail.com">cameroncscoop@gmail.com</a> to speak more about data center litigation strategies and the intersection of technology and environmental justice.</em></p><p><em><a href="https://prathm.com/">Prathm Juneja</a> is the Research Strategist at The Maybe and a PhD Candidate in Social Data Science at the University of Oxford, where his research examines, from a technical and ethical perspective, AI &amp; Elections. He works at the intersection of AI, research, industry, and politics, spending most of his time advising governments, civil society organizations, and companies on civic tech and tech policy.</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 23 May 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/aef905bf/52da1535.mp3" length="78036022" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>3249</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>We’re excited to finally share our report on data center expansion and resistance around the world. It’s been a labor of love, but it also showcases the amazing work of the many organisations, activists, and journalists around the world who are working to create space for meaningful consultation about hugely consequential decisions. <a href="https://themaybe.org/research/data-center-report-where-cloud-meets-cement">Download it here</a>.</p><p>In short, the report includes five case studies on data center development across the globe. We focused on understanding how companies approach policymakers, what information is made available to communities, how decisions are made to develop data centers, when communities decide to resist their development, and what the outcomes have been.</p><p>The ONE big similarity across all case studies is that information about data center development was consistently hard to find: environmental impacts, urban planning details, and even the identity of the companies proposing these projects have been almost impossible to uncover.</p><p>We end the report with some recommendations for how to increase transparency and crack open democratic consultation with communities on the front lines of this behemoth tech infrastructure.</p><p><strong>Further reading:</strong></p><ul><li><a href="https://themaybe.org/research/data-center-report-where-cloud-meets-cement">Read the report here!</a></li><li><a href="https://www.youtube.com/watch?v=DGjj7wDYaiI">A short More Perfect Union doc about living 400 yards from a data center</a></li><li><a href="https://www.datacenterdynamics.com/en/">Data Center Dynamics</a></li><li><a href="https://www.techpolicy.press/xais-memphis-neighbors-push-for-facts-and-fairness/">xAI's Memphis Neighbors Push for Facts and Fairness</a> from Tech Policy Press</li><li>If you have any thoughts or feedback about the report, please email <a href="mailto:research@themaybe.org">research@themaybe.org</a></li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get invites for community calls around data center resistance.</strong></p><p><em><a href="https://chrisscameron.com/">Chris Cameron</a> has been a scientist and researcher for over a decade and has been working in environmental justice policy since 2021. Her interest in investigating human rights violations related to environmental injustices has led to her current research into strategic litigation support for communities experiencing harm from data centers. Chris’s previous work has centered around co-designing projects with communities related to environmental rights advocacy and digital storytelling. She also hosts a radio show called Sound Ecology, a space for climate-oriented artists to share their sonic investigations as toolkits for the climate collapse. Contact Chris at <a href="mailto:cameroncscoop@gmail.com">cameroncscoop@gmail.com</a> to speak more about data center litigation strategies and the intersection of technology and environmental justice.</em></p><p><em><a href="https://prathm.com/">Prathm Juneja</a> is the Research Strategist at The Maybe and a PhD Candidate in Social Data Science at the University of Oxford, where his research examines, from a technical and ethical perspective, AI &amp; Elections. He works at the intersection of AI, research, industry, and politics, spending most of his time advising governments, civil society organizations, and companies on civic tech and tech policy.</em></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/aef905bf/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Net 0++: AI Thirst in a Water-Scarce World w/ Julie McCarthy</title>
      <itunes:episode>54</itunes:episode>
      <podcast:episode>54</podcast:episode>
      <itunes:title>Net 0++: AI Thirst in a Water-Scarce World w/ Julie McCarthy</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f4bc1d30-6fad-40f0-a55a-e43620e0f16c</guid>
      <link>https://share.transistor.fm/s/deb10048</link>
      <description>
        <![CDATA[<p>Last year, Elon Musk’s xAI <a href="https://time.com/7021709/elon-musk-xai-grok-memphis/">built a data centre in Memphis in 19 days</a> — and the local government only found out about it on the 20th day. How?</p><p>Julie McCarthy and her team at NatureFinance have just released a report about the nature-related impacts of data center development globally. There are some pretty dire statistics in there: 55% of data centers are developed in areas that are already at risk of drought. So why do they get built there?</p><p>Julie also shares the longer arc of her career, which began in extractive industry transparency and included time leading the Open Government Partnership and the Economic Justice Program at Open Society Foundations. She brings all of that experience together for an insightful conversation about what is happening with tech infrastructure expansion and what we should do about it.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.kateraworth.com/doughnut/">Kate Raworth’s Doughnut Economics</a></li><li><a href="https://www.naturefinance.net/">NatureFinance website</a></li><li><a href="https://www.naturefinance.net/resources-tools/navigating-ais-thirst-in-a-water-scarce-world/">Navigating AI’s Thirst in a Water-Scarce World</a> — by NatureFinance</li><li><a href="https://time.com/7021709/elon-musk-xai-grok-memphis/">Elon Musk building an xAI data centre in 19 days</a> — report by Time Magazine</li><li><a href="https://www.opensocietyfoundations.org/voices/topics/economic-justice">OSF’s Economic Justice Programme</a></li><li><a href="https://marianamazzucato.com/books/the-entrepreneurial-state/">The Entrepreneurial State</a> by Mariana Mazzucato</li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Julie is NatureFinance’s CEO. She was founding co-director of the Open Society Foundations’ (OSF) Economic Justice Program, a $100 million per annum global grantmaking and impact investment program focused on issues of fiscal justice, workers’ rights, and corporate power. Previous roles include serving as the founding director of the Open Government Partnership (OGP), and as a Franklin Fellow and peacebuilding adviser at the U.S. Mission to the United Nations, focused on Liberia. Prior to this, McCarthy co-founded the Natural Resource Governance Institute (NRGI), serving as its deputy director until 2009. She is a Brookings non-resident fellow in the Center for Sustainable Development and an Aspen Civil Society Fellow. Julie lives with her three children in Warwick, NY.</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Last year, Elon Musk’s xAI <a href="https://time.com/7021709/elon-musk-xai-grok-memphis/">built a data centre in Memphis in 19 days</a> — and the local government only found out about it on the 20th day. How?</p><p>Julie McCarthy and her team at NatureFinance have just released a report about the nature-related impacts of data center development globally. There are some pretty dire statistics in there: 55% of data centers are developed in areas that are already at risk of drought. So why do they get built there?</p><p>Julie also shares the longer arc of her career, which began in extractive industry transparency and included time leading the Open Government Partnership and the Economic Justice Program at Open Society Foundations. She brings all of that experience together for an insightful conversation about what is happening with tech infrastructure expansion and what we should do about it.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.kateraworth.com/doughnut/">Kate Raworth’s Doughnut Economics</a></li><li><a href="https://www.naturefinance.net/">NatureFinance website</a></li><li><a href="https://www.naturefinance.net/resources-tools/navigating-ais-thirst-in-a-water-scarce-world/">Navigating AI’s Thirst in a Water-Scarce World</a> — by NatureFinance</li><li><a href="https://time.com/7021709/elon-musk-xai-grok-memphis/">Elon Musk building an xAI data centre in 19 days</a> — report by Time Magazine</li><li><a href="https://www.opensocietyfoundations.org/voices/topics/economic-justice">OSF’s Economic Justice Programme</a></li><li><a href="https://marianamazzucato.com/books/the-entrepreneurial-state/">The Entrepreneurial State</a> by Mariana Mazzucato</li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Julie is NatureFinance’s CEO. She was founding co-director of the Open Society Foundations’ (OSF) Economic Justice Program, a $100 million per annum global grantmaking and impact investment program focused on issues of fiscal justice, workers’ rights, and corporate power. Previous roles include serving as the founding director of the Open Government Partnership (OGP), and as a Franklin Fellow and peacebuilding adviser at the U.S. Mission to the United Nations, focused on Liberia. Prior to this, McCarthy co-founded the Natural Resource Governance Institute (NRGI), serving as its deputy director until 2009. She is a Brookings non-resident fellow in the Center for Sustainable Development and an Aspen Civil Society Fellow. Julie lives with her three children in Warwick, NY.</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 16 May 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/deb10048/9db62e4b.mp3" length="77077107" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>3209</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Last year, Elon Musk’s xAI <a href="https://time.com/7021709/elon-musk-xai-grok-memphis/">built a data centre in Memphis in 19 days</a> — and the local government only found out about it on the 20th day. How?</p><p>Julie McCarthy and her team at NatureFinance have just released a report about the nature-related impacts of data center development globally. There are some pretty dire statistics in there: 55% of data centers are developed in areas that are already at risk of drought. So why do they get built there?</p><p>Julie also shares the longer arc of her career, which began in extractive industry transparency and included time leading the Open Government Partnership and the Economic Justice Program at Open Society Foundations. She brings all of that experience together for an insightful conversation about what is happening with tech infrastructure expansion and what we should do about it.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.kateraworth.com/doughnut/">Kate Raworth’s Doughnut Economics</a></li><li><a href="https://www.naturefinance.net/">NatureFinance website</a></li><li><a href="https://www.naturefinance.net/resources-tools/navigating-ais-thirst-in-a-water-scarce-world/">Navigating AI’s Thirst in a Water-Scarce World</a> — by NatureFinance</li><li><a href="https://time.com/7021709/elon-musk-xai-grok-memphis/">Elon Musk building an xAI data centre in 19 days</a> — report by Time Magazine</li><li><a href="https://www.opensocietyfoundations.org/voices/topics/economic-justice">OSF’s Economic Justice Programme</a></li><li><a href="https://marianamazzucato.com/books/the-entrepreneurial-state/">The Entrepreneurial State</a> by Mariana Mazzucato</li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Julie is NatureFinance’s CEO. She was founding co-director of the Open Society Foundations’ (OSF) Economic Justice Program, a $100 million per annum global grantmaking and impact investment program focused on issues of fiscal justice, workers’ rights, and corporate power. Previous roles include serving as the founding director of the Open Government Partnership (OGP), and as a Franklin Fellow and peacebuilding adviser at the U.S. Mission to the United Nations, focused on Liberia. Prior to this, McCarthy co-founded the Natural Resource Governance Institute (NRGI), serving as its deputy director until 2009. She is a Brookings non-resident fellow in the Center for Sustainable Development and an Aspen Civil Society Fellow. Julie lives with her three children in Warwick, NY.</em></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/deb10048/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Short: Open AI for...Countries? w/ Marietje Schaake</title>
      <itunes:episode>53</itunes:episode>
      <podcast:episode>53</podcast:episode>
      <itunes:title>Short: Open AI for...Countries? w/ Marietje Schaake</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">3be80c36-712e-4cd3-ab45-1df8846c363e</guid>
      <link>https://share.transistor.fm/s/b57b5a51</link>
      <description>
        <![CDATA[<p>This is another Computer Says Maybe short, this time with Marietje Schaake (author of <a href="https://www.csap.cam.ac.uk/events/s-t-lee-public-policy-lecture-tech-coup/">The Tech Coup</a>), to discuss <a href="https://openai.com/global-affairs/openai-for-countries/">OpenAI’s recent announcement</a>: they want to partner with governments all around the world to build ‘democratic AI rails’ — sounds bad!</p><p><strong>Computer Says Maybe Shorts bring in experts to give their ten-minute take on recent news. If there’s ever a news story you think we should bring in expertise on for the show, please email </strong><a href="mailto:pod@saysmaybe.com"><strong>pod@saysmaybe.com</strong></a></p><p><em>Marietje Schaake is a non-resident Fellow at Stanford’s Cyber Policy Center and at the Institute for Human-Centered AI. She is a columnist for the Financial Times and serves on a number of not-for-profit boards as well as the UN's High Level Advisory Body on AI. Between 2009 and 2019 she served as a Member of the European Parliament, where she worked on trade, foreign, and tech policy. She is the author of </em><a href="https://thetechcoup.com/"><em>The Tech Coup</em></a><em>.</em></p><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is another Computer Says Maybe short, this time with Marietje Schaake (author of <a href="https://www.csap.cam.ac.uk/events/s-t-lee-public-policy-lecture-tech-coup/">The Tech Coup</a>), to discuss <a href="https://openai.com/global-affairs/openai-for-countries/">OpenAI’s recent announcement</a>: they want to partner with governments all around the world to build ‘democratic AI rails’ — sounds bad!</p><p><strong>Computer Says Maybe Shorts bring in experts to give their ten-minute take on recent news. If there’s ever a news story you think we should bring in expertise on for the show, please email </strong><a href="mailto:pod@saysmaybe.com"><strong>pod@saysmaybe.com</strong></a></p><p><em>Marietje Schaake is a non-resident Fellow at Stanford’s Cyber Policy Center and at the Institute for Human-Centered AI. She is a columnist for the Financial Times and serves on a number of not-for-profit boards as well as the UN's High Level Advisory Body on AI. Between 2009 and 2019 she served as a Member of the European Parliament, where she worked on trade, foreign, and tech policy. She is the author of </em><a href="https://thetechcoup.com/"><em>The Tech Coup</em></a><em>.</em></p><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </content:encoded>
      <pubDate>Thu, 15 May 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/b57b5a51/e1861e5a.mp3" length="20013269" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>832</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is another Computer Says Maybe short, this time with Marietje Schaake (author of <a href="https://www.csap.cam.ac.uk/events/s-t-lee-public-policy-lecture-tech-coup/">The Tech Coup</a>), to discuss <a href="https://openai.com/global-affairs/openai-for-countries/">OpenAI’s recent announcement</a>: they want to partner with governments all around the world to build ‘democratic AI rails’ — sounds bad!</p><p><strong>Computer Says Maybe Shorts bring in experts to give their ten-minute take on recent news. If there’s ever a news story you think we should bring in expertise on for the show, please email </strong><a href="mailto:pod@saysmaybe.com"><strong>pod@saysmaybe.com</strong></a></p><p><em>Marietje Schaake is a non-resident Fellow at Stanford’s Cyber Policy Center and at the Institute for Human-Centered AI. She is a columnist for the Financial Times and serves on a number of not-for-profit boards as well as the UN's High Level Advisory Body on AI. Between 2009 and 2019 she served as a Member of the European Parliament, where she worked on trade, foreign, and tech policy. She is the author of </em><a href="https://thetechcoup.com/"><em>The Tech Coup</em></a><em>.</em></p><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/b57b5a51/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Short: What Just Happened to 23andMe? w/ Jenny Reardon</title>
      <itunes:episode>52</itunes:episode>
      <podcast:episode>52</podcast:episode>
      <itunes:title>Short: What Just Happened to 23andMe? w/ Jenny Reardon</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c5255d85-0770-4e5f-b054-27a47f070390</guid>
      <link>https://share.transistor.fm/s/16b91431</link>
      <description>
        <![CDATA[<p>Personalised genotyping company 23andMe just went bankrupt — what’s gonna happen to all that genetic data?</p><p>We brought back genomics professor Jenny Reardon to discuss the crushing void that was 23andMe’s business model — and how many companies like it have failed before.</p><p><strong>This is a Computer Says Maybe Short, where we bring in an expert to give their take on recent news. If there’s ever a news story you think we should bring in expertise on for the show, please email </strong><a href="mailto:pod@saysmaybe.com"><strong>pod@saysmaybe.com</strong></a></p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://press.uchicago.edu/ucp/books/book/chicago/P/bo22726485.html">The Postgenomic Condition</a> by Jenny Reardon</li><li><a href="https://saysmaybe.com/podcast/power-over-precision-w-jenny-reardon">Power Over Precision</a> — Jenny’s first episode with us</li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Jenny Reardon is a Professor of Sociology and the Founding Director of the Science and Justice Research Center at the University of California, Santa Cruz. Her research draws into focus questions about identity, justice and democracy that are often silently embedded in scientific ideas and practices. She is the author of </em><a href="https://press.princeton.edu/books/paperback/9780691118574/race-to-the-finish"><em>Race to the Finish: Identity and Governance in an Age of Genomics</em></a><em> (Princeton University Press) and, most recently, </em><a href="http://press.uchicago.edu/ucp/books/book/chicago/P/bo22726485.html"><em>The Postgenomic Condition: Ethics, Justice, Knowledge After the Genome</em></a><em> (University of Chicago Press).</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Personalised genotyping company 23andMe just went bankrupt — what’s gonna happen to all that genetic data?</p><p>We brought back genomics professor Jenny Reardon to discuss the crushing void that was 23andMe’s business model — and how many companies like it have failed before.</p><p><strong>This is a Computer Says Maybe Short, where we bring in an expert to give their take on recent news. If there’s ever a news story you think we should bring in expertise on for the show, please email </strong><a href="mailto:pod@saysmaybe.com"><strong>pod@saysmaybe.com</strong></a></p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://press.uchicago.edu/ucp/books/book/chicago/P/bo22726485.html">The Postgenomic Condition</a> by Jenny Reardon</li><li><a href="https://saysmaybe.com/podcast/power-over-precision-w-jenny-reardon">Power Over Precision</a> — Jenny’s first episode with us</li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Jenny Reardon is a Professor of Sociology and the Founding Director of the Science and Justice Research Center at the University of California, Santa Cruz. Her research draws into focus questions about identity, justice and democracy that are often silently embedded in scientific ideas and practices. She is the author of </em><a href="https://press.princeton.edu/books/paperback/9780691118574/race-to-the-finish"><em>Race to the Finish: Identity and Governance in an Age of Genomics</em></a><em> (Princeton University Press) and, most recently, </em><a href="http://press.uchicago.edu/ucp/books/book/chicago/P/bo22726485.html"><em>The Postgenomic Condition: Ethics, Justice, Knowledge After the Genome</em></a><em> (University of Chicago Press).</em></p>]]>
      </content:encoded>
      <pubDate>Tue, 13 May 2025 02:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/16b91431/a5783310.mp3" length="21606228" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>898</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Personalised genotyping company 23andMe just went bankrupt — what’s gonna happen to all that genetic data?</p><p>We brought back genomics professor Jenny Reardon to discuss the crushing void that was 23andMe’s business model — and how many companies like it have failed before.</p><p><strong>This is a Computer Says Maybe Short, where we bring in an expert to give their take on recent news. If there’s ever a news story you think we should bring in expertise on for the show, please email </strong><a href="mailto:pod@saysmaybe.com"><strong>pod@saysmaybe.com</strong></a></p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://press.uchicago.edu/ucp/books/book/chicago/P/bo22726485.html">The Postgenomic Condition</a> by Jenny Reardon</li><li><a href="https://saysmaybe.com/podcast/power-over-precision-w-jenny-reardon">Power Over Precision</a> — Jenny’s first episode with us</li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Jenny Reardon is a Professor of Sociology and the Founding Director of the Science and Justice Research Center at the University of California, Santa Cruz. Her research draws into focus questions about identity, justice and democracy that are often silently embedded in scientific ideas and practices. She is the author of </em><a href="https://press.princeton.edu/books/paperback/9780691118574/race-to-the-finish"><em>Race to the Finish: Identity and Governance in an Age of Genomics</em></a><em> (Princeton University Press) and, most recently, </em><a href="http://press.uchicago.edu/ucp/books/book/chicago/P/bo22726485.html"><em>The Postgenomic Condition: Ethics, Justice, Knowledge After the Genome</em></a><em> (University of Chicago Press).</em></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/16b91431/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Terra Nullius: Who Owns the Skies? w/ Julia Powles</title>
      <itunes:episode>51</itunes:episode>
      <podcast:episode>51</podcast:episode>
      <itunes:title>Terra Nullius: Who Owns the Skies? w/ Julia Powles</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">74ef9a29-7a36-4abd-be17-5cfabcb6b80a</guid>
      <link>https://share.transistor.fm/s/c11bb3ce</link>
      <description>
        <![CDATA[<p>This is our second Terra Nullius episode. As a reminder, this means ‘Nobody’s Land’ — an infamous legal fiction from the age of Empire. In this episode we ask: who owns the skies?</p><p>We get into it with law professor Julia Powles, who shares her research and perspective on the accelerating prospect of drone delivery companies taking over the skies. What? Yeah, we had the same reaction. In the future a drone could deliver your morning coffee to you in minutes, neighbors be damned. As ever, tech bros are solving the serious problems, with the obvious consequences of clogged skies, loud drone traffic overhead, and every coffee shop repurposed as a ghost kitchen.</p><p>What happens when companies get investment to build a product that no one asked for, but burdens everyone? How do you ‘zone’ the vastness of the skies? What are the environmental and public health impacts of yet more just-in-time delivery of things no one needs? And what are the tactics that companies use — e.g. characterising all consumers as insatiable addicts for convenience — to sell their paradigm-shifting technologies?</p><p>We want to do a third episode in the Terra Nullius series, on the sea. If you have anyone to recommend (perhaps yourself!) who knows anything about the world of <strong>subsea cables</strong>, please email <a href="mailto:pod@saysmaybe.com">pod@saysmaybe.com</a></p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://theconversation.com/when-it-comes-to-delivery-drones-the-government-is-selling-us-a-pipe-dream-experts-explain-the-real-costs-195361">When it Comes to Delivery Drones, the Government is Selling us a Pipe Dream</a> — from The Conversation</li><li><a href="https://royalsocietypublishing.org/doi/full/10.1098/rsta.2024.0107">Resisting technological inevitability: Google Wing’s delivery drones and the fight for our skies</a> — by Anna Zenz and Julia Powles</li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is our second Terra Nullius episode. As a reminder, this means ‘Nobody’s Land’ — an infamous legal fiction from the age of Empire. In this episode we ask: who owns the skies?</p><p>We get into it with law professor Julia Powles, who shares her research and perspective on the accelerating prospect of drone delivery companies taking over the skies. What? Yeah, we had the same reaction. In the future a drone could deliver your morning coffee to you in minutes, neighbors be damned. As ever, tech bros are solving the serious problems, with the obvious consequences of clogged skies, loud drone traffic overhead, and every coffee shop repurposed as a ghost kitchen.</p><p>What happens when companies get investment to build a product that no one asked for, but burdens everyone? How do you ‘zone’ the vastness of the skies? What are the environmental and public health impacts of yet more just-in-time delivery of things no one needs? And what are the tactics that companies use — e.g. characterising all consumers as insatiable addicts for convenience — to sell their paradigm-shifting technologies?</p><p>We want to do a third episode in the Terra Nullius series, on the sea. If you have anyone to recommend (perhaps yourself!) who knows anything about the world of <strong>subsea cables</strong>, please email <a href="mailto:pod@saysmaybe.com">pod@saysmaybe.com</a></p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://theconversation.com/when-it-comes-to-delivery-drones-the-government-is-selling-us-a-pipe-dream-experts-explain-the-real-costs-195361">When it Comes to Delivery Drones, the Government is Selling us a Pipe Dream</a> — from The Conversation</li><li><a href="https://royalsocietypublishing.org/doi/full/10.1098/rsta.2024.0107">Resisting technological inevitability: Google Wing’s delivery drones and the fight for our skies</a> — by Anna Zenz and Julia Powles</li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </content:encoded>
      <pubDate>Fri, 09 May 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/c11bb3ce/f2962d94.mp3" length="76089085" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>3168</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is our second Terra Nullius episode. As a reminder, this means ‘Nobody’s Land’ — an infamous legal fiction from the age of Empire. In this episode we ask: who owns the skies?</p><p>We get into it with law professor Julia Powles, who shares her research and perspective on the accelerating prospect of drone delivery companies taking over the skies. What? Yeah, we had the same reaction. In the future a drone could deliver your morning coffee to you in minutes, neighbors be damned. As ever, tech bros are solving the serious problems, with the obvious consequences of clogged skies, loud drone traffic overhead, and every coffee shop repurposed as a ghost kitchen.</p><p>What happens when companies get investment to build a product that no one asked for, but burdens everyone? How do you ‘zone’ the vastness of the skies? What are the environmental and public health impacts of yet more just-in-time delivery of things no one needs? And what are the tactics that companies use — e.g. characterising all consumers as insatiable addicts for convenience — to sell their paradigm-shifting technologies?</p><p>We want to do a third episode in the Terra Nullius series, on the sea. If you have anyone to recommend (perhaps yourself!) who knows anything about the world of <strong>subsea cables</strong>, please email <a href="mailto:pod@saysmaybe.com">pod@saysmaybe.com</a></p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://theconversation.com/when-it-comes-to-delivery-drones-the-government-is-selling-us-a-pipe-dream-experts-explain-the-real-costs-195361">When it Comes to Delivery Drones, the Government is Selling us a Pipe Dream</a> — from The Conversation</li><li><a href="https://royalsocietypublishing.org/doi/full/10.1098/rsta.2024.0107">Resisting technological inevitability: Google Wing’s delivery drones and the fight for our skies</a> — by Anna Zenz and Julia Powles</li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/c11bb3ce/transcript.vtt" type="text/vtt" rel="captions"/>
    </item>
    <item>
      <title>Terra Nullius: Who Owns Outer Space? w/ Heather Allansdottir</title>
      <itunes:episode>50</itunes:episode>
      <podcast:episode>50</podcast:episode>
      <itunes:title>Terra Nullius: Who Owns Outer Space? w/ Heather Allansdottir</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">08d789e8-640e-404f-a963-eeed241d6fc6</guid>
      <link>https://share.transistor.fm/s/033693b9</link>
      <description>
        <![CDATA[<p>This is the first in our series called Terra Nullius. Huh? It’s Latin for ‘Nobody’s Land’. We will be exploring how rules are made for contested territory. If a land belongs to no one, does that mean it’s just up for grabs?</p><p>This week we’re starting with outer space, speaking with an expert in space law, Heather Allansdottir. But why should we care about space when the planet we are standing on is falling to shreds?</p><p>Currently, outer space belongs to no one. We have an Outer Space Treaty, which was developed during the Cold War. But the treaty isn’t durable enough for a second generation of space exploration that includes private actors, not just nation states. Powerful companies, countries and individuals are in a desperate scramble to make it theirs. According to Heather, we have about a two-year window to enshrine outer space as a commons; otherwise it will fall to chaos actors and tech billionaires.</p><p>In our next Terra Nullius episode, we’ll be talking about governing the skies and the companies that think you want drone-delivered coffee to your backyard.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.linkedin.com/posts/dr-heather-katharine-allansdottir-66aa0826_registering-astrodottir-my-space-law-consultancy-activity-7233861348302995456-kjB8?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAABqYHqABpyxMKgWJJY2QgfPRYgYaDc5yAfI">Astrodottir</a> — Heather’s space law consultancy</li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Dr Heather Allansdottir is an academic of international law, focused on space law. She is the founder and director of the space sustainability consultancy Astrodottir, and the co-author of the forthcoming book New Perspectives in Outer Space Law (Springer 2025). She is deputy director of the LLB at Birkbeck University's Faculty of Law and a former Visiting Fellow at the Lauterpacht Centre for International Law at the University of Cambridge.</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is the first in our series called Terra Nullius. Huh? It’s Latin for ‘Nobody’s Land’. We will be exploring how rules are made for contested territory. If a land belongs to no one, does that mean it’s just up for grabs?</p><p>This week we’re starting with outer space, speaking with an expert in space law, Heather Allansdottir. But why should we care about space when the planet we are standing on is falling to shreds?</p><p>Currently, outer space belongs to no one. We have an Outer Space Treaty, which was developed during the Cold War. But the treaty isn’t durable enough for a second generation of space exploration that includes private actors, not just nation states. Powerful companies, countries and individuals are in a desperate scramble to make it theirs. According to Heather, we have about a two-year window to enshrine outer space as a commons; otherwise it will fall to chaos actors and tech billionaires.</p><p>In our next Terra Nullius episode, we’ll be talking about governing the skies and the companies that think you want drone-delivered coffee to your backyard.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.linkedin.com/posts/dr-heather-katharine-allansdottir-66aa0826_registering-astrodottir-my-space-law-consultancy-activity-7233861348302995456-kjB8?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAABqYHqABpyxMKgWJJY2QgfPRYgYaDc5yAfI">Astrodottir</a> — Heather’s space law consultancy</li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Dr Heather Allansdottir is an academic of international law, focused on space law. She is the founder and director of the space sustainability consultancy Astrodottir, and the co-author of the forthcoming book New Perspectives in Outer Space Law (Springer 2025). She is deputy director of the LLB at Birkbeck University's Faculty of Law and a former Visiting Fellow at the Lauterpacht Centre for International Law at the University of Cambridge.</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 02 May 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/033693b9/1648c825.mp3" length="70547863" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2937</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is the first in our series called Terra Nullius. Huh? It’s Latin for ‘Nobody’s Land’. We will be exploring how rules are made for contested territory. If a land belongs to no one, does that mean it’s just up for grabs?</p><p>This week we’re starting with outer space, speaking with an expert in space law, Heather Allansdottir. But why should we care about space when the planet we are standing on is falling to shreds?</p><p>Currently, outer space belongs to no one. We have an Outer Space Treaty, which was developed during the Cold War. But the treaty isn’t durable enough for a second generation of space exploration that includes private actors, not just nation states. Powerful companies, countries and individuals are in a desperate scramble to make it theirs. According to Heather, we have about a two-year window to enshrine outer space as a commons; otherwise it will fall to chaos actors and tech billionaires.</p><p>In our next Terra Nullius episode, we’ll be talking about governing the skies and the companies that think you want drone-delivered coffee to your backyard.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.linkedin.com/posts/dr-heather-katharine-allansdottir-66aa0826_registering-astrodottir-my-space-law-consultancy-activity-7233861348302995456-kjB8?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAABqYHqABpyxMKgWJJY2QgfPRYgYaDc5yAfI">Astrodottir</a> — Heather’s space law consultancy</li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Dr Heather Allansdottir is an academic of international law, focused on space law. She is the founder and director of the space sustainability consultancy Astrodottir, and the co-author of the forthcoming book New Perspectives in Outer Space Law (Springer 2025). She is deputy director of the LLB at Birkbeck University's Faculty of Law and a former Visiting Fellow at the Lauterpacht Centre for International Law at the University of Cambridge.</em></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/033693b9/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>How to (Actually) Keep Kids Safe Online w/ Kate Sim</title>
      <itunes:episode>49</itunes:episode>
      <podcast:episode>49</podcast:episode>
      <itunes:title>How to (Actually) Keep Kids Safe Online w/ Kate Sim</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">aebf04fd-e270-4872-aab1-14d7073530b2</guid>
      <link>https://share.transistor.fm/s/ce82cede</link>
      <description>
        <![CDATA[<p>Child safety is a fuzzy catch-all concept for our broader social anxieties, and it seems to be everywhere in our conversations about the internet. But child safety isn’t a new concept, and the way our politics focuses on the spectacle isn’t new either.</p><p>To help us unpack this is Kate Sim, who has over a decade of experience in sexual violence prevention and response and is currently the Director of the Children’s Online Safety and Privacy Research (COSPR) program at the University of Western Australia’s Tech &amp; Policy Lab. We discuss the growth of ‘child safety’ regulation around the world, and how it often conflates multiple topics: age-gating adult content, explicit attempts to harm children, national security, and even ‘family values’.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>A fact sheet with an ecosystem map, based on COSPR’s forthcoming paper on the CSAM detection ecosystem: <a href="https://bit.ly/cospr-collateral">https://bit.ly/cospr-collateral</a></li><li>On the CSAM bottleneck problem: <a href="https://doi.org/10.25740/pr592kc5483">https://doi.org/10.25740/pr592kc5483</a></li><li>IBCK episode on The Anxious Generation: <a href="https://pod.link/1651876897/episode/47a8aa95c83be96b044dcb3f4e43d158">https://pod.link/1651876897/episode/47a8aa95c83be96b044dcb3f4e43d158</a></li><li>Child psychology expert Candace Odgers debunking Jonathan Haidt’s claims in real time: <a href="https://tyde.virginia.edu/event/haidt-odgers/">https://tyde.virginia.edu/event/haidt-odgers/</a></li><li>A primer on client-side scanning and CSAM from Mitali Thakor: <a href="https://mit-serc.pubpub.org/pub/701yvdbh/release/2">https://mit-serc.pubpub.org/pub/701yvdbh/release/2</a></li><li>On effective CSA prevention and scalability: <a href="https://www.prevention.global/resources/read-full-scalability-report">https://www.prevention.global/resources/read-full-scalability-report</a></li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Dr. Kate Sim is the Director of the COSPR Program. She has over 14 years of experience in sexual violence prevention and response, having worked across community organizing, frontline support, government, academia, and industry in the US, UK, and South Korea. Her current research interests are Big Tech accountability, sexual violence, and children’s liberation. Most recently, she worked at Google, where she shaped product policy on a range of children's safety issues, including non-consensual intimate imagery, financial sextortion, grooming, and help-seeking journeys for people impacted by harmful sexual behaviors. Kate holds a PhD and MSc from the Oxford Internet Institute and a BA in Gender and Sexuality Studies from Harvard University.</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Child safety is a fuzzy catch-all concept for our broader social anxieties, and it seems to be everywhere in our conversations about the internet. But child safety isn’t a new concept, and the way our politics focuses on the spectacle isn’t new either.</p><p>To help us unpack this is Kate Sim, who has over a decade of experience in sexual violence prevention and response and is currently the Director of the Children’s Online Safety and Privacy Research (COSPR) program at the University of Western Australia’s Tech &amp; Policy Lab. We discuss the growth of ‘child safety’ regulation around the world, and how it often conflates multiple topics: age-gating adult content, explicit attempts to harm children, national security, and even ‘family values’.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>A fact sheet with an ecosystem map, based on COSPR’s forthcoming paper on the CSAM detection ecosystem: <a href="https://bit.ly/cospr-collateral">https://bit.ly/cospr-collateral</a></li><li>On the CSAM bottleneck problem: <a href="https://doi.org/10.25740/pr592kc5483">https://doi.org/10.25740/pr592kc5483</a></li><li>IBCK episode on The Anxious Generation: <a href="https://pod.link/1651876897/episode/47a8aa95c83be96b044dcb3f4e43d158">https://pod.link/1651876897/episode/47a8aa95c83be96b044dcb3f4e43d158</a></li><li>Child psychology expert Candace Odgers debunking Jonathan Haidt’s claims in real time: <a href="https://tyde.virginia.edu/event/haidt-odgers/">https://tyde.virginia.edu/event/haidt-odgers/</a></li><li>A primer on client-side scanning and CSAM from Mitali Thakor: <a href="https://mit-serc.pubpub.org/pub/701yvdbh/release/2">https://mit-serc.pubpub.org/pub/701yvdbh/release/2</a></li><li>On effective CSA prevention and scalability: <a href="https://www.prevention.global/resources/read-full-scalability-report">https://www.prevention.global/resources/read-full-scalability-report</a></li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Dr. Kate Sim is the Director of the COSPR Program. She has over 14 years of experience in sexual violence prevention and response, having worked across community organizing, frontline support, government, academia, and industry in the US, UK, and South Korea. Her current research interests are Big Tech accountability, sexual violence, and children’s liberation. Most recently, she worked at Google, where she shaped product policy on a range of children's safety issues, including non-consensual intimate imagery, financial sextortion, grooming, and help-seeking journeys for people impacted by harmful sexual behaviors. Kate holds a PhD and MSc from the Oxford Internet Institute and a BA in Gender and Sexuality Studies from Harvard University.</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 25 Apr 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/ce82cede/3d9e23d7.mp3" length="69859903" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2909</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Child safety is a fuzzy catch-all concept for our broader social anxieties, and it seems to be everywhere in our conversations about the internet. But child safety isn’t a new concept, and the way our politics focuses on the spectacle isn’t new either.</p><p>To help us unpack this is Kate Sim, who has over a decade of experience in sexual violence prevention and response and is currently the Director of the Children’s Online Safety and Privacy Research (COSPR) program at the University of Western Australia’s Tech &amp; Policy Lab. We discuss the growth of ‘child safety’ regulation around the world, and how it often conflates multiple topics: age-gating adult content, explicit attempts to harm children, national security, and even ‘family values’.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li>A fact sheet with an ecosystem map, based on COSPR’s forthcoming paper on the CSAM detection ecosystem: <a href="https://bit.ly/cospr-collateral">https://bit.ly/cospr-collateral</a></li><li>On the CSAM bottleneck problem: <a href="https://doi.org/10.25740/pr592kc5483">https://doi.org/10.25740/pr592kc5483</a></li><li>IBCK episode on The Anxious Generation: <a href="https://pod.link/1651876897/episode/47a8aa95c83be96b044dcb3f4e43d158">https://pod.link/1651876897/episode/47a8aa95c83be96b044dcb3f4e43d158</a></li><li>Child psychology expert Candace Odgers debunking Jonathan Haidt’s claims in real time: <a href="https://tyde.virginia.edu/event/haidt-odgers/">https://tyde.virginia.edu/event/haidt-odgers/</a></li><li>A primer on client-side scanning and CSAM from Mitali Thakor: <a href="https://mit-serc.pubpub.org/pub/701yvdbh/release/2">https://mit-serc.pubpub.org/pub/701yvdbh/release/2</a></li><li>On effective CSA prevention and scalability: <a href="https://www.prevention.global/resources/read-full-scalability-report">https://www.prevention.global/resources/read-full-scalability-report</a></li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Dr. Kate Sim is the Director of the COSPR Program. She has over 14 years of experience in sexual violence prevention and response, having worked across community organizing, frontline support, government, academia, and industry in the US, UK, and South Korea. Her current research interests are Big Tech accountability, sexual violence, and children’s liberation. Most recently, she worked at Google, where she shaped product policy on a range of children's safety issues, including non-consensual intimate imagery, financial sextortion, grooming, and help-seeking journeys for people impacted by harmful sexual behaviors. Kate holds a PhD and MSc from the Oxford Internet Institute and a BA in Gender and Sexuality Studies from Harvard University.</em></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Worker Power &amp; Big Tech Bossmen w/ David Seligman</title>
      <itunes:episode>48</itunes:episode>
      <podcast:episode>48</podcast:episode>
      <itunes:title>Worker Power &amp; Big Tech Bossmen w/ David Seligman</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f6acdf88-8a44-4c81-82ed-e53151a6eb0a</guid>
      <link>https://share.transistor.fm/s/24eebe5e</link>
      <description>
        <![CDATA[<p>This week Alix interviewed David Seligman, Executive Director of Towards Justice, to tell us more about how big tech companies act brazenly as legal bullies to extract wealth and power from the working class in the US. He makes a compelling case for the urgent need to re-orient our thinking about political power and organise against it.</p><p>We talk about legal devices like forced arbitration and monopolistic practices like algorithmic price fixing and wage suppression. And we dig into the existential issue of tech companies asserting more and more control over markets and people without taking any responsibility for the dominating role they play.</p><p><strong>Further reading &amp; resources</strong></p><ul><li>Towards Justice California drivers suit</li><li><a href="https://en.wikipedia.org/wiki/Eichmann_in_Jerusalem">Eichmann in Jerusalem: A Report on the Banality of Evil</a> by Hannah Arendt</li><li><a href="https://global.oup.com/academic/product/the-dual-state-9780198716204?cc=gb&amp;lang=en&amp;">The Dual State</a> by Ernst Fraenkel</li><li><a href="https://towardsjustice.org/wp-content/uploads/2025/02/Real-Surveillance-Prices-and-Wages-Report.pdf">Prohibiting Surveillance Prices and Wages</a> by Towards Justice</li><li><a href="https://www.classaction.org/media/gill-et-al-v-uber-technologies-inc-et-al.pdf">Gill v. Uber</a> — class action led by Towards Justice</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week Alix interviewed David Seligman, Executive Director of Towards Justice, to tell us more about how big tech companies act brazenly as legal bullies to extract wealth and power from the working class in the US. He makes a compelling case for the urgent need to re-orient our thinking about political power and organise against it.</p><p>We talk about legal devices like forced arbitration and monopolistic practices like algorithmic price fixing and wage suppression. And we dig into the existential issue of tech companies asserting more and more control over markets and people without taking any responsibility for the dominating role they play.</p><p><strong>Further reading &amp; resources</strong></p><ul><li>Towards Justice California drivers suit</li><li><a href="https://en.wikipedia.org/wiki/Eichmann_in_Jerusalem">Eichmann in Jerusalem: A Report on the Banality of Evil</a> by Hannah Arendt</li><li><a href="https://global.oup.com/academic/product/the-dual-state-9780198716204?cc=gb&amp;lang=en&amp;">The Dual State</a> by Ernst Fraenkel</li><li><a href="https://towardsjustice.org/wp-content/uploads/2025/02/Real-Surveillance-Prices-and-Wages-Report.pdf">Prohibiting Surveillance Prices and Wages</a> by Towards Justice</li><li><a href="https://www.classaction.org/media/gill-et-al-v-uber-technologies-inc-et-al.pdf">Gill v. Uber</a> — class action led by Towards Justice</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </content:encoded>
      <pubDate>Fri, 18 Apr 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/24eebe5e/9da735e0.mp3" length="66494661" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2768</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This week Alix interviewed David Seligman, Executive Director of Towards Justice, to tell us more about how big tech companies act brazenly as legal bullies to extract wealth and power from the working class in the US. He makes a compelling case for the urgent need to re-orient our thinking about political power and organise against it.</p><p>We talk about legal devices like forced arbitration and monopolistic practices like algorithmic price fixing and wage suppression. And we dig into the existential issue of tech companies asserting more and more control over markets and people without taking any responsibility for the dominating role they play.</p><p><strong>Further reading &amp; resources</strong></p><ul><li>Towards Justice California drivers suit</li><li><a href="https://en.wikipedia.org/wiki/Eichmann_in_Jerusalem">Eichmann in Jerusalem: A Report on the Banality of Evil</a> by Hannah Arendt</li><li><a href="https://global.oup.com/academic/product/the-dual-state-9780198716204?cc=gb&amp;lang=en&amp;">The Dual State</a> by Ernst Fraenkel</li><li><a href="https://towardsjustice.org/wp-content/uploads/2025/02/Real-Surveillance-Prices-and-Wages-Report.pdf">Prohibiting Surveillance Prices and Wages</a> by Towards Justice</li><li><a href="https://www.classaction.org/media/gill-et-al-v-uber-technologies-inc-et-al.pdf">Gill v. Uber</a> — class action led by Towards Justice</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>AI Can’t Fix This: Live in London</title>
      <itunes:episode>47</itunes:episode>
      <podcast:episode>47</podcast:episode>
      <itunes:title>AI Can’t Fix This: Live in London</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2c6f7d00-6bef-4e80-b749-772558d8fd9a</guid>
      <link>https://share.transistor.fm/s/bb232712</link>
      <description>
        <![CDATA[<p>Last week Alix was in London to talk UK politics and broligarchy with four amazing guests:</p><ul><li>Martha Dark from Foxglove gave us the history and implications of the NHS/Palantir partnership of horror</li><li>Matt Mahmoudi outlined the UK’s push to amp up facial recognition surveillance and to outlaw protests (seems good)</li><li>Seyi Akiwowo shared a retrospective of the development of the Online Safety Act — the UK’s online speech regulation meant to protect kids</li><li>Tanya O’Carroll did a victory lap, sharing details of her case against Facebook’s intrusive ad-targeting business model</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> for up-to-the-month opportunities to get involved!</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Last week Alix was in London to talk UK politics and broligarchy with four amazing guests:</p><ul><li>Martha Dark from Foxglove gave us the history and implications of the NHS/Palantir partnership of horror</li><li>Matt Mahmoudi outlined the UK’s push to amp up facial recognition surveillance and to outlaw protests (seems good)</li><li>Seyi Akiwowo shared a retrospective of the development of the Online Safety Act — the UK’s online speech regulation meant to protect kids</li><li>Tanya O’Carroll did a victory lap, sharing details of her case against Facebook’s intrusive ad-targeting business model</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> for up-to-the-month opportunities to get involved!</p>]]>
      </content:encoded>
      <pubDate>Fri, 11 Apr 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/bb232712/a3ae4031.mp3" length="109435331" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>4558</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Last week Alix was in London to talk UK politics and broligarchy with four amazing guests:</p><ul><li>Martha Dark from Foxglove gave us the history and implications of the NHS/Palantir partnership of horror</li><li>Matt Mahmoudi outlined the UK’s push to amp up facial recognition surveillance and to outlaw protests (seems good)</li><li>Seyi Akiwowo shared a retrospective of the development of the Online Safety Act — the UK’s online speech regulation meant to protect kids</li><li>Tanya O’Carroll did a victory lap, sharing details of her case against Facebook’s intrusive ad-targeting business model</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> for up-to-the-month opportunities to get involved!</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Technology Nationalism in India w/ Divij Joshi</title>
      <itunes:episode>46</itunes:episode>
      <podcast:episode>46</podcast:episode>
      <itunes:title>Technology Nationalism in India w/ Divij Joshi</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">1ab6167d-76d0-48ee-97dd-2bc8c68e71fd</guid>
      <link>https://share.transistor.fm/s/9b5dfdd0</link>
      <description>
        <![CDATA[<p>Amidst the scrambling of geopolitics, there is increasing conversation and momentum around the concept of tech sovereignty. It basically means that countries should build their own technology rather than rely on Silicon Valley. India Stack! Euro Stack! Everyone wants a stack.</p><p>In this episode we explore India’s work over the last 20 years to build ‘digital public infrastructure’ or DPI. They went YOLO on a digital ID system in a country of 1 billion people — with very mixed results. Did this ‘public infrastructure’ lead to locally-owned marketplaces? Nope! Has the fact that their PM is a Hindu nationalist limited India’s ability to tout this work on the global stage? Also nope! It’s actually allowed the government to techwash its authoritarianism.</p><p>Lots to unpack here, and fortunately, we’re joined by Divij Joshi, a researcher focused on the political economy of DPI, to explore India’s attempts at digital ID and government-as-a-platform.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a 
href="https://watermark.silverchair.com/inov_a_00056.pdf?token=AQECAHi208BE49Ooan9kkhW_Ercy7Dm3ZL_9Cf3qfKAc485ysgAAAyEwggMdBgkqhkiG9w0BBwagggMOMIIDCgIBADCCAwMGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMkE8ueUIlKuyni0QGAgEQgIIC1LfzOmL_2WsLTmJbF1gqJK4qx10wJiIyNxGYAuTBoGKz7VrcPNJXjmKQCioObY88jAHCQN7QNe--3Je5tUrZobZ4UYYsvXbGBT6sDLXCqk_f5Z-GsfgdZ9_ZXlEioXq_4bAImB2jM1BTyl8s8mBuEEYgdOtiHB-AqmkeEGVjRNDrbWjUojGCO8wa5M1PgWwH3OnRh092cN0WniBDRxThKsLDOUCOzG748G-7a01kfiEr2NvqdYNk4nI6y5BbLGWL1SqC3X6_IvmG-c2v99p6zCQSNNnXfXeDpS_9cNbkDE-fCwxFNq-41bzVzMVRQCs1GdC6iwlJ8yxP2DaGpzJJzraX_KDN2XGMiFmRlCj44ITNFpvAloX9CQ2JUs2hZ8VDgKQbg_XGDjEDqnUPDgoPlGIXpEi7rFnYj9cVsA91b2er2dG-NXEBZUBKuSmc_8duapryF5VlhB35MXGCvLSpiudeiBH-PLGEAbM64d6_BmaWw9wnkfzz_NflJbysCkX5kCoRTZ24GUDA89cxn3I0lJ1PldKHpS9Ex-Yk-RUDT1mDeJFfPfW4MGgHWYxKiWSwsQYf8tN2O2ElC69CDKZvb9hMGL0AYlQPMhbURRvJhnsKdsUB8B2Gwfdn1O2sQtC16JL9lwOgMiFMyNL1uKDDf78_hW_HrYyR-JnGr58f8wfFcCNNAYxPshnvfuO6G7YTIO6JvJQbILmcWzZSOysIUydu7_RQXGgNK9hsnwtpuKdib2lgNavrnyk5igaAuUBi3jGh_vhWcZuPrztf7lxsd2TQc5993rQdXcE6SWFnHXyi2KaJeclm2XZFf9ULTpGTGFnujCl8Fg_W8L_M9sapBSVJ6cNPJVL-AUHp8laj12v7wDNKwBwby-VaPgUFhpk2JBe_oFPSlSps6FQAwoSHerK02MtnbkcP24voYjTdmmHKfAPAMdHW3bmhGs2jOdSDe2hcVwg">Government as a Platform</a> by Tim O’Reilly</li><li><a href="https://www.notion.so/The-Global-DPI-Agenda-Promises-vs-Realities-in-the-Evolution-of-Digital-Public-Infrastructure-DPI-191557d32fa08072a585d79ea5d9da22?pvs=21">The Global DPI Agenda</a></li><li><a href="https://itforchange.net/sites/default/files/2025-02/Recovering_the_%E2%80%98public%E2%80%99_in_India%E2%80%99s_Digital_Public_Infrastructure_0.pdf">Recovering the ‘Public’ in India’s Digital Public Infrastructure Strategy</a> by IT for Change</li><li><a href="https://caravanmagazine.in/reportage/aadhaar-mixing-public-risk-private-profit">Aadhaar’s mixing of public risk and private profit</a> by Aria Thaker</li><li><a href="https://www.india-seminar.com/2020/731/731_divij_joshi.htm">Interrogating India’s quest for data sovereignty</a> by 
Divij Joshi</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><em>Divij is a Research Fellow at ODI Global and a Doctoral Researcher at UCL, where his research and advocacy focus on understanding the political economy and governance of emerging technologies to articulate a vision for a fair and just information society. His thesis examines how the emergence of 'Digital Public Infrastructures', as platform- and data-based information systems, is shaping notions of economic development and political subjectivity in India and globally.</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Amidst the scrambling of geopolitics, there is increasing conversation and momentum around the concept of tech sovereignty. It basically means that countries should build their own technology rather than rely on Silicon Valley. India Stack! Euro Stack! Everyone wants a stack.</p><p>In this episode we explore India’s work over the last 20 years to build ‘digital public infrastructure’ or DPI. They went YOLO on a digital ID system in a country of 1 billion people — with very mixed results. Did this ‘public infrastructure’ lead to locally-owned marketplaces? Nope! Has the fact that their PM is a Hindu nationalist limited India’s ability to tout this work on the global stage? Also nope! It’s actually allowed the government to techwash its authoritarianism.</p><p>Lots to unpack here, and fortunately, we’re joined by Divij Joshi, a researcher focused on the political economy of DPI, to explore India’s attempts at digital ID and government-as-a-platform.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a 
href="https://watermark.silverchair.com/inov_a_00056.pdf?token=AQECAHi208BE49Ooan9kkhW_Ercy7Dm3ZL_9Cf3qfKAc485ysgAAAyEwggMdBgkqhkiG9w0BBwagggMOMIIDCgIBADCCAwMGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMkE8ueUIlKuyni0QGAgEQgIIC1LfzOmL_2WsLTmJbF1gqJK4qx10wJiIyNxGYAuTBoGKz7VrcPNJXjmKQCioObY88jAHCQN7QNe--3Je5tUrZobZ4UYYsvXbGBT6sDLXCqk_f5Z-GsfgdZ9_ZXlEioXq_4bAImB2jM1BTyl8s8mBuEEYgdOtiHB-AqmkeEGVjRNDrbWjUojGCO8wa5M1PgWwH3OnRh092cN0WniBDRxThKsLDOUCOzG748G-7a01kfiEr2NvqdYNk4nI6y5BbLGWL1SqC3X6_IvmG-c2v99p6zCQSNNnXfXeDpS_9cNbkDE-fCwxFNq-41bzVzMVRQCs1GdC6iwlJ8yxP2DaGpzJJzraX_KDN2XGMiFmRlCj44ITNFpvAloX9CQ2JUs2hZ8VDgKQbg_XGDjEDqnUPDgoPlGIXpEi7rFnYj9cVsA91b2er2dG-NXEBZUBKuSmc_8duapryF5VlhB35MXGCvLSpiudeiBH-PLGEAbM64d6_BmaWw9wnkfzz_NflJbysCkX5kCoRTZ24GUDA89cxn3I0lJ1PldKHpS9Ex-Yk-RUDT1mDeJFfPfW4MGgHWYxKiWSwsQYf8tN2O2ElC69CDKZvb9hMGL0AYlQPMhbURRvJhnsKdsUB8B2Gwfdn1O2sQtC16JL9lwOgMiFMyNL1uKDDf78_hW_HrYyR-JnGr58f8wfFcCNNAYxPshnvfuO6G7YTIO6JvJQbILmcWzZSOysIUydu7_RQXGgNK9hsnwtpuKdib2lgNavrnyk5igaAuUBi3jGh_vhWcZuPrztf7lxsd2TQc5993rQdXcE6SWFnHXyi2KaJeclm2XZFf9ULTpGTGFnujCl8Fg_W8L_M9sapBSVJ6cNPJVL-AUHp8laj12v7wDNKwBwby-VaPgUFhpk2JBe_oFPSlSps6FQAwoSHerK02MtnbkcP24voYjTdmmHKfAPAMdHW3bmhGs2jOdSDe2hcVwg">Government as a Platform</a> by Tim O’Reilly</li><li><a href="https://www.notion.so/The-Global-DPI-Agenda-Promises-vs-Realities-in-the-Evolution-of-Digital-Public-Infrastructure-DPI-191557d32fa08072a585d79ea5d9da22?pvs=21">The Global DPI Agenda</a></li><li><a href="https://itforchange.net/sites/default/files/2025-02/Recovering_the_%E2%80%98public%E2%80%99_in_India%E2%80%99s_Digital_Public_Infrastructure_0.pdf">Recovering the ‘Public’ in India’s Digital Public Infrastructure Strategy</a> by IT for Change</li><li><a href="https://caravanmagazine.in/reportage/aadhaar-mixing-public-risk-private-profit">Aadhaar’s mixing of public risk and private profit</a> by Aria Thaker</li><li><a href="https://www.india-seminar.com/2020/731/731_divij_joshi.htm">Interrogating India’s quest for data sovereignty</a> by 
Divij Joshi</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><em>Divij is a Research Fellow at ODI Global and a Doctoral Researcher at UCL, where his research and advocacy focus on understanding the political economy and governance of emerging technologies to articulate a vision for a fair and just information society. His thesis examines how the emergence of 'Digital Public Infrastructures', as platform- and data-based information systems, is shaping notions of economic development and political subjectivity in India and globally.</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 04 Apr 2025 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/9b5dfdd0/3c077ccf.mp3" length="72620263" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>3024</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Amidst the scrambling of geopolitics, there is increasing conversation and momentum around the concept of tech sovereignty. It basically means that countries should build their own technology rather than rely on Silicon Valley. India Stack! Euro Stack! Everyone wants a stack.</p><p>In this episode we explore India’s work over the last 20 years to build ‘digital public infrastructure’ or DPI. They went YOLO on a digital ID system in a country of 1 billion people — with very mixed results. Did this ‘public infrastructure’ lead to locally-owned marketplaces? Nope! Has the fact that their PM is a Hindu nationalist limited India’s ability to tout this work on the global stage? Also nope! It’s actually allowed the government to techwash its authoritarianism.</p><p>Lots to unpack here, and fortunately, we’re joined by Divij Joshi, a researcher focused on the political economy of DPI, to explore India’s attempts at digital ID and government-as-a-platform.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a 
href="https://watermark.silverchair.com/inov_a_00056.pdf?token=AQECAHi208BE49Ooan9kkhW_Ercy7Dm3ZL_9Cf3qfKAc485ysgAAAyEwggMdBgkqhkiG9w0BBwagggMOMIIDCgIBADCCAwMGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMkE8ueUIlKuyni0QGAgEQgIIC1LfzOmL_2WsLTmJbF1gqJK4qx10wJiIyNxGYAuTBoGKz7VrcPNJXjmKQCioObY88jAHCQN7QNe--3Je5tUrZobZ4UYYsvXbGBT6sDLXCqk_f5Z-GsfgdZ9_ZXlEioXq_4bAImB2jM1BTyl8s8mBuEEYgdOtiHB-AqmkeEGVjRNDrbWjUojGCO8wa5M1PgWwH3OnRh092cN0WniBDRxThKsLDOUCOzG748G-7a01kfiEr2NvqdYNk4nI6y5BbLGWL1SqC3X6_IvmG-c2v99p6zCQSNNnXfXeDpS_9cNbkDE-fCwxFNq-41bzVzMVRQCs1GdC6iwlJ8yxP2DaGpzJJzraX_KDN2XGMiFmRlCj44ITNFpvAloX9CQ2JUs2hZ8VDgKQbg_XGDjEDqnUPDgoPlGIXpEi7rFnYj9cVsA91b2er2dG-NXEBZUBKuSmc_8duapryF5VlhB35MXGCvLSpiudeiBH-PLGEAbM64d6_BmaWw9wnkfzz_NflJbysCkX5kCoRTZ24GUDA89cxn3I0lJ1PldKHpS9Ex-Yk-RUDT1mDeJFfPfW4MGgHWYxKiWSwsQYf8tN2O2ElC69CDKZvb9hMGL0AYlQPMhbURRvJhnsKdsUB8B2Gwfdn1O2sQtC16JL9lwOgMiFMyNL1uKDDf78_hW_HrYyR-JnGr58f8wfFcCNNAYxPshnvfuO6G7YTIO6JvJQbILmcWzZSOysIUydu7_RQXGgNK9hsnwtpuKdib2lgNavrnyk5igaAuUBi3jGh_vhWcZuPrztf7lxsd2TQc5993rQdXcE6SWFnHXyi2KaJeclm2XZFf9ULTpGTGFnujCl8Fg_W8L_M9sapBSVJ6cNPJVL-AUHp8laj12v7wDNKwBwby-VaPgUFhpk2JBe_oFPSlSps6FQAwoSHerK02MtnbkcP24voYjTdmmHKfAPAMdHW3bmhGs2jOdSDe2hcVwg">Government as a Platform</a> by Tim O’Reilly</li><li><a href="https://www.notion.so/The-Global-DPI-Agenda-Promises-vs-Realities-in-the-Evolution-of-Digital-Public-Infrastructure-DPI-191557d32fa08072a585d79ea5d9da22?pvs=21">The Global DPI Agenda</a></li><li><a href="https://itforchange.net/sites/default/files/2025-02/Recovering_the_%E2%80%98public%E2%80%99_in_India%E2%80%99s_Digital_Public_Infrastructure_0.pdf">Recovering the ‘Public’ in India’s Digital Public Infrastructure Strategy</a> by IT for Change</li><li><a href="https://caravanmagazine.in/reportage/aadhaar-mixing-public-risk-private-profit">Aadhaar’s mixing of public risk and private profit</a> by Aria Thaker</li><li><a href="https://www.india-seminar.com/2020/731/731_divij_joshi.htm">Interrogating India’s quest for data sovereignty</a> by 
Divij Joshi</li></ul><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><em>Divij is a Research Fellow at ODI Global and a Doctoral Researcher at UCL, where his research and advocacy focus on understanding the political economy and governance of emerging technologies to articulate a vision for a fair and just information society. His thesis examines how the emergence of 'Digital Public Infrastructures', as platform- and data-based information systems, is shaping notions of economic development and political subjectivity in India and globally.</em></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>AI Assistant or AI Boss? w/ Data &amp; Society</title>
      <itunes:episode>45</itunes:episode>
      <podcast:episode>45</podcast:episode>
      <itunes:title>AI Assistant or AI Boss? w/ Data &amp; Society</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">869afee5-7751-4ee6-adbd-ee8250765401</guid>
      <link>https://share.transistor.fm/s/65ead492</link>
      <description>
        <![CDATA[<p>Two years ago, we were told that ‘prompt engineer’ would be a real job — well, it’s not. Is generative AI actually going to replace and transform human labour, or is this just another shallow marketing narrative?</p><p>This week Alix speaks with Aiha Nguyen and Alexandra Mateescu, who recently authored <a href="https://datasociety.net/library/generative-ai-and-labor/">Generative AI and Labor: Power, Hype, and Value at Work</a>. They discuss how automation is now being used as a threat against workers, and how certain types of labour are being devalued by AI — especially (shocking) traditionally feminised work, such as caregiving.</p><p><strong>Further reading:</strong></p><ul><li><a href="https://datasociety.net/library/generative-ai-and-labor/">Generative AI and Labor: Power, Hype, and Value at Work</a> by Aiha Nguyen and Alexandra Mateescu</li><li><a href="https://www.hachette.co.uk/titles/brian-merchant/blood-in-the-machine/9780316487740/">Blood in the Machine</a> by Brian Merchant</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em><a href="https://datasociety.net/people/nguyen-aiha/">Aiha Nguyen</a> is the Program Director for the Labor Futures Initiative at Data &amp; Society, where she guides research and engagement. She brings a practitioner's perspective to this role, having worked for over a decade in community and worker advocacy and organizing. Her research interests lie at the intersection of labor, technology, and urban studies. She is the author of The Constant Boss: Work Under Digital Surveillance and co-author of ‘At the Digital Doorstep: How Customers Use Doorbell Cameras to Manage Delivery Workers’ and ‘Generative AI and Labor: Power, Hype and Value at Work’.</em></p><p><em><a href="https://datasociety.net/people/mateescu-alexandra/">Alexandra Mateescu</a> is a researcher on the Labor Futures team at the Data &amp; Society Research Institute, where she investigates the impacts of digital surveillance, AI, and algorithmic power within the workplace. As an ethnographer, her past work has led her to explore the role of worker data and its commodification, the intersections of care labor and digital platforms, automation within service industries, and generative AI in creative industries. She is also a 2024-2025 Fellow at the Siegel Family Endowment.</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Two years ago, we were told that ‘prompt engineer’ would be a real job — well, it’s not. Is generative AI actually going to replace and transform human labour, or is this just another shallow marketing narrative?</p><p>This week Alix speaks with Aiha Nguyen and Alexandra Mateescu, who recently authored <a href="https://datasociety.net/library/generative-ai-and-labor/">Generative AI and Labor: Power, Hype, and Value at Work</a>. They discuss how automation is now being used as a threat against workers, and how certain types of labour are being devalued by AI — especially (shocking) traditionally feminised work, such as caregiving.</p><p><strong>Further reading:</strong></p><ul><li><a href="https://datasociety.net/library/generative-ai-and-labor/">Generative AI and Labor: Power, Hype, and Value at Work</a> by Aiha Nguyen and Alexandra Mateescu</li><li><a href="https://www.hachette.co.uk/titles/brian-merchant/blood-in-the-machine/9780316487740/">Blood in the Machine</a> by Brian Merchant</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em><a href="https://datasociety.net/people/nguyen-aiha/">Aiha Nguyen</a> is the Program Director for the Labor Futures Initiative at Data &amp; Society, where she guides research and engagement. She brings a practitioner's perspective to this role, having worked for over a decade in community and worker advocacy and organizing. Her research interests lie at the intersection of labor, technology, and urban studies. She is the author of The Constant Boss: Work Under Digital Surveillance and co-author of ‘At the Digital Doorstep: How Customers Use Doorbell Cameras to Manage Delivery Workers’ and ‘Generative AI and Labor: Power, Hype and Value at Work’.</em></p><p><em><a href="https://datasociety.net/people/mateescu-alexandra/">Alexandra Mateescu</a> is a researcher on the Labor Futures team at the Data &amp; Society Research Institute, where she investigates the impacts of digital surveillance, AI, and algorithmic power within the workplace. As an ethnographer, her past work has led her to explore the role of worker data and its commodification, the intersections of care labor and digital platforms, automation within service industries, and generative AI in creative industries. She is also a 2024-2025 Fellow at the Siegel Family Endowment.</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 28 Mar 2025 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/65ead492/e31e0885.mp3" length="62005171" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2581</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Two years ago, we were told that ‘prompt engineer’ would be a real job — well, it’s not. Is generative AI actually going to replace and transform human labour, or is this just another shallow marketing narrative?</p><p>This week Alix speaks with Aiha Nguyen and Alexandra Mateescu, who recently authored <a href="https://datasociety.net/library/generative-ai-and-labor/">Generative AI and Labor: Power, Hype, and Value at Work</a>. They discuss how automation is now being used as a threat against workers, and how certain types of labour are being devalued by AI — especially (shocking) traditionally feminised work, such as caregiving.</p><p><strong>Further reading:</strong></p><ul><li><a href="https://datasociety.net/library/generative-ai-and-labor/">Generative AI and Labor: Power, Hype, and Value at Work</a> by Aiha Nguyen and Alexandra Mateescu</li><li><a href="https://www.hachette.co.uk/titles/brian-merchant/blood-in-the-machine/9780316487740/">Blood in the Machine</a> by Brian Merchant</li></ul><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em><a href="https://datasociety.net/people/nguyen-aiha/">Aiha Nguyen</a> is the Program Director for the Labor Futures Initiative at Data &amp; Society, where she guides research and engagement. She brings a practitioner's perspective to this role, having worked for over a decade in community and worker advocacy and organizing. Her research interests lie at the intersection of labor, technology, and urban studies. She is the author of The Constant Boss: Work Under Digital Surveillance and co-author of ‘At the Digital Doorstep: How Customers Use Doorbell Cameras to Manage Delivery Workers’ and ‘Generative AI and Labor: Power, Hype and Value at Work’.</em></p><p><em><a href="https://datasociety.net/people/mateescu-alexandra/">Alexandra Mateescu</a> is a researcher on the Labor Futures team at the Data &amp; Society Research Institute, where she investigates the impacts of digital surveillance, AI, and algorithmic power within the workplace. As an ethnographer, her past work has led her to explore the role of worker data and its commodification, the intersections of care labor and digital platforms, automation within service industries, and generative AI in creative industries. She is also a 2024-2025 Fellow at the Siegel Family Endowment.</em></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Regulating Privacy in an AI Era w/ Carly Kind</title>
      <itunes:episode>44</itunes:episode>
      <podcast:episode>44</podcast:episode>
      <itunes:title>Regulating Privacy in an AI Era w/ Carly Kind</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">6bc0088f-c7c5-4614-bdd5-a1b2b74f9661</guid>
      <link>https://share.transistor.fm/s/5953b632</link>
      <description>
        <![CDATA[<p>This week Alix is speaking with her long-time friend and collaborator Carly Kind, who is now the Privacy Commissioner of Australia. Here’s something you may be embarrassed to ask: what does a privacy commissioner even do? We got you…</p><p>Alix and Carly discuss how privacy regs bump up against current trends in AI, how to incentivise compliance, and the limits of Australian privacy laws.</p><p><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</p><p><em>Carly Kind commenced as Australia’s Privacy Commissioner in February 2024 for a 5-year term. As Privacy Commissioner, she regulates the handling of personal information by entities covered by the Australian Privacy Act 1988 and seeks to influence the development of legislation and advance privacy protections for Australians. Ms Kind joined from the UK-based Ada Lovelace Institute, where she was the inaugural director. As a human rights lawyer and leading authority on the intersection of technology policy and human rights, she has advised industry, government and non-profit organisations on digital rights, artificial intelligence, privacy and data protection, and corporate accountability in the technology sphere.</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week Alix is speaking with her long-time friend and collaborator Carly Kind, who is now the privacy commissioner of Australia. Here’s something you may be embarrassed to ask: what does a privacy commissioner even do? We got you…</p><p>Alix and Carly will discuss how privacy regs bump up against current trends in AI, how to incentivise compliance, and the limits of Australian privacy laws.</p><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Carly Kind commenced as Australia’s Privacy Commissioner in February 2024 for a 5-year term. As Privacy Commissioner, she regulates the handling of personal information by entities covered by the Australian Privacy Act 1988 and seeks to influence the development of legislation and advance privacy protections for Australians. Ms Kind joined from the UK-based Ada Lovelace Institute, where she was the inaugural director. As a human rights lawyer and leading authority on the intersection of technology policy and human rights, she has advised industry, government and non-profit organisations on digital rights, artificial intelligence, privacy and data protection, and corporate accountability in the technology sphere.</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 21 Mar 2025 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/5953b632/18ba6d22.mp3" length="87574598" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>3647</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This week Alix is speaking with her long-time friend and collaborator Carly Kind, who is now the privacy commissioner of Australia. Here’s something you may be embarrassed to ask: what does a privacy commissioner even do? We got you…</p><p>Alix and Carly will discuss how privacy regs bump up against current trends in AI, how to incentivise compliance, and the limits of Australian privacy laws.</p><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Carly Kind commenced as Australia’s Privacy Commissioner in February 2024 for a 5-year term. As Privacy Commissioner, she regulates the handling of personal information by entities covered by the Australian Privacy Act 1988 and seeks to influence the development of legislation and advance privacy protections for Australians. Ms Kind joined from the UK-based Ada Lovelace Institute, where she was the inaugural director. As a human rights lawyer and leading authority on the intersection of technology policy and human rights, she has advised industry, government and non-profit organisations on digital rights, artificial intelligence, privacy and data protection, and corporate accountability in the technology sphere.</em></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Dogwhistles: Networked Transphobia Online</title>
      <itunes:episode>43</itunes:episode>
      <podcast:episode>43</podcast:episode>
      <itunes:title>Dogwhistles: Networked Transphobia Online</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">5364ca98-34ff-4660-9f4c-d660e12f35fd</guid>
      <link>https://share.transistor.fm/s/8e82a726</link>
      <description>
        <![CDATA[<p>This week producer Georgia joins Alix to discuss something huge that we’ve yet to go deep on: the prevalence of trans misogyny online. This episode is jam-packed with four amazing guests to guide us through this rough terrain:</p><ul><li><a href="https://shivanidave.co.uk/">Shivani Dave</a> is a journalist and commentator who uses social media for their career and income. They share their experiences with receiving hate online, and having to balance posting against hits to their mental health</li><li><a href="https://www.linkedin.com/in/alicehunsberger/">Alice Hunsberger</a> is a trust &amp; safety professional who’s worked at all levels of content moderation. She explains the technical complexities and limitations of moderating online spaces</li><li><a href="https://www.linkedin.com/in/jenniolsonsf/">Jenni Olson</a> is head of social media safety at GLAAD, and discusses the lack of transparency and care around platform content policies, allowing hateful dog whistles to proliferate</li><li><a href="https://emcousens.co.uk/">Dr Emily Cousens</a> is a professor at Northeastern who provides important context on the history of trans misogyny in the UK</li></ul><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.versobooks.com/en-gb/products/3054-a-short-history-of-trans-misogyny">A Short History of Trans Misogyny</a> by Jules Gill-Peterson</li><li><a href="https://gidmk.substack.com/p/gender-dysphoria-children-and-moral?r=xdwfd&amp;triedRedirect=true&amp;utm_medium=email&amp;_hsmi=345663145&amp;utm_content=345663145&amp;utm_source=hs_email">Debunking the Cass Review</a> by Gideon MK</li><li><a href="https://glaad.org/smsi/lgbtq-social-media-safety-program/">GLAAD Social Media Safety Program</a></li><li><a href="https://www.techpolicy.press/this-is-not-normal-metas-anti-lgbtq-makeover/">Meta’s Anti-LGBT Makeover</a> by Jenni Olson</li><li>Rapid Onset Gender Dysphoria by Maintenance Phase: parts <a href="https://open.spotify.com/episode/33Lf8Fbv3poEJqzGCyB0KA?si=TB4R-AHJTe-vRb2YOCghRg">ONE</a> and <a href="https://open.spotify.com/episode/6Dy5LmKei7i7JbnNhhl00q?si=sVSsB8itTiSEXechK1_lAQ">TWO</a></li><li><a href="https://www.everythinginmoderation.co/tag/t-s-insider/">T&amp;S Insider</a> by Alice Hunsberger</li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em><a href="https://shivanidave.co.uk/">Shivani Dave</a> (they/them) is a political commentator and journalist whose work focuses on human rights, science and technology. Shiv is one of the organisers of the London Dyke March and a regular collaborator with organisations including ACT UP London, Queer Night Pride, local TRA, London Trans+ Pride, and other more formal structures (THT, AKT, Trans+ History Week, LGBT+ History Month, NHS, <a href="https://blog.peoplevsbig.tech/">The People</a>). They have written for outlets including The Guardian, BBC News, and Metro. They have appeared on Good Morning Britain, Sky News, and Jeremy Vine on 5, among others. Shiv is driven by a passion for sharing the stories of marginalised and oppressed people around the world.</em></p><p><em><a href="https://www.linkedin.com/in/alicehunsberger/">Alice Goguen Hunsberger</a> is a Trust &amp; Safety leader with 20+ years of experience in content moderation, CX, and building safer online communities. She heads Trust &amp; Safety at Musubi Labs, an AI company specializing in T&amp;S services. Alice got her start in 2002, running a community forum and developing its first moderation guidelines. She later led T&amp;S and CX at OkCupid, helped guide Grindr through its IPO as VP of CX &amp; T&amp;S, and drove ethical outsourcing strategies as VP of T&amp;S at PartnerHero.</em></p><p><em><a href="https://www.linkedin.com/in/jenniolsonsf/">Jenni Olson</a> (she/her/TBD) is Senior Director of the <a href="https://glaad.org/smsi/lgbtq-social-media-safety-program/">Social Media Safety Program</a> at the national LGBTQ media advocacy organization GLAAD. A prominent voice in the field of tech accountability, Jenni leads GLAAD’s work to hold tech companies and social media platforms accountable, and to secure safe online spaces for LGBTQ people. The GLAAD <a href="https://glaad.org/smsi/lgbtq-social-media-safety-program/">Social Media Safety Program</a> researches, monitors, and reports on a variety of issues facing LGBTQ social media users. GLAAD’s annual <a href="https://www.glaad.org/smsi">Social Media Safety Index (SMSI)</a> report evaluates the major social media platforms on LGBTQ safety, privacy, and expression. Olson has worked in LGBTQ media and tech for decades and is best known as co-founder of <a href="http://PlanetOut.com">PlanetOut.com</a>, the first major LGBTQ community website, created by a small team of tech pioneers in 1996.</em></p><p><em><a href="https://emcousens.co.uk/">Dr Emily Cousens</a> (they/them) is Assistant Professor of Politics and International Relations at Northeastern University London and the UK lead for the Digital Transgender Archive. They are the author of Trans Feminist Epistemologies in the US Second Wave, published by Palgrave in 2023, and their expertise is in transfeminist philosophy and history.</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week producer Georgia joins Alix to discuss something huge that we’ve yet to go deep on: the prevalence of trans misogyny online. This episode is jam-packed with four amazing guests to guide us through this rough terrain:</p><ul><li><a href="https://shivanidave.co.uk/">Shivani Dave</a> is a journalist and commentator who uses social media for their career and income. They share their experiences with receiving hate online, and having to balance posting against hits to their mental health</li><li><a href="https://www.linkedin.com/in/alicehunsberger/">Alice Hunsberger</a> is a trust &amp; safety professional who’s worked at all levels of content moderation. She explains the technical complexities and limitations of moderating online spaces</li><li><a href="https://www.linkedin.com/in/jenniolsonsf/">Jenni Olson</a> is head of social media safety at GLAAD, and discusses the lack of transparency and care around platform content policies, allowing hateful dog whistles to proliferate</li><li><a href="https://emcousens.co.uk/">Dr Emily Cousens</a> is a professor at Northeastern who provides important context on the history of trans misogyny in the UK</li></ul><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.versobooks.com/en-gb/products/3054-a-short-history-of-trans-misogyny">A Short History of Trans Misogyny</a> by Jules Gill-Peterson</li><li><a href="https://gidmk.substack.com/p/gender-dysphoria-children-and-moral?r=xdwfd&amp;triedRedirect=true&amp;utm_medium=email&amp;_hsmi=345663145&amp;utm_content=345663145&amp;utm_source=hs_email">Debunking the Cass Review</a> by Gideon MK</li><li><a href="https://glaad.org/smsi/lgbtq-social-media-safety-program/">GLAAD Social Media Safety Program</a></li><li><a href="https://www.techpolicy.press/this-is-not-normal-metas-anti-lgbtq-makeover/">Meta’s Anti-LGBT Makeover</a> by Jenni Olson</li><li>Rapid Onset Gender Dysphoria by Maintenance Phase: parts <a href="https://open.spotify.com/episode/33Lf8Fbv3poEJqzGCyB0KA?si=TB4R-AHJTe-vRb2YOCghRg">ONE</a> and <a href="https://open.spotify.com/episode/6Dy5LmKei7i7JbnNhhl00q?si=sVSsB8itTiSEXechK1_lAQ">TWO</a></li><li><a href="https://www.everythinginmoderation.co/tag/t-s-insider/">T&amp;S Insider</a> by Alice Hunsberger</li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em><a href="https://shivanidave.co.uk/">Shivani Dave</a> (they/them) is a political commentator and journalist whose work focuses on human rights, science and technology. Shiv is one of the organisers of the London Dyke March and a regular collaborator with organisations including ACT UP London, Queer Night Pride, local TRA, London Trans+ Pride, and other more formal structures (THT, AKT, Trans+ History Week, LGBT+ History Month, NHS, <a href="https://blog.peoplevsbig.tech/">The People</a>). They have written for outlets including The Guardian, BBC News, and Metro. They have appeared on Good Morning Britain, Sky News, and Jeremy Vine on 5, among others. Shiv is driven by a passion for sharing the stories of marginalised and oppressed people around the world.</em></p><p><em><a href="https://www.linkedin.com/in/alicehunsberger/">Alice Goguen Hunsberger</a> is a Trust &amp; Safety leader with 20+ years of experience in content moderation, CX, and building safer online communities. She heads Trust &amp; Safety at Musubi Labs, an AI company specializing in T&amp;S services. Alice got her start in 2002, running a community forum and developing its first moderation guidelines. She later led T&amp;S and CX at OkCupid, helped guide Grindr through its IPO as VP of CX &amp; T&amp;S, and drove ethical outsourcing strategies as VP of T&amp;S at PartnerHero.</em></p><p><em><a href="https://www.linkedin.com/in/jenniolsonsf/">Jenni Olson</a> (she/her/TBD) is Senior Director of the <a href="https://glaad.org/smsi/lgbtq-social-media-safety-program/">Social Media Safety Program</a> at the national LGBTQ media advocacy organization GLAAD. A prominent voice in the field of tech accountability, Jenni leads GLAAD’s work to hold tech companies and social media platforms accountable, and to secure safe online spaces for LGBTQ people. The GLAAD <a href="https://glaad.org/smsi/lgbtq-social-media-safety-program/">Social Media Safety Program</a> researches, monitors, and reports on a variety of issues facing LGBTQ social media users. GLAAD’s annual <a href="https://www.glaad.org/smsi">Social Media Safety Index (SMSI)</a> report evaluates the major social media platforms on LGBTQ safety, privacy, and expression. Olson has worked in LGBTQ media and tech for decades and is best known as co-founder of <a href="http://PlanetOut.com">PlanetOut.com</a>, the first major LGBTQ community website, created by a small team of tech pioneers in 1996.</em></p><p><em><a href="https://emcousens.co.uk/">Dr Emily Cousens</a> (they/them) is Assistant Professor of Politics and International Relations at Northeastern University London and the UK lead for the Digital Transgender Archive. They are the author of Trans Feminist Epistemologies in the US Second Wave, published by Palgrave in 2023, and their expertise is in transfeminist philosophy and history.</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 14 Mar 2025 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/8e82a726/91606da6.mp3" length="75915809" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>3161</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This week producer Georgia joins Alix to discuss something huge that we’ve yet to go deep on: the prevalence of trans misogyny online. This episode is jam-packed with four amazing guests to guide us through this rough terrain:</p><ul><li><a href="https://shivanidave.co.uk/">Shivani Dave</a> is a journalist and commentator who uses social media for their career and income. They share their experiences with receiving hate online, and having to balance posting against hits to their mental health</li><li><a href="https://www.linkedin.com/in/alicehunsberger/">Alice Hunsberger</a> is a trust &amp; safety professional who’s worked at all levels of content moderation. She explains the technical complexities and limitations of moderating online spaces</li><li><a href="https://www.linkedin.com/in/jenniolsonsf/">Jenni Olson</a> is head of social media safety at GLAAD, and discusses the lack of transparency and care around platform content policies, allowing hateful dog whistles to proliferate</li><li><a href="https://emcousens.co.uk/">Dr Emily Cousens</a> is a professor at Northeastern who provides important context on the history of trans misogyny in the UK</li></ul><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.versobooks.com/en-gb/products/3054-a-short-history-of-trans-misogyny">A Short History of Trans Misogyny</a> by Jules Gill-Peterson</li><li><a href="https://gidmk.substack.com/p/gender-dysphoria-children-and-moral?r=xdwfd&amp;triedRedirect=true&amp;utm_medium=email&amp;_hsmi=345663145&amp;utm_content=345663145&amp;utm_source=hs_email">Debunking the Cass Review</a> by Gideon MK</li><li><a href="https://glaad.org/smsi/lgbtq-social-media-safety-program/">GLAAD Social Media Safety Program</a></li><li><a href="https://www.techpolicy.press/this-is-not-normal-metas-anti-lgbtq-makeover/">Meta’s Anti-LGBT Makeover</a> by Jenni Olson</li><li>Rapid Onset Gender Dysphoria by Maintenance Phase: parts <a href="https://open.spotify.com/episode/33Lf8Fbv3poEJqzGCyB0KA?si=TB4R-AHJTe-vRb2YOCghRg">ONE</a> and <a href="https://open.spotify.com/episode/6Dy5LmKei7i7JbnNhhl00q?si=sVSsB8itTiSEXechK1_lAQ">TWO</a></li><li><a href="https://www.everythinginmoderation.co/tag/t-s-insider/">T&amp;S Insider</a> by Alice Hunsberger</li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em><a href="https://shivanidave.co.uk/">Shivani Dave</a> (they/them) is a political commentator and journalist whose work focuses on human rights, science and technology. Shiv is one of the organisers of the London Dyke March and a regular collaborator with organisations including ACT UP London, Queer Night Pride, local TRA, London Trans+ Pride, and other more formal structures (THT, AKT, Trans+ History Week, LGBT+ History Month, NHS, <a href="https://blog.peoplevsbig.tech/">The People</a>). They have written for outlets including The Guardian, BBC News, and Metro. They have appeared on Good Morning Britain, Sky News, and Jeremy Vine on 5, among others. Shiv is driven by a passion for sharing the stories of marginalised and oppressed people around the world.</em></p><p><em><a href="https://www.linkedin.com/in/alicehunsberger/">Alice Goguen Hunsberger</a> is a Trust &amp; Safety leader with 20+ years of experience in content moderation, CX, and building safer online communities. She heads Trust &amp; Safety at Musubi Labs, an AI company specializing in T&amp;S services. Alice got her start in 2002, running a community forum and developing its first moderation guidelines. She later led T&amp;S and CX at OkCupid, helped guide Grindr through its IPO as VP of CX &amp; T&amp;S, and drove ethical outsourcing strategies as VP of T&amp;S at PartnerHero.</em></p><p><em><a href="https://www.linkedin.com/in/jenniolsonsf/">Jenni Olson</a> (she/her/TBD) is Senior Director of the <a href="https://glaad.org/smsi/lgbtq-social-media-safety-program/">Social Media Safety Program</a> at the national LGBTQ media advocacy organization GLAAD. A prominent voice in the field of tech accountability, Jenni leads GLAAD’s work to hold tech companies and social media platforms accountable, and to secure safe online spaces for LGBTQ people. The GLAAD <a href="https://glaad.org/smsi/lgbtq-social-media-safety-program/">Social Media Safety Program</a> researches, monitors, and reports on a variety of issues facing LGBTQ social media users. GLAAD’s annual <a href="https://www.glaad.org/smsi">Social Media Safety Index (SMSI)</a> report evaluates the major social media platforms on LGBTQ safety, privacy, and expression. Olson has worked in LGBTQ media and tech for decades and is best known as co-founder of <a href="http://PlanetOut.com">PlanetOut.com</a>, the first major LGBTQ community website, created by a small team of tech pioneers in 1996.</em></p><p><em><a href="https://emcousens.co.uk/">Dr Emily Cousens</a> (they/them) is Assistant Professor of Politics and International Relations at Northeastern University London and the UK lead for the Digital Transgender Archive. They are the author of Trans Feminist Epistemologies in the US Second Wave, published by Palgrave in 2023, and their expertise is in transfeminist philosophy and history.</em></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
    </item>
    <item>
      <title>VCs Are World Eaters w/ Catherine Bracy</title>
      <itunes:episode>42</itunes:episode>
      <podcast:episode>42</podcast:episode>
      <itunes:title>VCs Are World Eaters w/ Catherine Bracy</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f342d9ff-8742-46dc-9b94-623fc065e547</guid>
      <link>https://share.transistor.fm/s/b15cf4ef</link>
      <description>
        <![CDATA[<p>This week Alix interviewed Catherine Bracy about her book <a href="https://www.penguinrandomhouse.com/books/723091/world-eaters-by-catherine-bracy/"><em>World Eaters: How Venture Capital is Cannibalizing the Economy</em></a>. Support Catherine’s work and buy it NOW.</p><p>Venture capital wasn’t always the way it is today. But now it’s a driver of inequality, political and economic instability, and insufferable personalities. How did we get here, and what might come next?</p><p>In this conversation Catherine outlines her views on our current political moment and the role of VC in it. We’ve all got feelings about VCs, but in her book and in this conversation she forensically picks apart how venture capital works, why it doesn’t <em>really</em> work, and why that’s a problem for all of us.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.penguinrandomhouse.com/books/723091/world-eaters-by-catherine-bracy/">Buy Catherine’s book</a></li><li><a href="https://techequity.us/">TechEquity Collaborative</a></li></ul><p><em>Catherine Bracy is the Founder and CEO of TechEquity, an organization doing research and advocacy on issues at the intersection of tech and economic equity to ensure the tech industry’s products and practices create opportunity instead of inequality. She is also the author of World Eaters: How Venture Capital is Cannibalizing the Economy (Dutton, March 2025).</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week Alix interviewed Catherine Bracy about her book <a href="https://www.penguinrandomhouse.com/books/723091/world-eaters-by-catherine-bracy/"><em>World Eaters: How Venture Capital is Cannibalizing the Economy</em></a>. Support Catherine’s work and buy it NOW.</p><p>Venture capital wasn’t always the way it is today. But now it’s a driver of inequality, political and economic instability, and insufferable personalities. How did we get here, and what might come next?</p><p>In this conversation Catherine outlines her views on our current political moment and the role of VC in it. We’ve all got feelings about VCs, but in her book and in this conversation she forensically picks apart how venture capital works, why it doesn’t <em>really</em> work, and why that’s a problem for all of us.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.penguinrandomhouse.com/books/723091/world-eaters-by-catherine-bracy/">Buy Catherine’s book</a></li><li><a href="https://techequity.us/">TechEquity Collaborative</a></li></ul><p><em>Catherine Bracy is the Founder and CEO of TechEquity, an organization doing research and advocacy on issues at the intersection of tech and economic equity to ensure the tech industry’s products and practices create opportunity instead of inequality. She is also the author of World Eaters: How Venture Capital is Cannibalizing the Economy (Dutton, March 2025).</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 07 Mar 2025 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/b15cf4ef/ef1d80f7.mp3" length="68451488" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2850</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This week Alix interviewed Catherine Bracy about her book <a href="https://www.penguinrandomhouse.com/books/723091/world-eaters-by-catherine-bracy/"><em>World Eaters: How Venture Capital is Cannibalizing the Economy</em></a>. Support Catherine’s work and buy it NOW.</p><p>Venture capital wasn’t always the way it is today. But now it’s a driver of inequality, political and economic instability, and insufferable personalities. How did we get here, and what might come next?</p><p>In this conversation Catherine outlines her views on our current political moment and the role of VC in it. We’ve all got feelings about VCs, but in her book and in this conversation she forensically picks apart how venture capital works, why it doesn’t <em>really</em> work, and why that’s a problem for all of us.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.penguinrandomhouse.com/books/723091/world-eaters-by-catherine-bracy/">Buy Catherine’s book</a></li><li><a href="https://techequity.us/">TechEquity Collaborative</a></li></ul><p><em>Catherine Bracy is the Founder and CEO of TechEquity, an organization doing research and advocacy on issues at the intersection of tech and economic equity to ensure the tech industry’s products and practices create opportunity instead of inequality. She is also the author of World Eaters: How Venture Capital is Cannibalizing the Economy (Dutton, March 2025).</em></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Power Over Precision w/ Jenny Reardon</title>
      <itunes:episode>41</itunes:episode>
      <podcast:episode>41</podcast:episode>
      <itunes:title>Power Over Precision w/ Jenny Reardon</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">0e6536b1-6521-4255-a661-aafe4cc523f9</guid>
      <link>https://share.transistor.fm/s/4d8cb0b0</link>
      <description>
        <![CDATA[<p>Alix’s conversation this week is with Jenny Reardon, who shares with us the history of genomics — and the absolutely mind-melting parallels it has with the trajectory of the AI industry.</p><p>Jenny describes genomics as the industrialisation of genetics; it’s not just about understanding the genetic properties of humans, but mapping out every last inch of their genetic information so that it’s machine readable and scalable and — does this remind you of anything yet?</p><p>There are a disturbing number of parallels between AI and genomics: both have roots in military applications; both fields have been pumped up with money and compute; and there are, of course, huge conceptual overlaps with race science.</p><p><em>Jenny Reardon is a Professor of Sociology and the Founding Director of the Science and Justice Research Center at the University of California, Santa Cruz. Her research draws into focus questions about identity, justice and democracy that are often silently embedded in scientific ideas and practices. She is the author of </em><a href="https://press.princeton.edu/books/paperback/9780691118574/race-to-the-finish"><em>Race to the Finish: Identity and Governance in an Age of Genomics</em></a><em> (Princeton University Press) and, most recently, </em><a href="http://press.uchicago.edu/ucp/books/book/chicago/P/bo22726485.html"><em>The Postgenomic Condition: Ethics, Justice, Knowledge After the Genome</em></a><em> (University of Chicago Press).</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Alix’s conversation this week is with Jenny Reardon, who shares with us the history of genomics — and the absolutely mind-melting parallels it has with the trajectory of the AI industry.</p><p>Jenny describes genomics as the industrialisation of genetics; it’s not just about understanding the genetic properties of humans, but mapping out every last inch of their genetic information so that it’s machine readable and scalable and — does this remind you of anything yet?</p><p>There are a disturbing number of parallels between AI and genomics: both have roots in military applications; both fields have been pumped up with money and compute; and there are, of course, huge conceptual overlaps with race science.</p><p><em>Jenny Reardon is a Professor of Sociology and the Founding Director of the Science and Justice Research Center at the University of California, Santa Cruz. Her research draws into focus questions about identity, justice and democracy that are often silently embedded in scientific ideas and practices. She is the author of </em><a href="https://press.princeton.edu/books/paperback/9780691118574/race-to-the-finish"><em>Race to the Finish: Identity and Governance in an Age of Genomics</em></a><em> (Princeton University Press) and, most recently, </em><a href="http://press.uchicago.edu/ucp/books/book/chicago/P/bo22726485.html"><em>The Postgenomic Condition: Ethics, Justice, Knowledge After the Genome</em></a><em> (University of Chicago Press).</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 28 Feb 2025 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/4d8cb0b0/df9385f2.mp3" length="90351106" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>3763</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Alix’s conversation this week is with Jenny Reardon, who shares with us the history of genomics — and the absolutely mind-melting parallels it has with the trajectory of the AI industry.</p><p>Jenny describes genomics as the industrialisation of genetics; it’s not just about understanding the genetic properties of humans, but mapping out every last inch of their genetic information so that it’s machine readable and scalable and — does this remind you of anything yet?</p><p>There are a disturbing number of parallels between AI and genomics: both have roots in military applications; both fields have been pumped up with money and compute; and there are, of course, huge conceptual overlaps with race science.</p><p><em>Jenny Reardon is a Professor of Sociology and the Founding Director of the Science and Justice Research Center at the University of California, Santa Cruz. Her research draws into focus questions about identity, justice and democracy that are often silently embedded in scientific ideas and practices. She is the author of </em><a href="https://press.princeton.edu/books/paperback/9780691118574/race-to-the-finish"><em>Race to the Finish: Identity and Governance in an Age of Genomics</em></a><em> (Princeton University Press) and, most recently, </em><a href="http://press.uchicago.edu/ucp/books/book/chicago/P/bo22726485.html"><em>The Postgenomic Condition: Ethics, Justice, Knowledge After the Genome</em></a><em> (University of Chicago Press).</em></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>The Taiwan Bottleneck w/ Brian Chen </title>
      <itunes:episode>40</itunes:episode>
      <podcast:episode>40</podcast:episode>
      <itunes:title>The Taiwan Bottleneck w/ Brian Chen </itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d52cf204-b2fc-4766-8d69-0b8ba2ca0e1b</guid>
      <link>https://share.transistor.fm/s/1b4fdd1f</link>
      <description>
        <![CDATA[<p>Do you ever wonder how semiconductors (AKA chips) get made? Or why most of them are made in Taiwan? Or what this means for geopolitics?</p><p>Luckily, this is a podcast for nerds like you. Alix was joined this week by Brian Chen from Data &amp; Society, who systematically explains the process of advanced chip manufacture, how it’s thoroughly entangled in US economic policy, and how Taiwan’s place as the main artery for chips is the product of deep colonial infrastructures.</p><p><em>Brian J. Chen is the policy director of Data &amp; Society, leading the organization’s work to shape tech policy. With a background in movement lawyering and legislative and regulatory advocacy, he has worked extensively on issues of economic justice, political economy, and tech governance.</em></p><p><em>Previously, Brian led campaigns to strengthen the labor and employment rights of digital platform workers and other workers in precarious industries. Before that, he led programs to promote democratic accountability in policing, including community oversight over the adoption and use of police technologies.</em></p><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Do you ever wonder how semiconductors (AKA chips) get made? Or why most of them are made in Taiwan? Or what this means for geopolitics?</p><p>Luckily, this is a podcast for nerds like you. Alix was joined this week by Brian Chen from Data &amp; Society, who systematically explains the process of advanced chip manufacture, how it’s thoroughly entangled in US economic policy, and how Taiwan’s place as the main artery for chips is the product of deep colonial infrastructures.</p><p><em>Brian J. Chen is the policy director of Data &amp; Society, leading the organization’s work to shape tech policy. With a background in movement lawyering and legislative and regulatory advocacy, he has worked extensively on issues of economic justice, political economy, and tech governance.<br></em><br></p><p><em>Previously, Brian led campaigns to strengthen the labor and employment rights of digital platform workers and other workers in precarious industries. Before that, he led programs to promote democratic accountability in policing, including community oversight over the adoption and use of police technologies.<br></em><br></p><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </content:encoded>
      <pubDate>Fri, 21 Feb 2025 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/1b4fdd1f/d793f598.mp3" length="53574589" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2230</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Do you ever wonder how semiconductors (AKA chips) get made? Or why most of them are made in Taiwan? Or what this means for geopolitics?</p><p>Luckily, this is a podcast for nerds like you. Alix was joined this week by Brian Chen from Data &amp; Society, who systematically explains the process of advanced chip manufacture, how it’s thoroughly entangled in US economic policy, and how Taiwan’s place as the main artery for chips is the product of deep colonial infrastructures.</p><p><em>Brian J. Chen is the policy director of Data &amp; Society, leading the organization’s work to shape tech policy. With a background in movement lawyering and legislative and regulatory advocacy, he has worked extensively on issues of economic justice, political economy, and tech governance.<br></em><br></p><p><em>Previously, Brian led campaigns to strengthen the labor and employment rights of digital platform workers and other workers in precarious industries. Before that, he led programs to promote democratic accountability in policing, including community oversight over the adoption and use of police technologies.<br></em><br></p><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>AI Safety’s Spiral of Urgency w/ Shazeda Ahmed</title>
      <itunes:episode>39</itunes:episode>
      <podcast:episode>39</podcast:episode>
      <itunes:title>AI Safety’s Spiral of Urgency w/ Shazeda Ahmed</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2f7a5d33-ca3d-4635-bd41-855023ec3e04</guid>
      <link>https://share.transistor.fm/s/107865ff</link>
      <description>
        <![CDATA[<p>Are you tired of hearing the phrase ‘AI Safety’ and rolling your eyes? Do you also sometimes think… okay but what is <em>technically</em> wrong with advocating for ‘safer’ AI systems? Do you also wish we could have more nuanced conversations about China and AI?</p><p>In this episode Shazeda Ahmed goes deep on the field of AI Safety, explaining that it is a community that is propped up by its own spiral of reproduced urgency; and that so much of it is rooted in American anti-China sentiment. Read: the fear that the big scary authoritarian country will build AGI before the US does, and destroy us all.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.article19.org/wp-content/uploads/2021/01/ER-Tech-China-Report.pdf">Emotional Entanglement</a> — Article 19</li><li><a href="https://www.accessnow.org/wp-content/uploads/2023/10/Bodily-harms-mapping-the-risks-of-emerging-biometric-tech.pdf">Bodily Harms</a> by Xiaowei Wang and Shazeda Ahmed for Access Now</li><li><a href="https://firstmonday.org/ojs/index.php/fm/article/view/13626">Field-building and the epistemic culture of AI safety</a> — First Monday</li><li><a href="https://madeinchinajournal.com/">Made in China</a> journal</li><li><a href="https://pauseai.info/">Pause AI</a></li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Shazeda Ahmed is a Chancellor’s Postdoctoral fellow at the University of California, Los Angeles. Shazeda completed her Ph.D. at UC Berkeley’s School of Information in 2022, and was previously a postdoctoral research fellow at Princeton University’s Center for Information Technology Policy. 
She has been a research fellow at Upturn, the Mercator Institute for China Studies, the University of Toronto's Citizen Lab, Stanford University’s Human-Centered Artificial Intelligence (HAI) Institute, and NYU's AI Now Institute.<br></em><br></p><p><em>Shazeda’s research investigates relationships between the state, the firm, and society in the US-China geopolitical rivalry over AI, with implications for information technology policy and human rights. Her work draws from science and technology studies, ranging from her dissertation on the state-firm co-production of China’s social credit system, to her research on the epistemic culture of the emerging field of AI safety.</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Are you tired of hearing the phrase ‘AI Safety’ and rolling your eyes? Do you also sometimes think… okay but what is <em>technically</em> wrong with advocating for ‘safer’ AI systems? Do you also wish we could have more nuanced conversations about China and AI?</p><p>In this episode Shazeda Ahmed goes deep on the field of AI Safety, explaining that it is a community that is propped up by its own spiral of reproduced urgency; and that so much of it is rooted in American anti-China sentiment. Read: the fear that the big scary authoritarian country will build AGI before the US does, and destroy us all.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.article19.org/wp-content/uploads/2021/01/ER-Tech-China-Report.pdf">Emotional Entanglement</a> — Article 19</li><li><a href="https://www.accessnow.org/wp-content/uploads/2023/10/Bodily-harms-mapping-the-risks-of-emerging-biometric-tech.pdf">Bodily Harms</a> by Xiaowei Wang and Shazeda Ahmed for Access Now</li><li><a href="https://firstmonday.org/ojs/index.php/fm/article/view/13626">Field-building and the epistemic culture of AI safety</a> — First Monday</li><li><a href="https://madeinchinajournal.com/">Made in China</a> journal</li><li><a href="https://pauseai.info/">Pause AI</a></li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Shazeda Ahmed is a Chancellor’s Postdoctoral fellow at the University of California, Los Angeles. Shazeda completed her Ph.D. at UC Berkeley’s School of Information in 2022, and was previously a postdoctoral research fellow at Princeton University’s Center for Information Technology Policy. 
She has been a research fellow at Upturn, the Mercator Institute for China Studies, the University of Toronto's Citizen Lab, Stanford University’s Human-Centered Artificial Intelligence (HAI) Institute, and NYU's AI Now Institute.<br></em><br></p><p><em>Shazeda’s research investigates relationships between the state, the firm, and society in the US-China geopolitical rivalry over AI, with implications for information technology policy and human rights. Her work draws from science and technology studies, ranging from her dissertation on the state-firm co-production of China’s social credit system, to her research on the epistemic culture of the emerging field of AI safety.</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 14 Feb 2025 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/107865ff/e8ac320d.mp3" length="80581055" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>3355</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Are you tired of hearing the phrase ‘AI Safety’ and rolling your eyes? Do you also sometimes think… okay but what is <em>technically</em> wrong with advocating for ‘safer’ AI systems? Do you also wish we could have more nuanced conversations about China and AI?</p><p>In this episode Shazeda Ahmed goes deep on the field of AI Safety, explaining that it is a community that is propped up by its own spiral of reproduced urgency; and that so much of it is rooted in American anti-China sentiment. Read: the fear that the big scary authoritarian country will build AGI before the US does, and destroy us all.</p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.article19.org/wp-content/uploads/2021/01/ER-Tech-China-Report.pdf">Emotional Entanglement</a> — Article 19</li><li><a href="https://www.accessnow.org/wp-content/uploads/2023/10/Bodily-harms-mapping-the-risks-of-emerging-biometric-tech.pdf">Bodily Harms</a> by Xiaowei Wang and Shazeda Ahmed for Access Now</li><li><a href="https://firstmonday.org/ojs/index.php/fm/article/view/13626">Field-building and the epistemic culture of AI safety</a> — First Monday</li><li><a href="https://madeinchinajournal.com/">Made in China</a> journal</li><li><a href="https://pauseai.info/">Pause AI</a></li></ul><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em>Shazeda Ahmed is a Chancellor’s Postdoctoral fellow at the University of California, Los Angeles. Shazeda completed her Ph.D. at UC Berkeley’s School of Information in 2022, and was previously a postdoctoral research fellow at Princeton University’s Center for Information Technology Policy. 
She has been a research fellow at Upturn, the Mercator Institute for China Studies, the University of Toronto's Citizen Lab, Stanford University’s Human-Centered Artificial Intelligence (HAI) Institute, and NYU's AI Now Institute.<br></em><br></p><p><em>Shazeda’s research investigates relationships between the state, the firm, and society in the US-China geopolitical rivalry over AI, with implications for information technology policy and human rights. Her work draws from science and technology studies, ranging from her dissertation on the state-firm co-production of China’s social credit system, to her research on the epistemic culture of the emerging field of AI safety.</em></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Live Show: Paris Post-Mortem</title>
      <itunes:episode>38</itunes:episode>
      <podcast:episode>38</podcast:episode>
      <itunes:title>Live Show: Paris Post-Mortem</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ca856f23-35fe-4c62-8ed0-28b79de8ee8d</guid>
      <link>https://share.transistor.fm/s/ced46495</link>
      <description>
        <![CDATA[<p>Kapow! We just did our first ever LIVE SHOW. We barely had time to let the mics cool down before a bunch of you requested to have the recording on our pod feed so here we are.</p><p><strong>ICYMI</strong>: this is a recording from the live show that we did in Paris, right after the AI Action Summit. Alix sat down to have a candid conversation about the summit, and pontificate on what people might have meant when they kept saying ‘public interest AI’ over and over. She was joined by four of the best women in AI politics:</p><ul><li><a href="https://www.linkedin.com/in/astha-kapoor-020b4760/">Astha Kapoor</a>, Co-Founder of the Aapti Institute</li><li><a href="https://www.linkedin.com/in/amba-kak-48569b108/">Amba Kak</a>, Executive Director of the AI Now Institute</li><li><a href="https://www.linkedin.com/in/abeba-birhane-214b8426/">Abeba Birhane</a>, Founder &amp; Principal Investigator of the Artificial Intelligence Accountability Lab (AIAL)</li><li><a href="https://www.linkedin.com/in/nabiha-syed-3433014b/">Nabiha Syed</a>, Executive Director of Mozilla</li></ul><p>If audio is not enough for you, go ahead and <a href="https://www.youtube.com/live/9fdArtIvFYw?si=OuOp_UaIlJnzyHHg">watch the show on YouTube</a>.</p><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em><a href="https://www.linkedin.com/in/astha-kapoor-020b4760/">Astha Kapoor</a> is the Co-founder of Aapti Institute, a Bangalore-based research firm that works on the intersection of technology and society. She has 15 years of public policy and strategy consulting experience, with a focus on use of technology for welfare. Astha works on participative governance of data, and digital public infrastructure. 
She’s a member of World Economic Forum Global Future Council on data equity (2023-24), visiting fellow at the Ostrom Workshop (Indiana University). She was also a member of the Think20 taskforce on digital public infrastructure during India and Brazil's G20 presidency and is currently on the board of Global Partnership for Sustainable Data.</em></p><p><em><a href="https://www.linkedin.com/in/amba-kak-48569b108/">Amba Kak</a> has spent the last fifteen years designing and advocating for technology policy in the public interest, across government, industry, and civil society roles – and in many parts of the world. Amba brings this experience to her current role co-directing AI Now, a New York-based research institute where she leads on advancing diagnosis and actionable policy to tackle concerns with artificial intelligence and concentrated power. She has served as Senior Advisor on AI to the Federal Trade Commission and was recognized as one of TIME’s 100 Most Influential People in AI in 2024.</em></p><p><em><a href="https://www.linkedin.com/in/abeba-birhane-214b8426/">Dr. Abeba Birhane</a> founded and leads the TCD AI Accountability Lab (AIAL). Dr Birhane is currently a Research Fellow at the School of Computer Science and Statistics in Trinity College Dublin. Her research focuses on AI accountability, with a particular focus on audits of AI models and training datasets – work for which she was featured in <a href="https://www.wired.com/story/abeba-birhane-ai-datasets/">Wired UK</a> and TIME on the TIME100 Most Influential People in AI list in 2023. Dr. Birhane also served on the United Nations Secretary-General’s AI Advisory Body and currently serves on the AI Advisory Council in Ireland.</em></p><p><em><a href="https://www.linkedin.com/in/nabiha-syed-3433014b/">Nabiha Syed</a> is the Executive Director of the Mozilla Foundation, the global nonprofit that does everything from championing trustworthy AI to advocating for a more open, equitable internet. 
Prior to joining Mozilla, she was CEO of The Markup, an award-winning journalism non-profit that challenges technology to serve the public good. Before launching The Markup in 2020, Nabiha spent a decade as an acclaimed media lawyer focused on the intersection of frontier technology and newsgathering, including advising on publication issues with the Snowden revelations and the Steele Dossier, access litigation around police disciplinary records, and privacy and free speech issues globally. In 2023, Nabiha was awarded the NAACP/Archewell Digital Civil Rights Award for her work.</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Kapow! We just did our first ever LIVE SHOW. We barely had time to let the mics cool down before a bunch of you requested to have the recording on our pod feed so here we are.</p><p><strong>ICYMI</strong>: this is a recording from the live show that we did in Paris, right after the AI Action Summit. Alix sat down to have a candid conversation about the summit, and pontificate on what people might have meant when they kept saying ‘public interest AI’ over and over. She was joined by four of the best women in AI politics:</p><ul><li><a href="https://www.linkedin.com/in/astha-kapoor-020b4760/">Astha Kapoor</a>, Co-Founder of the Aapti Institute</li><li><a href="https://www.linkedin.com/in/amba-kak-48569b108/">Amba Kak</a>, Executive Director of the AI Now Institute</li><li><a href="https://www.linkedin.com/in/abeba-birhane-214b8426/">Abeba Birhane</a>, Founder &amp; Principal Investigator of the Artificial Intelligence Accountability Lab (AIAL)</li><li><a href="https://www.linkedin.com/in/nabiha-syed-3433014b/">Nabiha Syed</a>, Executive Director of Mozilla</li></ul><p>If audio is not enough for you, go ahead and <a href="https://www.youtube.com/live/9fdArtIvFYw?si=OuOp_UaIlJnzyHHg">watch the show on YouTube</a>.</p><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em><a href="https://www.linkedin.com/in/astha-kapoor-020b4760/">Astha Kapoor</a> is the Co-founder of Aapti Institute, a Bangalore-based research firm that works on the intersection of technology and society. She has 15 years of public policy and strategy consulting experience, with a focus on use of technology for welfare. Astha works on participative governance of data, and digital public infrastructure. 
She’s a member of World Economic Forum Global Future Council on data equity (2023-24), visiting fellow at the Ostrom Workshop (Indiana University). She was also a member of the Think20 taskforce on digital public infrastructure during India and Brazil's G20 presidency and is currently on the board of Global Partnership for Sustainable Data.</em></p><p><em><a href="https://www.linkedin.com/in/amba-kak-48569b108/">Amba Kak</a> has spent the last fifteen years designing and advocating for technology policy in the public interest, across government, industry, and civil society roles – and in many parts of the world. Amba brings this experience to her current role co-directing AI Now, a New York-based research institute where she leads on advancing diagnosis and actionable policy to tackle concerns with artificial intelligence and concentrated power. She has served as Senior Advisor on AI to the Federal Trade Commission and was recognized as one of TIME’s 100 Most Influential People in AI in 2024.</em></p><p><em><a href="https://www.linkedin.com/in/abeba-birhane-214b8426/">Dr. Abeba Birhane</a> founded and leads the TCD AI Accountability Lab (AIAL). Dr Birhane is currently a Research Fellow at the School of Computer Science and Statistics in Trinity College Dublin. Her research focuses on AI accountability, with a particular focus on audits of AI models and training datasets – work for which she was featured in <a href="https://www.wired.com/story/abeba-birhane-ai-datasets/">Wired UK</a> and TIME on the TIME100 Most Influential People in AI list in 2023. Dr. Birhane also served on the United Nations Secretary-General’s AI Advisory Body and currently serves on the AI Advisory Council in Ireland.</em></p><p><em><a href="https://www.linkedin.com/in/nabiha-syed-3433014b/">Nabiha Syed</a> is the Executive Director of the Mozilla Foundation, the global nonprofit that does everything from championing trustworthy AI to advocating for a more open, equitable internet. 
Prior to joining Mozilla, she was CEO of The Markup, an award-winning journalism non-profit that challenges technology to serve the public good. Before launching The Markup in 2020, Nabiha spent a decade as an acclaimed media lawyer focused on the intersection of frontier technology and newsgathering, including advising on publication issues with the Snowden revelations and the Steele Dossier, access litigation around police disciplinary records, and privacy and free speech issues globally. In 2023, Nabiha was awarded the NAACP/Archewell Digital Civil Rights Award for her work.</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 12 Feb 2025 12:35:22 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/ced46495/82805cc3.mp3" length="67225274" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2799</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Kapow! We just did our first ever LIVE SHOW. We barely had time to let the mics cool down before a bunch of you requested to have the recording on our pod feed so here we are.</p><p><strong>ICYMI</strong>: this is a recording from the live show that we did in Paris, right after the AI Action Summit. Alix sat down to have a candid conversation about the summit, and pontificate on what people might have meant when they kept saying ‘public interest AI’ over and over. She was joined by four of the best women in AI politics:</p><ul><li><a href="https://www.linkedin.com/in/astha-kapoor-020b4760/">Astha Kapoor</a>, Co-Founder of the Aapti Institute</li><li><a href="https://www.linkedin.com/in/amba-kak-48569b108/">Amba Kak</a>, Executive Director of the AI Now Institute</li><li><a href="https://www.linkedin.com/in/abeba-birhane-214b8426/">Abeba Birhane</a>, Founder &amp; Principal Investigator of the Artificial Intelligence Accountability Lab (AIAL)</li><li><a href="https://www.linkedin.com/in/nabiha-syed-3433014b/">Nabiha Syed</a>, Executive Director of Mozilla</li></ul><p>If audio is not enough for you, go ahead and <a href="https://www.youtube.com/live/9fdArtIvFYw?si=OuOp_UaIlJnzyHHg">watch the show on YouTube</a>.</p><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><em><a href="https://www.linkedin.com/in/astha-kapoor-020b4760/">Astha Kapoor</a> is the Co-founder of Aapti Institute, a Bangalore-based research firm that works on the intersection of technology and society. She has 15 years of public policy and strategy consulting experience, with a focus on use of technology for welfare. Astha works on participative governance of data, and digital public infrastructure. 
She’s a member of World Economic Forum Global Future Council on data equity (2023-24), visiting fellow at the Ostrom Workshop (Indiana University). She was also a member of the Think20 taskforce on digital public infrastructure during India and Brazil's G20 presidency and is currently on the board of Global Partnership for Sustainable Data.</em></p><p><em><a href="https://www.linkedin.com/in/amba-kak-48569b108/">Amba Kak</a> has spent the last fifteen years designing and advocating for technology policy in the public interest, across government, industry, and civil society roles – and in many parts of the world. Amba brings this experience to her current role co-directing AI Now, a New York-based research institute where she leads on advancing diagnosis and actionable policy to tackle concerns with artificial intelligence and concentrated power. She has served as Senior Advisor on AI to the Federal Trade Commission and was recognized as one of TIME’s 100 Most Influential People in AI in 2024.</em></p><p><em><a href="https://www.linkedin.com/in/abeba-birhane-214b8426/">Dr. Abeba Birhane</a> founded and leads the TCD AI Accountability Lab (AIAL). Dr Birhane is currently a Research Fellow at the School of Computer Science and Statistics in Trinity College Dublin. Her research focuses on AI accountability, with a particular focus on audits of AI models and training datasets – work for which she was featured in <a href="https://www.wired.com/story/abeba-birhane-ai-datasets/">Wired UK</a> and TIME on the TIME100 Most Influential People in AI list in 2023. Dr. Birhane also served on the United Nations Secretary-General’s AI Advisory Body and currently serves on the AI Advisory Council in Ireland.</em></p><p><em><a href="https://www.linkedin.com/in/nabiha-syed-3433014b/">Nabiha Syed</a> is the Executive Director of the Mozilla Foundation, the global nonprofit that does everything from championing trustworthy AI to advocating for a more open, equitable internet. 
Prior to joining Mozilla, she was CEO of The Markup, an award-winning journalism non-profit that challenges technology to serve the public good. Before launching The Markup in 2020, Nabiha spent a decade as an acclaimed media lawyer focused on the intersection of frontier technology and newsgathering, including advising on publication issues with the Snowden revelations and the Steele Dossier, access litigation around police disciplinary records, and privacy and free speech issues globally. In 2023, Nabiha was awarded the NAACP/Archewell Digital Civil Rights Award for her work.</em></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Defying Datafication w/ Dr Abeba Birhane (PLUS: Paris AI Action Summit)</title>
      <itunes:episode>37</itunes:episode>
      <podcast:episode>37</podcast:episode>
      <itunes:title>Defying Datafication w/ Dr Abeba Birhane (PLUS: Paris AI Action Summit)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">cf41ec96-1335-4ee1-a207-8c560a9ebb9b</guid>
      <link>https://share.transistor.fm/s/3ba77f44</link>
      <description>
        <![CDATA[<p>The Paris AI Action Summit is just around the corner! If you’re not going to be there, and you wish you were — we got you.</p><p><strong>We are streaming next week’s podcast LIVE from Paris on YouTube — </strong><a href="https://lu.ma/6jxa16jc"><strong>register here</strong></a><strong>🎙️<br></strong><br></p><p>On <strong>Tuesday, February 11th</strong>, at <strong>6:30pm Paris time / 12:30pm EST</strong>, we’ll be recording our <strong>first-ever LIVE podcast episode</strong>. After two days at the French AI Action Summit, Alix will sit down with four of the best women in AI politics to break down the power and politics of the Summit. It’s our <strong>Paris Post-Mortem</strong> — and we’re live-streaming the whole conversation.</p><p>We’ll hear from:</p><ul><li><a href="https://www.linkedin.com/in/astha-kapoor-020b4760/">Astha Kapoor</a>, Co-Founder of the Aapti Institute</li><li><a href="https://www.linkedin.com/in/amba-kak-48569b108/">Amba Kak</a>, Executive Director of the AI Now Institute</li><li><a href="https://www.linkedin.com/in/abeba-birhane-214b8426/">Abeba Birhane</a>, Founder &amp; Principal Investigator of the Artificial Intelligence Accountability Lab (AIAL)</li><li><a href="https://www.linkedin.com/in/nabiha-syed-3433014b/">Nabiha Syed</a>, Executive Director of Mozilla</li></ul><p>This is our <strong>first-ever live-streamed podcast</strong>, and we’d love a great community turnout. 
<a href="https://lu.ma/6jxa16jc">Join the stream on Tuesday</a> and share it with anyone else who wants a hot-off-the-press review of what happens in Paris.</p><p>And today’s episode is abundant with treats to prime you for the summit: Alix checks in with <a href="https://www.linkedin.com/in/martin-tisne/">Martin Tisne</a>, who is the special envoy to the Public Interest AI track, to ask him how he feels about the upcoming summit, and what he hopes it will achieve.</p><p>We also hear from <a href="https://www.linkedin.com/in/michelle-thorne-9a0289270/">Michelle Thorne</a> of the Green Web Foundation about a joint statement on the environmental impacts of AI that she’s hoping can focus the energy of the summit towards planetary limits and the decarbonisation of AI. Learn why and how she put this together, and how she’s hoping to start reasonable conversations about how AI is a complete and utter energy vampire.</p><p>Then we have Dr. Abeba Birhane — who will also be at our live show next week — to share her experiences launching the AI Accountability Lab at Trinity College in Dublin. Abeba’s work pushes to actually research AI systems before we make claims about them. In a world of industry marketing spin, Abeba is a voice of reason. 
As a cognitive scientist who studies people, she also cautions against the impossible and tantalising idea that we can somehow datafy human complexity.</p><p><strong>Further Reading &amp; Resources:</strong></p><ul><li><strong><a href="https://arxiv.org/abs/2401.14462">AI auditing: The Broken Bus on the Road to AI Accountability</a></strong> by <a href="https://arxiv.org/search/cs?searchtype=author&amp;query=Birhane,+A">Abeba Birhane</a>, <a href="https://arxiv.org/search/cs?searchtype=author&amp;query=Steed,+R">Ryan Steed</a>, <a href="https://arxiv.org/search/cs?searchtype=author&amp;query=Ojewale,+V">Victor Ojewale</a>, <a href="https://arxiv.org/search/cs?searchtype=author&amp;query=Vecchione,+B">Briana Vecchione</a>, <a href="https://arxiv.org/search/cs?searchtype=author&amp;query=Raji,+I+D">Inioluwa Deborah Raji</a></li><li><a href="https://aial.ie/">AI Accountability Lab</a></li><li><a href="https://www.tcd.ie/scss/news/2024/ai-accountability-lab/">Press release outlining the Lab’s launch last year</a> — Trinity College</li><li><a href="https://www.elysee.fr/en/sommet-pour-l-action-sur-l-ia">The Artificial Intelligence Action Summit</a></li><li><a href="https://pad.criticalinfralab.net/k4q_nLkITfmfaVHu7_sptQ?view#">Within Bounds: Limiting AI’s Environmental Impact</a> — led by Michelle Thorne from the Green Web Foundation</li><li><a href="https://www.youtube.com/channel/UCzhuxyB3ynjRFxa7izGg8Cg">Our YouTube Channel</a></li></ul><p>Dr Abeba Birhane founded and leads the TCD <a href="https://aial.ie/">AI Accountability Lab (AIAL)</a>. Dr Birhane is currently a Research Fellow at the School of Computer Science and Statistics in Trinity College Dublin. 
Her research focuses on AI accountability, with a particular focus on audits of AI models and training datasets – work for which she was featured in <a href="https://www.wired.com/story/abeba-birhane-ai-datasets/">Wired UK</a> and TIME on the <a href="https://time.com/6311323/how-we-chose-time100-ai/">TIME100 Most Influential People in AI</a> list in 2023. Dr. Birhane also served on the United Nations Secretary-General’s <a href="https://www.un.org/techenvoy/ai-advisory-body">AI Advisory Body</a> and currently serves on the AI Advisory Council in Ireland.</p><p>Martin Tisné is Thematic Envoy to the AI Action Summit, in charge of all deliverables related to Public Interest AI. He also leads the AI Collaborative, an initiative of The Omidyar Group created to help regulate artificial intelligence based on democratic values and principles and ensure the public has a voice in that regulation. He founded the Open Government Partnership (OGP) alongside the Obama White House and helped OGP grow to a 70+ country initiative. He also initiated the International Open Data Charter, the G7 Open Data Charter, and the G20’s commitment to open data principles.</p><p>Michelle Thorne (@thornet) is working towards a fossil-free internet as the Director of Strategy at the <a href="https://www.thegreenwebfoundation.org/">Green Web Foundation</a>. She’s a co-initiator of the Green Screen Coalition for digital rights and climate justice and a visiting professor at Northumbria University. Michelle publishes <a href="https://branch.climateaction.tech/"><em>Branch</em></a>, an online magazine written by and for people who dream about a sustainable internet, which received the <a href="https://calls.ars.electronica.art/prix/winners/8535/">Ars Electronica Award for Digital Humanities</a> in 2021.</p>]]>
      </description>
      <content:encoded>
<![CDATA[<p>The Paris AI Action Summit is just around the corner! If you’re not going to be there, and you wish you were — we got you.</p><p><strong>We are streaming next week’s podcast LIVE from Paris on YouTube — </strong><a href="https://lu.ma/6jxa16jc"><strong>register here</strong></a> 🎙️</p><p>On <strong>Tuesday, February 11th</strong>, at <strong>6:30pm Paris time / 12:30pm EST</strong>, we’ll be recording our <strong>first-ever LIVE podcast episode</strong>. After two days at the French AI Action Summit, Alix will sit down with four of the best women in AI politics to break down the power and politics of the Summit. It’s our <strong>Paris Post-Mortem</strong> — and we’re live-streaming the whole conversation.</p><p>We’ll hear from:</p><ul><li><a href="https://www.linkedin.com/in/astha-kapoor-020b4760/">Astha Kapoor</a>, Co-Founder of the Aapti Institute</li><li><a href="https://www.linkedin.com/in/amba-kak-48569b108/">Amba Kak</a>, Executive Director of the AI Now Institute</li><li><a href="https://www.linkedin.com/in/abeba-birhane-214b8426/">Abeba Birhane</a>, Founder &amp; Principal Investigator of the Artificial Intelligence Accountability Lab (AIAL)</li><li><a href="https://www.linkedin.com/in/nabiha-syed-3433014b/">Nabiha Syed</a>, Executive Director of Mozilla</li></ul><p>This is our <strong>first-ever live-streamed podcast</strong>, and we’d love a great community turnout. 
<a href="https://lu.ma/6jxa16jc">Join the stream on Tuesday</a> and share it with anyone else who wants a hot-off-the-press review of what happens in Paris.</p><p>And today’s episode is abundant with treats to prime you for the summit: Alix checks in with <a href="https://www.linkedin.com/in/martin-tisne/">Martin Tisné</a>, special envoy for the Public Interest AI track, to ask how he feels about the upcoming summit and what he hopes it will achieve.</p><p>We also hear from <a href="https://www.linkedin.com/in/michelle-thorne-9a0289270/">Michelle Thorne</a> of the Green Web Foundation about a joint statement on the environmental impacts of AI, which she hopes can focus the energy of the summit on planetary limits and the decarbonisation of AI. Learn why and how she put it together, and how she hopes to start reasonable conversations about how AI is a complete and utter energy vampire.</p><p>Then we have Dr Abeba Birhane — who will also be at our live show next week — to share her experiences launching the AI Accountability Lab at Trinity College Dublin. Abeba’s work pushes for actually researching AI systems before we make claims about them. In a world of industry marketing spin, Abeba is a voice of reason. 
As a cognitive scientist who studies people, she also cautions against the impossible and tantalising idea that we can somehow datafy human complexity.</p><p><strong>Further Reading &amp; Resources:</strong></p><ul><li><a href="https://arxiv.org/abs/2401.14462"><strong>AI auditing: The Broken Bus on the Road to AI Accountability</strong></a> by <a href="https://arxiv.org/search/cs?searchtype=author&amp;query=Birhane,+A">Abeba Birhane</a>, <a href="https://arxiv.org/search/cs?searchtype=author&amp;query=Steed,+R">Ryan Steed</a>, <a href="https://arxiv.org/search/cs?searchtype=author&amp;query=Ojewale,+V">Victor Ojewale</a>, <a href="https://arxiv.org/search/cs?searchtype=author&amp;query=Vecchione,+B">Briana Vecchione</a>, <a href="https://arxiv.org/search/cs?searchtype=author&amp;query=Raji,+I+D">Inioluwa Deborah Raji</a></li><li><a href="https://aial.ie/">AI Accountability Lab</a></li><li><a href="https://www.tcd.ie/scss/news/2024/ai-accountability-lab/">Press release outlining the Lab’s launch last year</a> — Trinity College</li><li><a href="https://www.elysee.fr/en/sommet-pour-l-action-sur-l-ia">The Artificial Intelligence Action Summit</a></li><li><a href="https://pad.criticalinfralab.net/k4q_nLkITfmfaVHu7_sptQ?view#">Within Bounds: Limiting AI’s Environmental Impact</a> — led by Michelle Thorne from the Green Web Foundation</li><li><a href="https://www.youtube.com/channel/UCzhuxyB3ynjRFxa7izGg8Cg">Our YouTube Channel</a></li></ul><p>Dr Abeba Birhane founded and leads the TCD <a href="https://aial.ie/">AI Accountability Lab (AIAL)</a>. Dr Birhane is currently a Research Fellow at the School of Computer Science and Statistics in Trinity College Dublin. 
Her research centres on AI accountability, with a particular focus on audits of AI models and training datasets – work for which she was featured in <a href="https://www.wired.com/story/abeba-birhane-ai-datasets/">Wired UK</a> and named by TIME to the <a href="https://time.com/6311323/how-we-chose-time100-ai/">TIME100 Most Influential People in AI</a> list in 2023. Dr Birhane also served on the United Nations Secretary-General’s <a href="https://www.un.org/techenvoy/ai-advisory-body">AI Advisory Body</a> and currently serves on the AI Advisory Council in Ireland.</p><p>Martin Tisné is Thematic Envoy to the AI Action Summit, in charge of all deliverables related to Public Interest AI. He also leads the AI Collaborative, an initiative of The Omidyar Group created to help regulate artificial intelligence based on democratic values and principles and to ensure the public has a voice in that regulation. He founded the Open Government Partnership (OGP) alongside the Obama White House and helped OGP grow into a 70+ country initiative. He also initiated the International Open Data Charter, the G7 Open Data Charter, and the G20’s commitment to open data principles.</p><p>Michelle Thorne (@thornet) is working towards a fossil-free internet as the Director of Strategy at the <a href="https://www.thegreenwebfoundation.org/">Green Web Foundation</a>. She’s a co-initiator of the Green Screen Coalition for digital rights and climate justice and a visiting professor at Northumbria University. Michelle publishes <a href="https://branch.climateaction.tech/"><em>Branch</em></a>, an online magazine written by and for people who dream about a sustainable internet, which received the <a href="https://calls.ars.electronica.art/prix/winners/8535/">Ars Electronica Award for Digital Humanities</a> in 2021.</p>]]>
      </content:encoded>
      <pubDate>Fri, 07 Feb 2025 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/3ba77f44/98596903.mp3" length="91877154" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>3826</itunes:duration>
      <itunes:summary>
<![CDATA[<p>The Paris AI Action Summit is just around the corner! If you’re not going to be there, and you wish you were — we got you.</p><p><strong>We are streaming next week’s podcast LIVE from Paris on YouTube — </strong><a href="https://lu.ma/6jxa16jc"><strong>register here</strong></a> 🎙️</p><p>On <strong>Tuesday, February 11th</strong>, at <strong>6:30pm Paris time / 12:30pm EST</strong>, we’ll be recording our <strong>first-ever LIVE podcast episode</strong>. After two days at the French AI Action Summit, Alix will sit down with four of the best women in AI politics to break down the power and politics of the Summit. It’s our <strong>Paris Post-Mortem</strong> — and we’re live-streaming the whole conversation.</p><p>We’ll hear from:</p><ul><li><a href="https://www.linkedin.com/in/astha-kapoor-020b4760/">Astha Kapoor</a>, Co-Founder of the Aapti Institute</li><li><a href="https://www.linkedin.com/in/amba-kak-48569b108/">Amba Kak</a>, Executive Director of the AI Now Institute</li><li><a href="https://www.linkedin.com/in/abeba-birhane-214b8426/">Abeba Birhane</a>, Founder &amp; Principal Investigator of the Artificial Intelligence Accountability Lab (AIAL)</li><li><a href="https://www.linkedin.com/in/nabiha-syed-3433014b/">Nabiha Syed</a>, Executive Director of Mozilla</li></ul><p>This is our <strong>first-ever live-streamed podcast</strong>, and we’d love a great community turnout. 
<a href="https://lu.ma/6jxa16jc">Join the stream on Tuesday</a> and share it with anyone else who wants a hot-off-the-press review of what happens in Paris.</p><p>And today’s episode is abundant with treats to prime you for the summit: Alix checks in with <a href="https://www.linkedin.com/in/martin-tisne/">Martin Tisné</a>, special envoy for the Public Interest AI track, to ask how he feels about the upcoming summit and what he hopes it will achieve.</p><p>We also hear from <a href="https://www.linkedin.com/in/michelle-thorne-9a0289270/">Michelle Thorne</a> of the Green Web Foundation about a joint statement on the environmental impacts of AI, which she hopes can focus the energy of the summit on planetary limits and the decarbonisation of AI. Learn why and how she put it together, and how she hopes to start reasonable conversations about how AI is a complete and utter energy vampire.</p><p>Then we have Dr Abeba Birhane — who will also be at our live show next week — to share her experiences launching the AI Accountability Lab at Trinity College Dublin. Abeba’s work pushes for actually researching AI systems before we make claims about them. In a world of industry marketing spin, Abeba is a voice of reason. 
As a cognitive scientist who studies people, she also cautions against the impossible and tantalising idea that we can somehow datafy human complexity.</p><p><strong>Further Reading &amp; Resources:</strong></p><ul><li><a href="https://arxiv.org/abs/2401.14462"><strong>AI auditing: The Broken Bus on the Road to AI Accountability</strong></a> by <a href="https://arxiv.org/search/cs?searchtype=author&amp;query=Birhane,+A">Abeba Birhane</a>, <a href="https://arxiv.org/search/cs?searchtype=author&amp;query=Steed,+R">Ryan Steed</a>, <a href="https://arxiv.org/search/cs?searchtype=author&amp;query=Ojewale,+V">Victor Ojewale</a>, <a href="https://arxiv.org/search/cs?searchtype=author&amp;query=Vecchione,+B">Briana Vecchione</a>, <a href="https://arxiv.org/search/cs?searchtype=author&amp;query=Raji,+I+D">Inioluwa Deborah Raji</a></li><li><a href="https://aial.ie/">AI Accountability Lab</a></li><li><a href="https://www.tcd.ie/scss/news/2024/ai-accountability-lab/">Press release outlining the Lab’s launch last year</a> — Trinity College</li><li><a href="https://www.elysee.fr/en/sommet-pour-l-action-sur-l-ia">The Artificial Intelligence Action Summit</a></li><li><a href="https://pad.criticalinfralab.net/k4q_nLkITfmfaVHu7_sptQ?view#">Within Bounds: Limiting AI’s Environmental Impact</a> — led by Michelle Thorne from the Green Web Foundation</li><li><a href="https://www.youtube.com/channel/UCzhuxyB3ynjRFxa7izGg8Cg">Our YouTube Channel</a></li></ul><p>Dr Abeba Birhane founded and leads the TCD <a href="https://aial.ie/">AI Accountability Lab (AIAL)</a>. Dr Birhane is currently a Research Fellow at the School of Computer Science and Statistics in Trinity College Dublin. 
Her research centres on AI accountability, with a particular focus on audits of AI models and training datasets – work for which she was featured in <a href="https://www.wired.com/story/abeba-birhane-ai-datasets/">Wired UK</a> and named by TIME to the <a href="https://time.com/6311323/how-we-chose-time100-ai/">TIME100 Most Influential People in AI</a> list in 2023. Dr Birhane also served on the United Nations Secretary-General’s <a href="https://www.un.org/techenvoy/ai-advisory-body">AI Advisory Body</a> and currently serves on the AI Advisory Council in Ireland.</p><p>Martin Tisné is Thematic Envoy to the AI Action Summit, in charge of all deliverables related to Public Interest AI. He also leads the AI Collaborative, an initiative of The Omidyar Group created to help regulate artificial intelligence based on democratic values and principles and to ensure the public has a voice in that regulation. He founded the Open Government Partnership (OGP) alongside the Obama White House and helped OGP grow into a 70+ country initiative. He also initiated the International Open Data Charter, the G7 Open Data Charter, and the G20’s commitment to open data principles.</p><p>Michelle Thorne (@thornet) is working towards a fossil-free internet as the Director of Strategy at the <a href="https://www.thegreenwebfoundation.org/">Green Web Foundation</a>. She’s a co-initiator of the Green Screen Coalition for digital rights and climate justice and a visiting professor at Northumbria University. Michelle publishes <a href="https://branch.climateaction.tech/"><em>Branch</em></a>, an online magazine written by and for people who dream about a sustainable internet, which received the <a href="https://calls.ars.electronica.art/prix/winners/8535/">Ars Electronica Award for Digital Humanities</a> in 2021.</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>DEI Season Finale: Part Two</title>
      <itunes:episode>36</itunes:episode>
      <podcast:episode>36</podcast:episode>
      <itunes:title>DEI Season Finale: Part Two</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">6addf87a-940c-46a6-9f94-500825b41296</guid>
      <link>https://share.transistor.fm/s/5f219b3a</link>
      <description>
<![CDATA[<p>This week Alix continues her conversation with Hanna McCloskey and Rubie Clarke from Fearless Futures, taking a whistle-stop tour of the past five years. We start in 2020, with the disingenuous but huge embrace of DEI work by tech companies, and end in 2025, when those same companies are part of massive movements actively campaigning against it.</p><p>The pair share what it was like running a DEI consultancy in the months and years following the murder of George Floyd — when DEI was suddenly on the agenda for a lot of organisations. The performative and ineffective methods that DEI is famous for (endless canapé receptions!) have also given the inevitable backlash easy pickings for mockery and vilification.</p><p>The news is happening so fast, but these DEI episodes can hopefully help listeners better understand the backlash — not just to DEI, but to any attempt to correct systemic inequity in society.</p><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.fearlessfutures.org/">Fearless Futures</a></li><li><a href="https://www.fearlessfutures.org/articles-cpt/dei-disrupted-the-blueprint-for-dei-worth-doing/">DEI Disrupted: The Blueprint for DEI Worth Doing</a></li><li><a href="https://en.wikipedia.org/wiki/Combahee_River_Collective">Combahee River Collective</a></li></ul><p>Rubie Eílis Clarke (she/her) is Senior Director of Consultancy at Fearless Futures. Rubie is of Jewish and Irish heritage and is based in her home town of London. As Senior Director of Consultancy, she supports ambitious organisations to diagnose inequity in their ecosystems and to design, implement and evaluate innovative anti-oppression solutions. 
Her expertise lies in critical social theory and research, policy analysis and organisational change strategy. She holds a B.A. in Sociology and Anthropology from Goldsmiths, University of London, and an M.A. in Global Political Economy from the University of Sussex, with a focus on social and economic policy, race critical theory, decoloniality and intersectional feminism. Rubie is also an expert facilitator who is skilled at leaning into nuance, complexity and discomfort with curiosity and compassion. She is passionate about facilitating collaborative learning journeys that build deep understanding of the root causes of oppression and unlock innovative and meaningful ways to disrupt and divest in service, ultimately, of collective liberation.</p><p>Hanna Naima McCloskey (she/her) is Algerian British and the Founder &amp; CEO of Fearless Futures. Before founding Fearless Futures she worked for the UN, NGOs and the Royal Bank of Scotland across communications, research and finance roles, and has lived, studied and worked in Israel-Palestine, Italy, the USA, Sudan, Syria and the UK. She has a BA in English from the University of Cambridge and an MA in International Relations from the Johns Hopkins School of Advanced International Studies, with a specialism in Conflict Management. Hanna is passionate, compassionate and challenging as an educator, and combines this with rigour and creativity in consultancy. She brings nuanced and complex ideas in incisive and engaging ways to all she supports, always with a commitment to equitable transformation. Hanna is also a qualified ABM bodyfeeding peer supporter, committed to enabling all parents to meet their bodyfeeding goals.</p>]]>
      </description>
      <content:encoded>
<![CDATA[<p>This week Alix continues her conversation with Hanna McCloskey and Rubie Clarke from Fearless Futures, taking a whistle-stop tour of the past five years. We start in 2020, with the disingenuous but huge embrace of DEI work by tech companies, and end in 2025, when those same companies are part of massive movements actively campaigning against it.</p><p>The pair share what it was like running a DEI consultancy in the months and years following the murder of George Floyd — when DEI was suddenly on the agenda for a lot of organisations. The performative and ineffective methods that DEI is famous for (endless canapé receptions!) have also given the inevitable backlash easy pickings for mockery and vilification.</p><p>The news is happening so fast, but these DEI episodes can hopefully help listeners better understand the backlash — not just to DEI, but to any attempt to correct systemic inequity in society.</p><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.fearlessfutures.org/">Fearless Futures</a></li><li><a href="https://www.fearlessfutures.org/articles-cpt/dei-disrupted-the-blueprint-for-dei-worth-doing/">DEI Disrupted: The Blueprint for DEI Worth Doing</a></li><li><a href="https://en.wikipedia.org/wiki/Combahee_River_Collective">Combahee River Collective</a></li></ul><p>Rubie Eílis Clarke (she/her) is Senior Director of Consultancy at Fearless Futures. Rubie is of Jewish and Irish heritage and is based in her home town of London. As Senior Director of Consultancy, she supports ambitious organisations to diagnose inequity in their ecosystems and to design, implement and evaluate innovative anti-oppression solutions. 
Her expertise lies in critical social theory and research, policy analysis and organisational change strategy. She holds a B.A. in Sociology and Anthropology from Goldsmiths, University of London, and an M.A. in Global Political Economy from the University of Sussex, with a focus on social and economic policy, race critical theory, decoloniality and intersectional feminism. Rubie is also an expert facilitator who is skilled at leaning into nuance, complexity and discomfort with curiosity and compassion. She is passionate about facilitating collaborative learning journeys that build deep understanding of the root causes of oppression and unlock innovative and meaningful ways to disrupt and divest in service, ultimately, of collective liberation.</p><p>Hanna Naima McCloskey (she/her) is Algerian British and the Founder &amp; CEO of Fearless Futures. Before founding Fearless Futures she worked for the UN, NGOs and the Royal Bank of Scotland across communications, research and finance roles, and has lived, studied and worked in Israel-Palestine, Italy, the USA, Sudan, Syria and the UK. She has a BA in English from the University of Cambridge and an MA in International Relations from the Johns Hopkins School of Advanced International Studies, with a specialism in Conflict Management. Hanna is passionate, compassionate and challenging as an educator, and combines this with rigour and creativity in consultancy. She brings nuanced and complex ideas in incisive and engaging ways to all she supports, always with a commitment to equitable transformation. Hanna is also a qualified ABM bodyfeeding peer supporter, committed to enabling all parents to meet their bodyfeeding goals.</p>]]>
      </content:encoded>
      <pubDate>Fri, 31 Jan 2025 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/5f219b3a/bb611660.mp3" length="64648536" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2691</itunes:duration>
      <itunes:summary>
<![CDATA[<p>This week Alix continues her conversation with Hanna McCloskey and Rubie Clarke from Fearless Futures, taking a whistle-stop tour of the past five years. We start in 2020, with the disingenuous but huge embrace of DEI work by tech companies, and end in 2025, when those same companies are part of massive movements actively campaigning against it.</p><p>The pair share what it was like running a DEI consultancy in the months and years following the murder of George Floyd — when DEI was suddenly on the agenda for a lot of organisations. The performative and ineffective methods that DEI is famous for (endless canapé receptions!) have also given the inevitable backlash easy pickings for mockery and vilification.</p><p>The news is happening so fast, but these DEI episodes can hopefully help listeners better understand the backlash — not just to DEI, but to any attempt to correct systemic inequity in society.</p><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.fearlessfutures.org/">Fearless Futures</a></li><li><a href="https://www.fearlessfutures.org/articles-cpt/dei-disrupted-the-blueprint-for-dei-worth-doing/">DEI Disrupted: The Blueprint for DEI Worth Doing</a></li><li><a href="https://en.wikipedia.org/wiki/Combahee_River_Collective">Combahee River Collective</a></li></ul><p>Rubie Eílis Clarke (she/her) is Senior Director of Consultancy at Fearless Futures. Rubie is of Jewish and Irish heritage and is based in her home town of London. As Senior Director of Consultancy, she supports ambitious organisations to diagnose inequity in their ecosystems and to design, implement and evaluate innovative anti-oppression solutions. 
Her expertise lies in critical social theory and research, policy analysis and organisational change strategy. She holds a B.A. in Sociology and Anthropology from Goldsmiths, University of London, and an M.A. in Global Political Economy from the University of Sussex, with a focus on social and economic policy, race critical theory, decoloniality and intersectional feminism. Rubie is also an expert facilitator who is skilled at leaning into nuance, complexity and discomfort with curiosity and compassion. She is passionate about facilitating collaborative learning journeys that build deep understanding of the root causes of oppression and unlock innovative and meaningful ways to disrupt and divest in service, ultimately, of collective liberation.</p><p>Hanna Naima McCloskey (she/her) is Algerian British and the Founder &amp; CEO of Fearless Futures. Before founding Fearless Futures she worked for the UN, NGOs and the Royal Bank of Scotland across communications, research and finance roles, and has lived, studied and worked in Israel-Palestine, Italy, the USA, Sudan, Syria and the UK. She has a BA in English from the University of Cambridge and an MA in International Relations from the Johns Hopkins School of Advanced International Studies, with a specialism in Conflict Management. Hanna is passionate, compassionate and challenging as an educator, and combines this with rigour and creativity in consultancy. She brings nuanced and complex ideas in incisive and engaging ways to all she supports, always with a commitment to equitable transformation. Hanna is also a qualified ABM bodyfeeding peer supporter, committed to enabling all parents to meet their bodyfeeding goals.</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>DEI Season Finale: Part One</title>
      <itunes:episode>35</itunes:episode>
      <podcast:episode>35</podcast:episode>
      <itunes:title>DEI Season Finale: Part One</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d226f9da-8311-402a-9d65-d8c84ef40511</guid>
      <link>https://share.transistor.fm/s/a02a2ff5</link>
      <description>
<![CDATA[<p>DEI is a nebulous field — if you’re not in it, it can be hard to know which tactics and methods are reasonable and effective… and which are a total waste of time. Or worse: which are actively harmful.</p><p>In this two-parter, Alix is joined by Hanna McCloskey and Rubie Clarke from Fearless Futures. In this episode they share what DEI is and, crucially, what it isn’t.</p><p>Listen to understand why unconscious bias training is a waste of time, and what meaningful anti-oppression work actually looks like — especially when attempting to embed these principles into digital products that are deployed globally.</p><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.fearlessfutures.org/">Fearless Futures</a></li><li><a href="https://www.fearlessfutures.org/articles-cpt/dei-disrupted-the-blueprint-for-dei-worth-doing/">DEI Disrupted: The Blueprint for DEI Worth Doing</a></li><li><a href="https://en.wikipedia.org/wiki/Combahee_River_Collective">Combahee River Collective</a></li></ul><p>Rubie Eílis Clarke (she/her) is Senior Director of Consultancy at Fearless Futures. Rubie is of Jewish and Irish heritage and is based in her home town of London. As Senior Director of Consultancy, she supports ambitious organisations to diagnose inequity in their ecosystems and to design, implement and evaluate innovative anti-oppression solutions. Her expertise lies in critical social theory and research, policy analysis and organisational change strategy. She holds a B.A. in Sociology and Anthropology from Goldsmiths, University of London, and an M.A. in Global Political Economy from the University of Sussex, with a focus on social and economic policy, race critical theory, decoloniality and intersectional feminism. 
Rubie is also an expert facilitator who is skilled at leaning into nuance, complexity and discomfort with curiosity and compassion. She is passionate about facilitating collaborative learning journeys that build deep understanding of the root causes of oppression and unlock innovative and meaningful ways to disrupt and divest in service, ultimately, of collective liberation.</p><p>Hanna Naima McCloskey (she/her) is Algerian British and the Founder &amp; CEO of Fearless Futures. Before founding Fearless Futures she worked for the UN, NGOs and the Royal Bank of Scotland across communications, research and finance roles, and has lived, studied and worked in Israel-Palestine, Italy, the USA, Sudan, Syria and the UK. She has a BA in English from the University of Cambridge and an MA in International Relations from the Johns Hopkins School of Advanced International Studies, with a specialism in Conflict Management. Hanna is passionate, compassionate and challenging as an educator, and combines this with rigour and creativity in consultancy. She brings nuanced and complex ideas in incisive and engaging ways to all she supports, always with a commitment to equitable transformation. Hanna is also a qualified ABM bodyfeeding peer supporter, committed to enabling all parents to meet their bodyfeeding goals.</p>]]>
      </description>
      <content:encoded>
<![CDATA[<p>DEI is a nebulous field — if you’re not in it, it can be hard to know which tactics and methods are reasonable and effective… and which are a total waste of time. Or worse: which are actively harmful.</p><p>In this two-parter, Alix is joined by Hanna McCloskey and Rubie Clarke from Fearless Futures. In this episode they share what DEI is and, crucially, what it isn’t.</p><p>Listen to understand why unconscious bias training is a waste of time, and what meaningful anti-oppression work actually looks like — especially when attempting to embed these principles into digital products that are deployed globally.</p><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p><strong>Further reading &amp; resources:</strong></p><ul><li><a href="https://www.fearlessfutures.org/">Fearless Futures</a></li><li><a href="https://www.fearlessfutures.org/articles-cpt/dei-disrupted-the-blueprint-for-dei-worth-doing/">DEI Disrupted: The Blueprint for DEI Worth Doing</a></li><li><a href="https://en.wikipedia.org/wiki/Combahee_River_Collective">Combahee River Collective</a></li></ul><p>Rubie Eílis Clarke (she/her) is Senior Director of Consultancy at Fearless Futures. Rubie is of Jewish and Irish heritage and is based in her home town of London. As Senior Director of Consultancy, she supports ambitious organisations to diagnose inequity in their ecosystems and to design, implement and evaluate innovative anti-oppression solutions. Her expertise lies in critical social theory and research, policy analysis and organisational change strategy. She holds a B.A. in Sociology and Anthropology from Goldsmiths, University of London, and an M.A. in Global Political Economy from the University of Sussex, with a focus on social and economic policy, race critical theory, decoloniality and intersectional feminism. 
Rubie is also an expert facilitator who is skilled at leaning into nuance, complexity and discomfort with curiosity and compassion. She is passionate about facilitating collaborative learning journeys that build deep understanding of the root causes of oppression and unlock innovative and meaningful ways to disrupt and divest in service, ultimately, of collective liberation.</p><p>Hanna Naima McCloskey (she/her) is Algerian British and the Founder &amp; CEO of Fearless Futures. Before founding Fearless Futures she worked for the UN, NGOs and the Royal Bank of Scotland across communications, research and finance roles, and has lived, studied and worked in Israel-Palestine, Italy, the USA, Sudan, Syria and the UK. She has a BA in English from the University of Cambridge and an MA in International Relations from the Johns Hopkins School of Advanced International Studies, with a specialism in Conflict Management. Hanna is passionate, compassionate and challenging as an educator, and combines this with rigour and creativity in consultancy. She brings nuanced and complex ideas in incisive and engaging ways to all she supports, always with a commitment to equitable transformation. Hanna is also a qualified ABM bodyfeeding peer supporter, committed to enabling all parents to meet their bodyfeeding goals.</p>]]>
      </content:encoded>
      <pubDate>Fri, 24 Jan 2025 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/a02a2ff5/3f77b361.mp3" length="67517976" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2811</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>DEI is a nebulous field — if you’re not in it, it can be hard to know which tactics and methods are reasonable and effective… and which are a total waste of time. Or worse: which are actively harmful.</p><p>In this two-parter, Alix is joined by Hanna McCloskey and Rubie Clarke from Fearless Futures. In this episode they share what DEI is and, crucially, what it isn’t.</p><p>Listen to understand why unconscious bias training is a waste of time, and what meaningful anti-oppression work actually looks like — especially when attempting to embed these principles into digital products that are deployed globally.</p><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p><p>Further reading &amp; resources:</p><ul><li><a href="https://www.fearlessfutures.org/">Fearless Futures</a></li><li><a href="https://www.fearlessfutures.org/articles-cpt/dei-disrupted-the-blueprint-for-dei-worth-doing/">DEI Disrupted: The Blueprint for DEI Worth Doing</a></li><li><a href="https://en.wikipedia.org/wiki/Combahee_River_Collective">Combahee River Collective</a></li></ul><p>Rubie Eílis Clarke (she/her) is Senior Director of Consultancy, Fearless Futures. Rubie is of Jewish and Irish heritage and is based in her home town of London. As Senior Director of Consultancy at Fearless Futures, Rubie supports ambitious organisations to diagnose inequity in their ecosystems and design, implement and evaluate innovative anti-oppression solutions. Her expertise lies in critical social theory and research, policy analysis and organisational change strategy. She holds a B.A. in Sociology and Anthropology from Goldsmiths University, London and an M.A. in Global Political Economy from the University of Sussex, with a focus on social and economic policy, Race critical theory, decoloniality and intersectional feminism. Rubie is also an expert facilitator who is skilled at leaning into nuance, complexity and discomfort with curiosity and compassion. She is passionate about facilitating collaborative learning journeys that build deep understanding of the root causes of oppression and unlock innovative and meaningful ways to disrupt and divest in service, ultimately, of collective liberation.</p><p>Hanna Naima McCloskey (she/her) is Founder and CEO, Fearless Futures. Hanna is Algerian British and the Founder &amp; CEO of Fearless Futures. Before founding Fearless Futures she worked for the UN, NGOs and the Royal Bank of Scotland, across communications, research and finance roles; and has lived, studied and worked in Israel-Palestine, Italy, USA, Sudan, Syria and the UK. She has a BA in English from the University of Cambridge and an MA in International Relations from the Johns Hopkins School of Advanced International Studies, with a specialism in Conflict Management. Hanna is passionate, compassionate and challenging as an educator and combines this with rigour and creativity in consultancy. She brings nuanced and complex ideas in incisive and engaging ways to all she supports, always with a commitment to equitable transformation. Hanna is also a qualified ABM bodyfeeding peer supporter, committed to enabling all parents to meet their body feeding goals.</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>DEI: the final season + Alex Kotran on the Future of Education</title>
      <itunes:episode>34</itunes:episode>
      <podcast:episode>34</podcast:episode>
      <itunes:title>DEI: the final season + Alex Kotran on the Future of Education</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7ad1b425-3f0b-4583-b272-f15dc8320a5b</guid>
      <link>https://share.transistor.fm/s/95bb33d7</link>
      <description>
        <![CDATA[<p>We have a special episode for you this week: we brought in Hanna McCloskey and Rubie Clarke from <a href="https://www.fearlessfutures.org/">Fearless Futures</a> to talk about the recent announcement from Mark Zuckerberg, which signalled, very strongly, that he doesn’t care about marginalised groups on his platforms — or within the company itself.</p><p>We hear from Rubie and Hanna in the first half of the episode — and they will be back with us over the next couple of weeks for a two-parter on DEI! The rest of the episode will feature Alex Kotran discussing the future of education.</p><p>What does the term ‘AI literacy’ invoke for you? A proficiency in AI tooling? For Alex Kotran, founder of The AI Education Project, it’s about preparing students to enter a rapidly changing workforce. It’s not just about learning how to use AI, but understanding how to build durable skills around it, and get on a career path that won’t disappear in five years.</p><p>Alex has some great perspectives on how AI tools will significantly narrow career paths for young people. This is an urgent issue that spans beyond basic AI literacy. It's about preparing students for a workforce that might look very different in five years to what it does today, and thinking holistically about how issues of tech procurement and efficiency intersect with times of economic downturn, such as a recession.</p><p>Further Reading:</p><ul><li><a href="https://www.aiedu.org/">The AI Education Project</a></li><li>The AIEDU’s <a href="https://www.aiedu.org/ai-readiness-framework">AI Readiness Framework</a></li></ul><p>Alex Kotran, CEO of The AI Education Project (aiEDU), has nearly a decade of AI expertise and more than a decade of political experience as a community organizer. He founded aiEDU in 2019 after he discovered that the Akron Public Schools, where his mom has taught for 30+ years, did not offer courses in AI use.</p><p>Previously, as Director of AI Ethics at H5, Alex partnered with NYU Law School and the National Judicial College to create a judicial training program that is now used around the world. He also established H5's first CSR function, incubating nonprofits like The Future Society, a leading AI governance institute.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>We have a special episode for you this week: we brought in Hanna McCloskey and Rubie Clarke from <a href="https://www.fearlessfutures.org/">Fearless Futures</a> to talk about the recent announcement from Mark Zuckerberg, which signalled, very strongly, that he doesn’t care about marginalised groups on his platforms — or within the company itself.</p><p>We hear from Rubie and Hanna in the first half of the episode — and they will be back with us over the next couple of weeks for a two-parter on DEI! The rest of the episode will feature Alex Kotran discussing the future of education.</p><p>What does the term ‘AI literacy’ invoke for you? A proficiency in AI tooling? For Alex Kotran, founder of The AI Education Project, it’s about preparing students to enter a rapidly changing workforce. It’s not just about learning how to use AI, but understanding how to build durable skills around it, and get on a career path that won’t disappear in five years.</p><p>Alex has some great perspectives on how AI tools will significantly narrow career paths for young people. This is an urgent issue that spans beyond basic AI literacy. It's about preparing students for a workforce that might look very different in five years to what it does today, and thinking holistically about how issues of tech procurement and efficiency intersect with times of economic downturn, such as a recession.</p><p>Further Reading:</p><ul><li><a href="https://www.aiedu.org/">The AI Education Project</a></li><li>The AIEDU’s <a href="https://www.aiedu.org/ai-readiness-framework">AI Readiness Framework</a></li></ul><p>Alex Kotran, CEO of The AI Education Project (aiEDU), has nearly a decade of AI expertise and more than a decade of political experience as a community organizer. He founded aiEDU in 2019 after he discovered that the Akron Public Schools, where his mom has taught for 30+ years, did not offer courses in AI use.</p><p>Previously, as Director of AI Ethics at H5, Alex partnered with NYU Law School and the National Judicial College to create a judicial training program that is now used around the world. He also established H5's first CSR function, incubating nonprofits like The Future Society, a leading AI governance institute.</p>]]>
      </content:encoded>
      <pubDate>Fri, 17 Jan 2025 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/95bb33d7/66289988.mp3" length="76980239" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>3205</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>We have a special episode for you this week: we brought in Hanna McCloskey and Rubie Clarke from <a href="https://www.fearlessfutures.org/">Fearless Futures</a> to talk about the recent announcement from Mark Zuckerberg, which signalled, very strongly, that he doesn’t care about marginalised groups on his platforms — or within the company itself.</p><p>We hear from Rubie and Hanna in the first half of the episode — and they will be back with us over the next couple of weeks for a two-parter on DEI! The rest of the episode will feature Alex Kotran discussing the future of education.</p><p>What does the term ‘AI literacy’ invoke for you? A proficiency in AI tooling? For Alex Kotran, founder of The AI Education Project, it’s about preparing students to enter a rapidly changing workforce. It’s not just about learning how to use AI, but understanding how to build durable skills around it, and get on a career path that won’t disappear in five years.</p><p>Alex has some great perspectives on how AI tools will significantly narrow career paths for young people. This is an urgent issue that spans beyond basic AI literacy. It's about preparing students for a workforce that might look very different in five years to what it does today, and thinking holistically about how issues of tech procurement and efficiency intersect with times of economic downturn, such as a recession.</p><p>Further Reading:</p><ul><li><a href="https://www.aiedu.org/">The AI Education Project</a></li><li>The AIEDU’s <a href="https://www.aiedu.org/ai-readiness-framework">AI Readiness Framework</a></li></ul><p>Alex Kotran, CEO of The AI Education Project (aiEDU), has nearly a decade of AI expertise and more than a decade of political experience as a community organizer. He founded aiEDU in 2019 after he discovered that the Akron Public Schools, where his mom has taught for 30+ years, did not offer courses in AI use.</p><p>Previously, as Director of AI Ethics at H5, Alex partnered with NYU Law School and the National Judicial College to create a judicial training program that is now used around the world. He also established H5's first CSR function, incubating nonprofits like The Future Society, a leading AI governance institute.</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>To be Seen and not Watched w/ Tawana Petty</title>
      <itunes:episode>33</itunes:episode>
      <podcast:episode>33</podcast:episode>
      <itunes:title>To be Seen and not Watched w/ Tawana Petty</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a4603845-0d28-4c3a-8965-7d3ec818f00c</guid>
      <link>https://share.transistor.fm/s/39814cdd</link>
      <description>
        <![CDATA[<p>Welcome back! Let us know what you think of the show and what you want to see more of in 2025 by <a href="https://tally.so/r/3E860B">writing in here</a>, or rambling into a microphone <a href="https://www.videoask.com/fkzsby9q5">here</a>.</p><p>In this episode Alix is joined by Tawana Petty, who shares her experiences coming up as a political community activist in Detroit. Tawana studied the history of radical black movements under Grace Lee Boggs, and has taken these learnings into her work today.</p><p>Listen to learn about how places like Detroit are used as testing grounds for new ‘innovations’ — especially within marginalised neighbourhoods. Tawana explains in detail how surveillance and safety are often mistakenly conflated, and how we have to work to unlearn this conflation.</p><p>Further reading:</p><ul><li>Our Data Bodies project: <a href="https://www.odbproject.org/">https://www.odbproject.org/</a></li><li>James and Grace Lee Boggs Center: <a href="https://www.boggscenter.org/">https://www.boggscenter.org/</a></li><li>The Detroit Community and Technology Project: <a href="https://detroitcommunitytech.org/">https://detroitcommunitytech.org/</a> who ran the <a href="https://detroitcommunitytech.org/eii/ds">digital stewards program</a></li><li>Detroit Digital Justice Coalition: <a href="https://alliedmedia.org/projects/detroit-digital-justice-coalition">https://alliedmedia.org/projects/detroit-digital-justice-coalition</a></li><li>We The People of Detroit: <a href="https://www.wethepeopleofdetroit.com/">https://www.wethepeopleofdetroit.com/</a></li></ul><p>Tawana Petty is a mother, social justice organizer, poet, author, and facilitator. She is the founding Executive Director of Petty Propolis, Inc., an artist incubator which teaches poetry, policy literacy and advocacy, and interrogates negative pervasive narratives, in pursuit of racial and environmental justice. 
Petty is a 2023-2025 Just Tech Fellow with the Social Science Research Council, a 2024 Rockwood National LIO Alum, and she currently serves on the CS (computer science) for Detroit Steering Committee. In 2021, Petty was named one of 100 Brilliant Women in AI Ethics. In 2023, she was honored with the AI Policy Leader in Civil Society Award by the Center for AI and Digital Policy, the Ava Jo Silent Shero Award by the Michigan Roundtable for Diversity and Inclusion, and with a Racial Justice Leadership Award by the Detroit People's Platform. In 2024, Petty was listed on Business Insider’s AI Power List for Policy and Ethics.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Welcome back! Let us know what you think of the show and what you want to see more of in 2025 by <a href="https://tally.so/r/3E860B">writing in here</a>, or rambling into a microphone <a href="https://www.videoask.com/fkzsby9q5">here</a>.</p><p>In this episode Alix is joined by Tawana Petty, who shares her experiences coming up as a political community activist in Detroit. Tawana studied the history of radical black movements under Grace Lee Boggs, and has taken these learnings into her work today.</p><p>Listen to learn about how places like Detroit are used as testing grounds for new ‘innovations’ — especially within marginalised neighbourhoods. Tawana explains in detail how surveillance and safety are often mistakenly conflated, and how we have to work to unlearn this conflation.</p><p>Further reading:</p><ul><li>Our Data Bodies project: <a href="https://www.odbproject.org/">https://www.odbproject.org/</a></li><li>James and Grace Lee Boggs Center: <a href="https://www.boggscenter.org/">https://www.boggscenter.org/</a></li><li>The Detroit Community and Technology Project: <a href="https://detroitcommunitytech.org/">https://detroitcommunitytech.org/</a> who ran the <a href="https://detroitcommunitytech.org/eii/ds">digital stewards program</a></li><li>Detroit Digital Justice Coalition: <a href="https://alliedmedia.org/projects/detroit-digital-justice-coalition">https://alliedmedia.org/projects/detroit-digital-justice-coalition</a></li><li>We The People of Detroit: <a href="https://www.wethepeopleofdetroit.com/">https://www.wethepeopleofdetroit.com/</a></li></ul><p>Tawana Petty is a mother, social justice organizer, poet, author, and facilitator. She is the founding Executive Director of Petty Propolis, Inc., an artist incubator which teaches poetry, policy literacy and advocacy, and interrogates negative pervasive narratives, in pursuit of racial and environmental justice. 
Petty is a 2023-2025 Just Tech Fellow with the Social Science Research Council, a 2024 Rockwood National LIO Alum, and she currently serves on the CS (computer science) for Detroit Steering Committee. In 2021, Petty was named one of 100 Brilliant Women in AI Ethics. In 2023, she was honored with the AI Policy Leader in Civil Society Award by the Center for AI and Digital Policy, the Ava Jo Silent Shero Award by the Michigan Roundtable for Diversity and Inclusion, and with a Racial Justice Leadership Award by the Detroit People's Platform. In 2024, Petty was listed on Business Insider’s AI Power List for Policy and Ethics.</p>]]>
      </content:encoded>
      <pubDate>Fri, 10 Jan 2025 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/39814cdd/2d6711d4.mp3" length="65804917" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2739</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Welcome back! Let us know what you think of the show and what you want to see more of in 2025 by <a href="https://tally.so/r/3E860B">writing in here</a>, or rambling into a microphone <a href="https://www.videoask.com/fkzsby9q5">here</a>.</p><p>In this episode Alix is joined by Tawana Petty, who shares her experiences coming up as a political community activist in Detroit. Tawana studied the history of radical black movements under Grace Lee Boggs, and has taken these learnings into her work today.</p><p>Listen to learn about how places like Detroit are used as testing grounds for new ‘innovations’ — especially within marginalised neighbourhoods. Tawana explains in detail how surveillance and safety are often mistakenly conflated, and how we have to work to unlearn this conflation.</p><p>Further reading:</p><ul><li>Our Data Bodies project: <a href="https://www.odbproject.org/">https://www.odbproject.org/</a></li><li>James and Grace Lee Boggs Center: <a href="https://www.boggscenter.org/">https://www.boggscenter.org/</a></li><li>The Detroit Community and Technology Project: <a href="https://detroitcommunitytech.org/">https://detroitcommunitytech.org/</a> who ran the <a href="https://detroitcommunitytech.org/eii/ds">digital stewards program</a></li><li>Detroit Digital Justice Coalition: <a href="https://alliedmedia.org/projects/detroit-digital-justice-coalition">https://alliedmedia.org/projects/detroit-digital-justice-coalition</a></li><li>We The People of Detroit: <a href="https://www.wethepeopleofdetroit.com/">https://www.wethepeopleofdetroit.com/</a></li></ul><p>Tawana Petty is a mother, social justice organizer, poet, author, and facilitator. She is the founding Executive Director of Petty Propolis, Inc., an artist incubator which teaches poetry, policy literacy and advocacy, and interrogates negative pervasive narratives, in pursuit of racial and environmental justice. 
Petty is a 2023-2025 Just Tech Fellow with the Social Science Research Council, a 2024 Rockwood National LIO Alum, and she currently serves on the CS (computer science) for Detroit Steering Committee. In 2021, Petty was named one of 100 Brilliant Women in AI Ethics. In 2023, she was honored with the AI Policy Leader in Civil Society Award by the Center for AI and Digital Policy, the Ava Jo Silent Shero Award by the Michigan Roundtable for Diversity and Inclusion, and with a Racial Justice Leadership Award by the Detroit People's Platform. In 2024, Petty was listed on Business Insider’s AI Power List for Policy and Ethics.</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Our learnings from 2024</title>
      <itunes:episode>32</itunes:episode>
      <podcast:episode>32</podcast:episode>
      <itunes:title>Our learnings from 2024</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d1c2aaaf-42ba-4cf8-8f00-e37ce2829f6a</guid>
      <link>https://share.transistor.fm/s/b6483b75</link>
      <description>
        <![CDATA[<p>We’re wrapped for the year, and will be back on the 10th of Jan. In the meantime, listen to Alix, Prathm, and Georgia discuss their biggest learnings from the pod this year, drawn from some of their favourite episodes.</p><p><strong>We want to hear from YOU about the podcast — what do you want to hear more of in 2025? Share your ideas with us here: </strong><a href="https://tally.so/r/3E860B"><strong>https://tally.so/r/3E860B</strong></a></p><p><strong>Or if you’d rather ramble into a microphone (just like we do…) use </strong><a href="https://www.videoask.com/fkzsby9q5"><strong>this link</strong></a><strong> instead!</strong></p><p>We pull out clips from the following episodes:</p><ul><li><a href="https://www.saysmaybe.com/podcast/the-age-of-noise-w-eryk-salvaggio">The Age of Noise w/ Eryk Salvaggio</a></li><li><a href="https://www.saysmaybe.com/podcast/the-happy-few-open-source-ai-part-one">The Happy Few: Open Source AI pt1</a></li><li><a href="https://www.saysmaybe.com/podcast/net-0-big-dirty-data-centres">Big Dirty Data Centres w/ Boxi Wu and Jenna Ruddock</a></li><li><a href="https://www.saysmaybe.com/podcast/us-election-special-w-spencer-overton">US Election Special w/ Spencer Overton</a></li><li><a href="https://www.saysmaybe.com/podcast/chasing-away-sidewalk-labs-w-bianca-wylie">Chasing Away Sidewalk Labs w/ Bianca Wylie</a></li><li><a href="https://www.saysmaybe.com/podcast/the-human-in-the-loop">The Human in the Loop</a></li><li><a href="https://www.saysmaybe.com/podcast/the-stories-we-tell-ourselves-about-ai">The Stories we Tell Ourselves About AI</a></li></ul><p><strong>Further reading:</strong></p><ul><li>Learn more about what ex-TikTok moderator Mojez has been up to this year via this <a href="https://www.tiktok.com/@bbcnews/video/7435963053788646688?_r=1&amp;_t=8rJKxPYz4wj">BBC TikTok</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>We’re wrapped for the year, and will be back on the 10th of Jan. In the meantime, listen to Alix, Prathm, and Georgia discuss their biggest learnings from the pod this year, drawn from some of their favourite episodes.</p><p><strong>We want to hear from YOU about the podcast — what do you want to hear more of in 2025? Share your ideas with us here: </strong><a href="https://tally.so/r/3E860B"><strong>https://tally.so/r/3E860B</strong></a></p><p><strong>Or if you’d rather ramble into a microphone (just like we do…) use </strong><a href="https://www.videoask.com/fkzsby9q5"><strong>this link</strong></a><strong> instead!</strong></p><p>We pull out clips from the following episodes:</p><ul><li><a href="https://www.saysmaybe.com/podcast/the-age-of-noise-w-eryk-salvaggio">The Age of Noise w/ Eryk Salvaggio</a></li><li><a href="https://www.saysmaybe.com/podcast/the-happy-few-open-source-ai-part-one">The Happy Few: Open Source AI pt1</a></li><li><a href="https://www.saysmaybe.com/podcast/net-0-big-dirty-data-centres">Big Dirty Data Centres w/ Boxi Wu and Jenna Ruddock</a></li><li><a href="https://www.saysmaybe.com/podcast/us-election-special-w-spencer-overton">US Election Special w/ Spencer Overton</a></li><li><a href="https://www.saysmaybe.com/podcast/chasing-away-sidewalk-labs-w-bianca-wylie">Chasing Away Sidewalk Labs w/ Bianca Wylie</a></li><li><a href="https://www.saysmaybe.com/podcast/the-human-in-the-loop">The Human in the Loop</a></li><li><a href="https://www.saysmaybe.com/podcast/the-stories-we-tell-ourselves-about-ai">The Stories we Tell Ourselves About AI</a></li></ul><p><strong>Further reading:</strong></p><ul><li>Learn more about what ex-TikTok moderator Mojez has been up to this year via this <a href="https://www.tiktok.com/@bbcnews/video/7435963053788646688?_r=1&amp;_t=8rJKxPYz4wj">BBC TikTok</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 20 Dec 2024 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/b6483b75/0456bb27.mp3" length="70416313" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2932</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>We’re wrapped for the year, and will be back on the 10th of Jan. In the meantime, listen to Alix, Prathm, and Georgia discuss their biggest learnings from the pod this year, drawn from some of their favourite episodes.</p><p><strong>We want to hear from YOU about the podcast — what do you want to hear more of in 2025? Share your ideas with us here: </strong><a href="https://tally.so/r/3E860B"><strong>https://tally.so/r/3E860B</strong></a></p><p><strong>Or if you’d rather ramble into a microphone (just like we do…) use </strong><a href="https://www.videoask.com/fkzsby9q5"><strong>this link</strong></a><strong> instead!</strong></p><p>We pull out clips from the following episodes:</p><ul><li><a href="https://www.saysmaybe.com/podcast/the-age-of-noise-w-eryk-salvaggio">The Age of Noise w/ Eryk Salvaggio</a></li><li><a href="https://www.saysmaybe.com/podcast/the-happy-few-open-source-ai-part-one">The Happy Few: Open Source AI pt1</a></li><li><a href="https://www.saysmaybe.com/podcast/net-0-big-dirty-data-centres">Big Dirty Data Centres w/ Boxi Wu and Jenna Ruddock</a></li><li><a href="https://www.saysmaybe.com/podcast/us-election-special-w-spencer-overton">US Election Special w/ Spencer Overton</a></li><li><a href="https://www.saysmaybe.com/podcast/chasing-away-sidewalk-labs-w-bianca-wylie">Chasing Away Sidewalk Labs w/ Bianca Wylie</a></li><li><a href="https://www.saysmaybe.com/podcast/the-human-in-the-loop">The Human in the Loop</a></li><li><a href="https://www.saysmaybe.com/podcast/the-stories-we-tell-ourselves-about-ai">The Stories we Tell Ourselves About AI</a></li></ul><p><strong>Further reading:</strong></p><ul><li>Learn more about what ex-TikTok moderator Mojez has been up to this year via this <a href="https://www.tiktok.com/@bbcnews/video/7435963053788646688?_r=1&amp;_t=8rJKxPYz4wj">BBC TikTok</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>A $20bn Search Engine w/ Michelle Meagher</title>
      <itunes:episode>31</itunes:episode>
      <podcast:episode>31</podcast:episode>
      <itunes:title>A $20bn Search Engine w/ Michelle Meagher</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">df9d5018-468d-4f6f-af93-b68eae6a25ce</guid>
      <link>https://share.transistor.fm/s/7b4aa704</link>
      <description>
        <![CDATA[<p>Google has finally been judged to be a monopoly by a federal court — while this was strikingly obvious already, what does this judgement mean? Is this too little too late?</p><p>This week Alix and Prathm were joined by Michelle Meagher, an antitrust lawyer who shared a brief history of how antitrust started as a tool for governments to stop the consolidation of corporate power, and over time has morphed to focus on issues of competition and consumer protection — which has allowed monopolies to thrive.</p><p>Michelle discusses the details and her thinking on the ongoing cases against Google, and more generally on how monopolies are basically like a big octopus arm-wrestling itself.</p><p>Further reading:</p><ul><li><a href="https://www.nytimes.com/2024/08/13/technology/google-monopoly-antitrust-justice-department.html">US Said to Consider a Breakup of Google to Address Search Monopoly</a> — NY Times</li><li><a href="https://www.theguardian.com/technology/article/2024/sep/09/google-antitrust-lawsuit-online-ads">Google’s second antitrust suit brought by US begins, over online ads</a> — Guardian</li><li><a href="https://www.bigtechontrial.com/">Big Tech on Trial</a> — Matt Stoller</li><li><a href="https://www.theverge.com/24040543/eu-dma-digital-markets-act-big-tech-antitrust/archives/2">How the EU’s DMA is changing Big Tech</a> — The Verge</li><li><a href="https://www.theguardian.com/business/2023/sep/22/microsofts-activision-blizzard-deal-set-to-be-cleared-by-uk-regulator">UK set to clear Microsoft’s deal to buy Call of Duty maker Activision Blizzard</a> — Guardian</li></ul><p><strong>Sign up to the </strong><a href="https://www.saysmaybe.com/newsletter"><strong>Computer Says Maybe newsletter</strong></a><strong> to get invites to our events and receive other juicy resources straight to your inbox</strong></p><p>Michelle is a competition lawyer and co-founder of the Balanced Economy Project, Europe’s first anti-monopoly organisation. She is the author of <a href="https://www.penguin.co.uk/books/315772/competition-is-killing-us/9780241423011.html"><em>Competition is Killing Us: How Big Business is Harming Our Society and Planet - and What to Do About It</em></a> (Penguin, 2020), a Financial Times Best Economics Book of the Year. She is a Senior Policy Fellow at the University College London Centre for Law, Economics and Society. She is a Senior Fellow working on Monopoly and Corporate Governance at the Centre for Research on Multinational Corporations (SOMO).</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Google has finally been judged to be a monopoly by a federal court — while this was strikingly obvious already, what does this judgement mean? Is this too little too late?</p><p>This week Alix and Prathm were joined by Michelle Meagher, an antitrust lawyer who shared a brief history of how antitrust started as a tool for governments to stop the consolidation of corporate power, and over time has morphed to focus on issues of competition and consumer protection — which has allowed monopolies to thrive.</p><p>Michelle discusses the details and her thinking on the ongoing cases against Google, and more generally on how monopolies are basically like a big octopus arm-wrestling itself.</p><p>Further reading:</p><ul><li><a href="https://www.nytimes.com/2024/08/13/technology/google-monopoly-antitrust-justice-department.html">US Said to Consider a Breakup of Google to Address Search Monopoly</a> — NY Times</li><li><a href="https://www.theguardian.com/technology/article/2024/sep/09/google-antitrust-lawsuit-online-ads">Google’s second antitrust suit brought by US begins, over online ads</a> — Guardian</li><li><a href="https://www.bigtechontrial.com/">Big Tech on Trial</a> — Matt Stoller</li><li><a href="https://www.theverge.com/24040543/eu-dma-digital-markets-act-big-tech-antitrust/archives/2">How the EU’s DMA is changing Big Tech</a> — The Verge</li><li><a href="https://www.theguardian.com/business/2023/sep/22/microsofts-activision-blizzard-deal-set-to-be-cleared-by-uk-regulator">UK set to clear Microsoft’s deal to buy Call of Duty maker Activision Blizzard</a> — Guardian</li></ul><p><strong>Sign up to the </strong><a href="https://www.saysmaybe.com/newsletter"><strong>Computer Says Maybe newsletter</strong></a><strong> to get invites to our events and receive other juicy resources straight to your inbox</strong></p><p>Michelle is a competition lawyer and co-founder of the Balanced Economy Project, Europe’s first anti-monopoly organisation. She is the author of <a href="https://www.penguin.co.uk/books/315772/competition-is-killing-us/9780241423011.html"><em>Competition is Killing Us: How Big Business is Harming Our Society and Planet - and What to Do About It</em></a> (Penguin, 2020), a Financial Times Best Economics Book of the Year. She is a Senior Policy Fellow at the University College London Centre for Law, Economics and Society. She is a Senior Fellow working on Monopoly and Corporate Governance at the Centre for Research on Multinational Corporations (SOMO).</p>]]>
      </content:encoded>
      <pubDate>Fri, 13 Dec 2024 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/7b4aa704/0abfd97e.mp3" length="66019437" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2748</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Google has finally been judged to be a monopoly by a federal court — while this was strikingly obvious already, what does this judgement mean? Is this too little too late?</p><p>This week Alix and Prathm were joined by Michelle Meagher, an antitrust lawyer who shared a brief history of how antitrust started as a tool for governments to stop the consolidation of corporate power, and over time has morphed to focus on issues of competition and consumer protection — which has allowed monopolies to thrive.</p><p>Michelle discusses the details and her thinking on the ongoing cases against Google, and more generally on how monopolies are basically like a big octopus arm-wrestling itself.</p><p>Further reading:</p><ul><li><a href="https://www.nytimes.com/2024/08/13/technology/google-monopoly-antitrust-justice-department.html">US Said to Consider a Breakup of Google to Address Search Monopoly</a> — NY Times</li><li><a href="https://www.theguardian.com/technology/article/2024/sep/09/google-antitrust-lawsuit-online-ads">Google’s second antitrust suit brought by US begins, over online ads</a> — Guardian</li><li><a href="https://www.bigtechontrial.com/">Big Tech on Trial</a> — Matt Stoller</li><li><a href="https://www.theverge.com/24040543/eu-dma-digital-markets-act-big-tech-antitrust/archives/2">How the EU’s DMA is changing Big Tech</a> — The Verge</li><li><a href="https://www.theguardian.com/business/2023/sep/22/microsofts-activision-blizzard-deal-set-to-be-cleared-by-uk-regulator">UK set to clear Microsoft’s deal to buy Call of Duty maker Activision Blizzard</a> — Guardian</li></ul><p><strong>Sign up to the </strong><a href="https://www.saysmaybe.com/newsletter"><strong>Computer Says Maybe newsletter</strong></a><strong> to get invites to our events and receive other juicy resources straight to your inbox</strong></p><p><br>Michelle is a competition lawyer and co-founder of the Balanced Economy Project, Europe’s first anti-monopoly organisation. 
She is the author of <a href="https://www.penguin.co.uk/books/315772/competition-is-killing-us/9780241423011.html"><em>Competition is Killing Us: How Big Business is Harming Our Society and Planet - and What to Do About It</em></a> (Penguin, 2020), a Financial Times Best Economics Book of the Year. She is a Senior Policy Fellow at the University College London Centre for Law, Economics and Society, and a Senior Fellow working on Monopoly and Corporate Governance at the Centre for Research on Multinational Corporations (SOMO).</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>The Age of Noise w/ Eryk Salvaggio</title>
      <itunes:episode>30</itunes:episode>
      <podcast:episode>30</podcast:episode>
      <itunes:title>The Age of Noise w/ Eryk Salvaggio</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">9b068ca8-1025-40b2-bd34-e36ac5428491</guid>
      <link>https://share.transistor.fm/s/b6749d34</link>
      <description>
        <![CDATA[<p>What happens if you ask a generative AI image model to show you what Picasso’s work would have looked like if he lived in Japan in the 16th century? Would it produce something totally new, or just mash together stereotypical aesthetics from Picasso’s work and 16th-century Japan?</p><p>This week, Alix interviewed Eryk Salvaggio, who shares his ideas about how we are moving away from ‘the age of information’ and into an age of noise, where we’ve progressed so far into a paradigm of easy and frictionless information sharing that information has transformed into an overwhelming wall of noise.</p><p>So if everything is just noise, what do we filter out and keep in — and what systems do we use to do that?</p><p>Further reading:</p><ul><li>Visit <a href="https://cyberneticforests.com/">Eryk’s Website</a></li><li><a href="https://mail.cyberneticforests.com/">Cybernetic Forests</a> — Eryk’s newsletter on tech and culture</li><li>Our upcoming event: <a href="https://lu.ma/wmnkwy2h">Insight Session: The politics, power, and responsibility of AI procurement with Bianca Wylie</a></li><li><a href="https://www.saysmaybe.com/newsletter">Our newsletter</a>, which shares invites to events like the above, and other interesting bits</li></ul><p>Eryk Salvaggio has been making tech-critical art since the dawn of the Internet. Now he’s a blend of artist, tech policy researcher, and writer focused on a critical approach to AI. He is the Emerging Technologies Research Advisor at the Siegel Family Endowment, an instructor in Responsible AI at Elisava Barcelona School of Design, a researcher at the metaLab (at) Harvard University’s AI Pedagogy Project, one of the top contributors to Tech Policy Press, and an artist whose work has been shown at festivals including SXSW, DEFCON, and Unsound.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>What happens if you ask a generative AI image model to show you what Picasso’s work would have looked like if he lived in Japan in the 16th century? Would it produce something totally new, or just mash together stereotypical aesthetics from Picasso’s work and 16th-century Japan?</p><p>This week, Alix interviewed Eryk Salvaggio, who shares his ideas about how we are moving away from ‘the age of information’ and into an age of noise, where we’ve progressed so far into a paradigm of easy and frictionless information sharing that information has transformed into an overwhelming wall of noise.</p><p>So if everything is just noise, what do we filter out and keep in — and what systems do we use to do that?</p><p>Further reading:</p><ul><li>Visit <a href="https://cyberneticforests.com/">Eryk’s Website</a></li><li><a href="https://mail.cyberneticforests.com/">Cybernetic Forests</a> — Eryk’s newsletter on tech and culture</li><li>Our upcoming event: <a href="https://lu.ma/wmnkwy2h">Insight Session: The politics, power, and responsibility of AI procurement with Bianca Wylie</a></li><li><a href="https://www.saysmaybe.com/newsletter">Our newsletter</a>, which shares invites to events like the above, and other interesting bits</li></ul><p>Eryk Salvaggio has been making tech-critical art since the dawn of the Internet. Now he’s a blend of artist, tech policy researcher, and writer focused on a critical approach to AI. He is the Emerging Technologies Research Advisor at the Siegel Family Endowment, an instructor in Responsible AI at Elisava Barcelona School of Design, a researcher at the metaLab (at) Harvard University’s AI Pedagogy Project, one of the top contributors to Tech Policy Press, and an artist whose work has been shown at festivals including SXSW, DEFCON, and Unsound.</p>]]>
      </content:encoded>
      <pubDate>Fri, 06 Dec 2024 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/b6749d34/59a868c1.mp3" length="69047828" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2875</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>What happens if you ask a generative AI image model to show you what Picasso’s work would have looked like if he lived in Japan in the 16th century? Would it produce something totally new, or just mash together stereotypical aesthetics from Picasso’s work and 16th-century Japan?</p><p>This week, Alix interviewed Eryk Salvaggio, who shares his ideas about how we are moving away from ‘the age of information’ and into an age of noise, where we’ve progressed so far into a paradigm of easy and frictionless information sharing that information has transformed into an overwhelming wall of noise.</p><p>So if everything is just noise, what do we filter out and keep in — and what systems do we use to do that?</p><p>Further reading:</p><ul><li>Visit <a href="https://cyberneticforests.com/">Eryk’s Website</a></li><li><a href="https://mail.cyberneticforests.com/">Cybernetic Forests</a> — Eryk’s newsletter on tech and culture</li><li>Our upcoming event: <a href="https://lu.ma/wmnkwy2h">Insight Session: The politics, power, and responsibility of AI procurement with Bianca Wylie</a></li><li><a href="https://www.saysmaybe.com/newsletter">Our newsletter</a>, which shares invites to events like the above, and other interesting bits</li></ul><p>Eryk Salvaggio has been making tech-critical art since the dawn of the Internet. Now he’s a blend of artist, tech policy researcher, and writer focused on a critical approach to AI. He is the Emerging Technologies Research Advisor at the Siegel Family Endowment, an instructor in Responsible AI at Elisava Barcelona School of Design, a researcher at the metaLab (at) Harvard University’s AI Pedagogy Project, one of the top contributors to Tech Policy Press, and an artist whose work has been shown at festivals including SXSW, DEFCON, and Unsound.</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>The Happy Few: Open Source AI (part two)</title>
      <itunes:episode>29</itunes:episode>
      <podcast:episode>29</podcast:episode>
      <itunes:title>The Happy Few: Open Source AI (part two)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">62d132c5-1c45-40db-9227-d789be6920dc</guid>
      <link>https://share.transistor.fm/s/a4d1c248</link>
      <description>
        <![CDATA[<p>In part two of our episode on open source AI, we delve deeper into how we can use openness and participation for sustainable AI governance. It’s clear that everyone agrees that things like the proliferation of harmful content are a huge risk — but what we cannot seem to agree on is how to eliminate this risk.</p><p>Alix is joined again by <a href="https://en.wikipedia.org/wiki/Mark_Surman">Mark Surman</a>, and this time they both take a closer look at the work Audrey Tang did as Taiwan’s first digital minister, where she successfully built and implemented a participatory framework that allowed the people of Taiwan to directly inform AI policy.</p><p>We also hear more from Mérouane Debbah, who built the first LLM trained in Arabic and highlights the importance of developing AI systems that don’t follow rigid Western benchmarks.</p><p>Mark Surman has spent three decades building a better internet, from the advent of the web to the rise of artificial intelligence. As President of Mozilla, a global nonprofit-backed technology company that does everything from making Firefox to advocating for a more open, equitable internet, Mark’s current focus is ensuring the various Mozilla organizations work in concert to make trustworthy AI a reality. Mark led the creation of <a href="http://mozilla.ai/">Mozilla.ai</a> (a commercial AI R+D lab) and <a href="https://mozilla.vc/">Mozilla Ventures</a> (an impact venture fund with a strong focus on AI). Before joining Mozilla, Mark spent 15 years leading organizations and projects that promoted the use of the internet and open source as tools for social and economic development.</p><p>More about our guests:</p><p>Audrey Tang, Cyber Ambassador of Taiwan, served as Taiwan’s 1st digital minister (2016-2024) and the world’s 1st nonbinary cabinet minister. Tang played a crucial role in shaping g0v (gov-zero), one of the most prominent civic tech movements worldwide. 
In 2014, Tang helped broadcast the demands of Sunflower Movement activists, and worked to resolve conflicts during a three-week occupation of Taiwan’s legislature. Tang became a reverse mentor to the minister in charge of digital participation, before assuming the role in 2016 after the government changed hands. Tang helped develop participatory democracy platforms such as vTaiwan and Join, bringing civic innovation into the public sector through initiatives like the Presidential Hackathon and Ideathon.</p><p>Sayash Kapoor is a Laurance S. Rockefeller Graduate Prize Fellow in the University Center for Human Values and a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. He is a coauthor of AI Snake Oil, a book that provides a critical analysis of artificial intelligence, separating the hype from the true advances. His research examines the societal impacts of AI, with a focus on reproducibility, transparency, and accountability in AI systems. He was included in TIME Magazine’s inaugural list of the 100 most influential people in AI.</p><p>Mérouane Debbah is a researcher, educator and technology entrepreneur. He has founded several public and industrial research centers, start-ups and held executive positions in ICT companies. He is a professor at <a href="https://en.wikipedia.org/wiki/Khalifa_University">Khalifa University</a> in Abu Dhabi, and founding director of the Khalifa University 6G Research Center. 
He has been working at the interface of AI and telecommunication and pioneered in 2021 the development of NOOR, the first Arabic LLM.</p><p><strong>Further reading &amp; resources</strong></p><ul><li><a href="https://pol.is/home">Polis</a> — a real-time participation platform</li><li><a href="https://vtaiwan-openai-2023.vercel.app/Report_%20Recursive%20Public.pdf">Recursive Public</a> by vTaiwan</li><li><a href="https://noor.tii.ae/">Noor</a> — the first LLM trained on the Arabic language</li><li><a href="https://falconfoundation.ai/">Falcon Foundation</a></li><li><a href="https://www.aisnakeoil.com/p/ai-snake-oil-is-now-available-to">Buy AI Snake Oil</a> by Sayash Kapoor and Arvind Narayanan</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In part two of our episode on open source AI, we delve deeper into how we can use openness and participation for sustainable AI governance. It’s clear that everyone agrees that things like the proliferation of harmful content are a huge risk — but what we cannot seem to agree on is how to eliminate this risk.</p><p>Alix is joined again by <a href="https://en.wikipedia.org/wiki/Mark_Surman">Mark Surman</a>, and this time they both take a closer look at the work Audrey Tang did as Taiwan’s first digital minister, where she successfully built and implemented a participatory framework that allowed the people of Taiwan to directly inform AI policy.</p><p>We also hear more from Mérouane Debbah, who built the first LLM trained in Arabic and highlights the importance of developing AI systems that don’t follow rigid Western benchmarks.</p><p>Mark Surman has spent three decades building a better internet, from the advent of the web to the rise of artificial intelligence. As President of Mozilla, a global nonprofit-backed technology company that does everything from making Firefox to advocating for a more open, equitable internet, Mark’s current focus is ensuring the various Mozilla organizations work in concert to make trustworthy AI a reality. Mark led the creation of <a href="http://mozilla.ai/">Mozilla.ai</a> (a commercial AI R+D lab) and <a href="https://mozilla.vc/">Mozilla Ventures</a> (an impact venture fund with a strong focus on AI). Before joining Mozilla, Mark spent 15 years leading organizations and projects that promoted the use of the internet and open source as tools for social and economic development.</p><p>More about our guests:</p><p>Audrey Tang, Cyber Ambassador of Taiwan, served as Taiwan’s 1st digital minister (2016-2024) and the world’s 1st nonbinary cabinet minister. Tang played a crucial role in shaping g0v (gov-zero), one of the most prominent civic tech movements worldwide. 
In 2014, Tang helped broadcast the demands of Sunflower Movement activists, and worked to resolve conflicts during a three-week occupation of Taiwan’s legislature. Tang became a reverse mentor to the minister in charge of digital participation, before assuming the role in 2016 after the government changed hands. Tang helped develop participatory democracy platforms such as vTaiwan and Join, bringing civic innovation into the public sector through initiatives like the Presidential Hackathon and Ideathon.</p><p>Sayash Kapoor is a Laurance S. Rockefeller Graduate Prize Fellow in the University Center for Human Values and a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. He is a coauthor of AI Snake Oil, a book that provides a critical analysis of artificial intelligence, separating the hype from the true advances. His research examines the societal impacts of AI, with a focus on reproducibility, transparency, and accountability in AI systems. He was included in TIME Magazine’s inaugural list of the 100 most influential people in AI.</p><p>Mérouane Debbah is a researcher, educator and technology entrepreneur. He has founded several public and industrial research centers, start-ups and held executive positions in ICT companies. He is a professor at <a href="https://en.wikipedia.org/wiki/Khalifa_University">Khalifa University</a> in Abu Dhabi, and founding director of the Khalifa University 6G Research Center. 
He has been working at the interface of AI and telecommunication and pioneered in 2021 the development of NOOR, the first Arabic LLM.</p><p><strong>Further reading &amp; resources</strong></p><ul><li><a href="https://pol.is/home">Polis</a> — a real-time participation platform</li><li><a href="https://vtaiwan-openai-2023.vercel.app/Report_%20Recursive%20Public.pdf">Recursive Public</a> by vTaiwan</li><li><a href="https://noor.tii.ae/">Noor</a> — the first LLM trained on the Arabic language</li><li><a href="https://falconfoundation.ai/">Falcon Foundation</a></li><li><a href="https://www.aisnakeoil.com/p/ai-snake-oil-is-now-available-to">Buy AI Snake Oil</a> by Sayash Kapoor and Arvind Narayanan</li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 29 Nov 2024 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/a4d1c248/755384ae.mp3" length="44984151" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>3208</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In part two of our episode on open source AI, we delve deeper into how we can use openness and participation for sustainable AI governance. It’s clear that everyone agrees that things like the proliferation of harmful content are a huge risk — but what we cannot seem to agree on is how to eliminate this risk.</p><p>Alix is joined again by <a href="https://en.wikipedia.org/wiki/Mark_Surman">Mark Surman</a>, and this time they both take a closer look at the work Audrey Tang did as Taiwan’s first digital minister, where she successfully built and implemented a participatory framework that allowed the people of Taiwan to directly inform AI policy.</p><p>We also hear more from Mérouane Debbah, who built the first LLM trained in Arabic and highlights the importance of developing AI systems that don’t follow rigid Western benchmarks.</p><p>Mark Surman has spent three decades building a better internet, from the advent of the web to the rise of artificial intelligence. As President of Mozilla, a global nonprofit-backed technology company that does everything from making Firefox to advocating for a more open, equitable internet, Mark’s current focus is ensuring the various Mozilla organizations work in concert to make trustworthy AI a reality. Mark led the creation of <a href="http://mozilla.ai/">Mozilla.ai</a> (a commercial AI R+D lab) and <a href="https://mozilla.vc/">Mozilla Ventures</a> (an impact venture fund with a strong focus on AI). Before joining Mozilla, Mark spent 15 years leading organizations and projects that promoted the use of the internet and open source as tools for social and economic development.</p><p>More about our guests:</p><p>Audrey Tang, Cyber Ambassador of Taiwan, served as Taiwan’s 1st digital minister (2016-2024) and the world’s 1st nonbinary cabinet minister. Tang played a crucial role in shaping g0v (gov-zero), one of the most prominent civic tech movements worldwide. 
In 2014, Tang helped broadcast the demands of Sunflower Movement activists, and worked to resolve conflicts during a three-week occupation of Taiwan’s legislature. Tang became a reverse mentor to the minister in charge of digital participation, before assuming the role in 2016 after the government changed hands. Tang helped develop participatory democracy platforms such as vTaiwan and Join, bringing civic innovation into the public sector through initiatives like the Presidential Hackathon and Ideathon.</p><p>Sayash Kapoor is a Laurance S. Rockefeller Graduate Prize Fellow in the University Center for Human Values and a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. He is a coauthor of AI Snake Oil, a book that provides a critical analysis of artificial intelligence, separating the hype from the true advances. His research examines the societal impacts of AI, with a focus on reproducibility, transparency, and accountability in AI systems. He was included in TIME Magazine’s inaugural list of the 100 most influential people in AI.</p><p>Mérouane Debbah is a researcher, educator and technology entrepreneur. He has founded several public and industrial research centers, start-ups and held executive positions in ICT companies. He is a professor at <a href="https://en.wikipedia.org/wiki/Khalifa_University">Khalifa University</a> in Abu Dhabi, and founding director of the Khalifa University 6G Research Center. 
He has been working at the interface of AI and telecommunication and pioneered in 2021 the development of NOOR, the first Arabic LLM.</p><p><strong>Further reading &amp; resources</strong></p><ul><li><a href="https://pol.is/home">Polis</a> — a real-time participation platform</li><li><a href="https://vtaiwan-openai-2023.vercel.app/Report_%20Recursive%20Public.pdf">Recursive Public</a> by vTaiwan</li><li><a href="https://noor.tii.ae/">Noor</a> — the first LLM trained on the Arabic language</li><li><a href="https://falconfoundation.ai/">Falcon Foundation</a></li><li><a href="https://www.aisnakeoil.com/p/ai-snake-oil-is-now-available-to">Buy AI Snake Oil</a> by Sayash Kapoor and Arvind Narayanan</li></ul>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>The Happy Few: Open Source AI (part one)</title>
      <itunes:episode>28</itunes:episode>
      <podcast:episode>28</podcast:episode>
      <itunes:title>The Happy Few: Open Source AI (part one)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">0b6663c4-49cd-4761-b47b-7b35d2bb0557</guid>
      <link>https://share.transistor.fm/s/e53af2cd</link>
      <description>
        <![CDATA[<p>In the context of AI, what do we mean when we say ‘open source’? An AI model is not something you can straightforwardly open up like a piece of software; there are huge technical and social considerations to be made.</p><p>Is it risky to open-source highly capable foundation models? What guardrails do we need to think about when it comes to the proliferation of harmful content? And can you really call it ‘open’ if the barrier for accessing compute is so high? Is model alignment really the only thing we have to protect us?</p><p>In this two-parter, Alix is joined by Mozilla president <a href="https://en.wikipedia.org/wiki/Mark_Surman">Mark Surman</a> to discuss the benefits and drawbacks of open and closed models. Our guests are Alondra Nelson, Mérouane Debbah, Audrey Tang, and Sayash Kapoor.</p><p>Listen to learn about the early years of the free software movement, the ecosystem lock-in of the closed-source environment, and what kinds of things are possible with a more open approach to AI.</p><p>Mark Surman has spent three decades building a better internet, from the advent of the web to the rise of artificial intelligence. As President of Mozilla, a global nonprofit-backed technology company that does everything from making Firefox to advocating for a more open, equitable internet, Mark’s current focus is ensuring the various Mozilla organizations work in concert to make trustworthy AI a reality. Mark led the creation of <a href="http://mozilla.ai/">Mozilla.ai</a> (a commercial AI R+D lab) and <a href="https://mozilla.vc/">Mozilla Ventures</a> (an impact venture fund with a strong focus on AI). Before joining Mozilla, Mark spent 15 years leading organizations and projects that promoted the use of the internet and open source as tools for social and economic development.</p><p>More about our guests:</p><p>Audrey Tang, Cyber Ambassador of Taiwan, served as Taiwan’s 1st digital minister (2016-2024) and the world’s 1st nonbinary cabinet minister. 
Tang played a crucial role in shaping g0v (gov-zero), one of the most prominent civic tech movements worldwide. In 2014, Tang helped broadcast the demands of Sunflower Movement activists, and worked to resolve conflicts during a three-week occupation of Taiwan’s legislature. Tang became a reverse mentor to the minister in charge of digital participation, before assuming the role in 2016 after the government changed hands. Tang helped develop participatory democracy platforms such as vTaiwan and Join, bringing civic innovation into the public sector through initiatives like the Presidential Hackathon and Ideathon.</p><p>Alondra Nelson is a scholar of the intersections of science, technology, policy, and society, and the <a href="https://www.ias.edu/sss/faculty/nelson">Harold F. Linder Professor</a> at the <a href="https://www.ias.edu/welcome">Institute for Advanced Study</a>, an independent research center in Princeton, New Jersey. Dr. Nelson was formerly deputy assistant to President Joe Biden and acting director of <a href="https://www.whitehouse.gov/ostp/directors-office/">the White House Office of Science and Technology Policy</a> (OSTP). In this role, she spearheaded <a href="https://www.google.com/search?client=firefox-b-1-d&amp;q=%22alondra+nelson%22+blueprint+for+an+ai+bill+of+rights&amp;tbm=nws&amp;sa=X&amp;ved=2ahUKEwjil7j-k7P_AhVrEFkFHe0JBCMQ0pQJegQIDBAB&amp;biw=1440&amp;bih=669&amp;dpr=2">the development of the Blueprint for an AI Bill of Rights</a>, and was the first African American and first woman of color to lead US science and technology policy.</p><p>Sayash Kapoor is a Laurance S. Rockefeller Graduate Prize Fellow in the University Center for Human Values and a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. He is a coauthor of AI Snake Oil, a book that provides a critical analysis of artificial intelligence, separating the hype from the true advances. 
His research examines the societal impacts of AI, with a focus on reproducibility, transparency, and accountability in AI systems. He was included in TIME Magazine’s inaugural list of the 100 most influential people in AI.</p><p>Mérouane Debbah is a researcher, educator and technology entrepreneur. He has founded several public and industrial research centers, start-ups and held executive positions in ICT companies. He is a professor at <a href="https://en.wikipedia.org/wiki/Khalifa_University">Khalifa University</a> in Abu Dhabi, and founding director of the Khalifa University 6G Research Center. He has been working at the interface of AI and telecommunication and pioneered in 2021 the development of NOOR, the first Arabic LLM.</p><p><strong>Further reading &amp; resources</strong></p><ul><li><a href="https://pol.is/home">Polis</a> — a real-time participation platform</li><li><a href="https://vtaiwan-openai-2023.vercel.app/Report_%20Recursive%20Public.pdf">Recursive Public</a> by vTaiwan</li><li><a href="https://noor.tii.ae/">Noor</a> — the first LLM trained on the Arabic language</li><li><a href="https://falconfoundation.ai/">Falcon Foundation</a></li><li><a href="https://www.aisnakeoil.com/p/ai-snake-oil-is-now-available-to">Buy AI Snake Oil</a> by Sayash Kapoor and Arvind Narayanan</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In the context of AI, what do we mean when we say ‘open source’? An AI model is not something you can straightforwardly open up like a piece of software; there are huge technical and social considerations to be made.</p><p>Is it risky to open-source highly capable foundation models? What guardrails do we need to think about when it comes to the proliferation of harmful content? And can you really call it ‘open’ if the barrier for accessing compute is so high? Is model alignment really the only thing we have to protect us?</p><p>In this two-parter, Alix is joined by Mozilla president <a href="https://en.wikipedia.org/wiki/Mark_Surman">Mark Surman</a> to discuss the benefits and drawbacks of open and closed models. Our guests are Alondra Nelson, Mérouane Debbah, Audrey Tang, and Sayash Kapoor.</p><p>Listen to learn about the early years of the free software movement, the ecosystem lock-in of the closed-source environment, and what kinds of things are possible with a more open approach to AI.</p><p>Mark Surman has spent three decades building a better internet, from the advent of the web to the rise of artificial intelligence. As President of Mozilla, a global nonprofit-backed technology company that does everything from making Firefox to advocating for a more open, equitable internet, Mark’s current focus is ensuring the various Mozilla organizations work in concert to make trustworthy AI a reality. Mark led the creation of <a href="http://mozilla.ai/">Mozilla.ai</a> (a commercial AI R+D lab) and <a href="https://mozilla.vc/">Mozilla Ventures</a> (an impact venture fund with a strong focus on AI). Before joining Mozilla, Mark spent 15 years leading organizations and projects that promoted the use of the internet and open source as tools for social and economic development.</p><p>More about our guests:</p><p>Audrey Tang, Cyber Ambassador of Taiwan, served as Taiwan’s 1st digital minister (2016-2024) and the world’s 1st nonbinary cabinet minister. 
Tang played a crucial role in shaping g0v (gov-zero), one of the most prominent civic tech movements worldwide. In 2014, Tang helped broadcast the demands of Sunflower Movement activists, and worked to resolve conflicts during a three-week occupation of Taiwan’s legislature. Tang became a reverse mentor to the minister in charge of digital participation, before assuming the role in 2016 after the government changed hands. Tang helped develop participatory democracy platforms such as vTaiwan and Join, bringing civic innovation into the public sector through initiatives like the Presidential Hackathon and Ideathon.</p><p>Alondra Nelson is a scholar of the intersections of science, technology, policy, and society, and the <a href="https://www.ias.edu/sss/faculty/nelson">Harold F. Linder Professor</a> at the <a href="https://www.ias.edu/welcome">Institute for Advanced Study</a>, an independent research center in Princeton, New Jersey. Dr. Nelson was formerly deputy assistant to President Joe Biden and acting director of <a href="https://www.whitehouse.gov/ostp/directors-office/">the White House Office of Science and Technology Policy</a> (OSTP). In this role, she spearheaded <a href="https://www.google.com/search?client=firefox-b-1-d&amp;q=%22alondra+nelson%22+blueprint+for+an+ai+bill+of+rights&amp;tbm=nws&amp;sa=X&amp;ved=2ahUKEwjil7j-k7P_AhVrEFkFHe0JBCMQ0pQJegQIDBAB&amp;biw=1440&amp;bih=669&amp;dpr=2">the development of the Blueprint for an AI Bill of Rights</a>, and was the first African American and first woman of color to lead US science and technology policy.</p><p>Sayash Kapoor is a Laurance S. Rockefeller Graduate Prize Fellow in the University Center for Human Values and a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. He is a coauthor of AI Snake Oil, a book that provides a critical analysis of artificial intelligence, separating the hype from the true advances. 
His research examines the societal impacts of AI, with a focus on reproducibility, transparency, and accountability in AI systems. He was included in TIME Magazine’s inaugural list of the 100 most influential people in AI.</p><p>Mérouane Debbah is a researcher, educator, and technology entrepreneur. He has founded several public and industrial research centers and start-ups, and has held executive positions in ICT companies. He is a professor at <a href="https://secureurl.ankabut.ac.ae/fmlurlsvc/?fewReq=:B:JVYyOT08Mi5+NTomOC5hbDU4OTI4OS57YW9maXx9em01MTBqaTFua25raWo+MT0+bDk8bDhsbWs+a2kwOjxqOWxtPm04ODgxPi58NTk/Ozo5Oj4xOzoueWFsNTxJQ0BnW0tpODk4Oj88JTxJQ0BnW0trODk4Oj88LnpreHw1ZW16Z31pZm0mbG1qamlgSGN9JmlrJmltLms1OzEuYGxkNTg=&amp;url=https%3a%2f%2fen.wikipedia.org%2fwiki%2fKhalifa_University">Khalifa University</a> in Abu Dhabi, and the founding director of the Khalifa University 6G Research Center. He has been working at the interface of AI and telecommunications, and in 2021 pioneered the development of NOOR, the first Arabic LLM.</p><p><strong>Further reading &amp; resources</strong></p><ul><li><a href="https://pol.is/home">Polis</a> — a real-time participation platform</li><li><a href="https://vtaiwan-openai-2023.vercel.app/Report_%20Recursive%20Public.pdf">Recursive Public</a> by vTaiwan</li><li><a href="https://noor.tii.ae/">Noor</a> — the first LLM trained on the Arabic language</li><li><a href="https://falconfoundation.ai/">Falcon Foundation</a></li><li><a href="https://www.aisnakeoil.com/p/ai-snake-oil-is-now-available-to">Buy AI Snake Oil</a> by Sayash Kapoor and Arvind Narayanan</li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 22 Nov 2024 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/e53af2cd/8161b842.mp3" length="48873789" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>3486</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In the context of AI, what do we mean when we say ‘open source’? An AI model is not something you can straightforwardly open up like a piece of software; there are huge technical and social considerations to be made.</p><p>Is it risky to open-source highly capable foundation models? What guardrails do we need to think about when it comes to the proliferation of harmful content? And, can you really call it ‘open’ if the barrier for accessing compute is so high? Is model alignment really the only thing we have to protect us?</p><p>In this two-parter, Alix is joined by Mozilla president <a href="https://en.wikipedia.org/wiki/Mark_Surman">Mark Surman</a> to discuss the benefits and drawbacks of open and closed models. Our guests are Alondra Nelson, Mérouane Debbah, Audrey Tang, and Sayash Kapoor.</p><p>Listen to learn about the early years of the free software movement, the ecosystem lock-in of the closed-source environment, and what kinds of things are possible with a more open approach to AI.</p><p>Mark Surman has spent three decades building a better internet, from the advent of the web to the rise of artificial intelligence. As President of Mozilla, a global nonprofit-backed technology company that does everything from making Firefox to advocating for a more open, equitable internet, Mark’s current focus is ensuring the various Mozilla organizations work in concert to make trustworthy AI a reality. Mark led the creation of <a href="http://mozilla.ai/">Mozilla.ai</a> (a commercial AI R+D lab) and <a href="https://mozilla.vc/">Mozilla Ventures</a> (an impact venture fund with a strong focus on AI). Before joining Mozilla, Mark spent 15 years leading organizations and projects that promoted the use of the internet and open source as tools for social and economic development.</p><p>More about our guests:</p><p>Audrey Tang, Cyber Ambassador of Taiwan, served as Taiwan’s 1st digital minister (2016-2024) and the world’s 1st nonbinary cabinet minister. 
Tang played a crucial role in shaping g0v (gov-zero), one of the most prominent civic tech movements worldwide. In 2014, Tang helped broadcast the demands of Sunflower Movement activists, and worked to resolve conflicts during a three-week occupation of Taiwan’s legislature. Tang became a reverse mentor to the minister in charge of digital participation, before assuming the role in 2016 after the government changed hands. Tang helped develop participatory democracy platforms such as vTaiwan and Join, bringing civic innovation into the public sector through initiatives like the Presidential Hackathon and Ideathon.</p><p>Alondra Nelson is a scholar of the intersections of science, technology, policy, and society, and the <a href="https://www.ias.edu/sss/faculty/nelson">Harold F. Linder Professor</a> at the <a href="https://www.ias.edu/welcome">Institute for Advanced Study</a>, an independent research center in Princeton, New Jersey. Dr. Nelson was formerly deputy assistant to President Joe Biden and acting director of <a href="https://www.whitehouse.gov/ostp/directors-office/">the White House Office of Science and Technology Policy</a> (OSTP). In this role, she spearheaded <a href="https://www.google.com/search?client=firefox-b-1-d&amp;q=%22alondra+nelson%22+blueprint+for+an+ai+bill+of+rights&amp;tbm=nws&amp;sa=X&amp;ved=2ahUKEwjil7j-k7P_AhVrEFkFHe0JBCMQ0pQJegQIDBAB&amp;biw=1440&amp;bih=669&amp;dpr=2">the development of the Blueprint for an AI Bill of Rights</a>, and was the first African American and first woman of color to lead US science and technology policy.</p><p>Sayash Kapoor is a Laurance S. Rockefeller Graduate Prize Fellow in the University Center for Human Values and a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. He is a coauthor of AI Snake Oil, a book that provides a critical analysis of artificial intelligence, separating the hype from the true advances. 
His research examines the societal impacts of AI, with a focus on reproducibility, transparency, and accountability in AI systems. He was included in TIME Magazine’s inaugural list of the 100 most influential people in AI.</p><p>Mérouane Debbah is a researcher, educator, and technology entrepreneur. He has founded several public and industrial research centers and start-ups, and has held executive positions in ICT companies. He is a professor at <a href="https://secureurl.ankabut.ac.ae/fmlurlsvc/?fewReq=:B:JVYyOT08Mi5+NTomOC5hbDU4OTI4OS57YW9maXx9em01MTBqaTFua25raWo+MT0+bDk8bDhsbWs+a2kwOjxqOWxtPm04ODgxPi58NTk/Ozo5Oj4xOzoueWFsNTxJQ0BnW0tpODk4Oj88JTxJQ0BnW0trODk4Oj88LnpreHw1ZW16Z31pZm0mbG1qamlgSGN9JmlrJmltLms1OzEuYGxkNTg=&amp;url=https%3a%2f%2fen.wikipedia.org%2fwiki%2fKhalifa_University">Khalifa University</a> in Abu Dhabi, and the founding director of the Khalifa University 6G Research Center. He has been working at the interface of AI and telecommunications, and in 2021 pioneered the development of NOOR, the first Arabic LLM.</p><p><strong>Further reading &amp; resources</strong></p><ul><li><a href="https://pol.is/home">Polis</a> — a real-time participation platform</li><li><a href="https://vtaiwan-openai-2023.vercel.app/Report_%20Recursive%20Public.pdf">Recursive Public</a> by vTaiwan</li><li><a href="https://noor.tii.ae/">Noor</a> — the first LLM trained on the Arabic language</li><li><a href="https://falconfoundation.ai/">Falcon Foundation</a></li><li><a href="https://www.aisnakeoil.com/p/ai-snake-oil-is-now-available-to">Buy AI Snake Oil</a> by Sayash Kapoor and Arvind Narayanan</li></ul>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Algorithmically cutting benefits w/ Kevin De Liban</title>
      <itunes:episode>27</itunes:episode>
      <podcast:episode>27</podcast:episode>
      <itunes:title>Algorithmically cutting benefits w/ Kevin De Liban</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d3d4933b-2c01-4f2a-b11c-639978094fc9</guid>
      <link>https://share.transistor.fm/s/c6c7150a</link>
      <description>
        <![CDATA[<p>This week Alix was joined by Kevin De Liban, who just launched Techtonic Justice, an organisation designed to support and fight for those harmed by AI systems.</p><p>In this episode Kevin describes his experiences litigating on behalf of people in Arkansas who found their in-home care hours aggressively cut by an algorithm administered by the state. This is a story about taking care away from individuals in the name of ‘efficiency’, and the particular levers for justice that Kevin and his team took advantage of to eventually ban the use of this algorithm in Arkansas.</p><p><strong>CW: This episode contains descriptions of people being denied care and left in undignified situations at around 08.17-08.40 and 27.12-28.07</strong></p><p>Further reading &amp; resources:</p><ul><li><a href="https://www.techtonicjustice.org/">Techtonic Justice</a></li></ul><p>Kevin De Liban is the founder of Techtonic Justice, and the Director of Advocacy at Legal Aid of Arkansas, nurturing multi-dimensional efforts to improve the lives of low-income Arkansans in matters of health, workers' rights, safety net benefits, housing, consumer rights, and domestic violence. With Legal Aid, he has led a successful litigation campaign in federal and state courts challenging Arkansas's use of an algorithm to cut vital Medicaid home-care benefits to individuals who have disabilities or are elderly.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week Alix was joined by Kevin De Liban, who just launched Techtonic Justice, an organisation designed to support and fight for those harmed by AI systems.</p><p>In this episode Kevin describes his experiences litigating on behalf of people in Arkansas who found their in-home care hours aggressively cut by an algorithm administered by the state. This is a story about taking care away from individuals in the name of ‘efficiency’, and the particular levers for justice that Kevin and his team took advantage of to eventually ban the use of this algorithm in Arkansas.</p><p><strong>CW: This episode contains descriptions of people being denied care and left in undignified situations at around 08.17-08.40 and 27.12-28.07</strong></p><p>Further reading &amp; resources:</p><ul><li><a href="https://www.techtonicjustice.org/">Techtonic Justice</a></li></ul><p>Kevin De Liban is the founder of Techtonic Justice, and the Director of Advocacy at Legal Aid of Arkansas, nurturing multi-dimensional efforts to improve the lives of low-income Arkansans in matters of health, workers' rights, safety net benefits, housing, consumer rights, and domestic violence. With Legal Aid, he has led a successful litigation campaign in federal and state courts challenging Arkansas's use of an algorithm to cut vital Medicaid home-care benefits to individuals who have disabilities or are elderly.</p>]]>
      </content:encoded>
      <pubDate>Fri, 15 Nov 2024 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/c6c7150a/66dd19a1.mp3" length="72401329" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>3013</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This week Alix was joined by Kevin De Liban, who just launched Techtonic Justice, an organisation designed to support and fight for those harmed by AI systems.</p><p>In this episode Kevin describes his experiences litigating on behalf of people in Arkansas who found their in-home care hours aggressively cut by an algorithm administered by the state. This is a story about taking care away from individuals in the name of ‘efficiency’, and the particular levers for justice that Kevin and his team took advantage of to eventually ban the use of this algorithm in Arkansas.</p><p><strong>CW: This episode contains descriptions of people being denied care and left in undignified situations at around 08.17-08.40 and 27.12-28.07</strong></p><p>Further reading &amp; resources:</p><ul><li><a href="https://www.techtonicjustice.org/">Techtonic Justice</a></li></ul><p>Kevin De Liban is the founder of Techtonic Justice, and the Director of Advocacy at Legal Aid of Arkansas, nurturing multi-dimensional efforts to improve the lives of low-income Arkansans in matters of health, workers' rights, safety net benefits, housing, consumer rights, and domestic violence. With Legal Aid, he has led a successful litigation campaign in federal and state courts challenging Arkansas's use of an algorithm to cut vital Medicaid home-care benefits to individuals who have disabilities or are elderly.</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Election Debrief</title>
      <itunes:episode>26</itunes:episode>
      <podcast:episode>26</podcast:episode>
      <itunes:title>Election Debrief</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b723c634-2dd2-4fbb-b5b1-43d2689dd17b</guid>
      <link>https://share.transistor.fm/s/2bb775f4</link>
      <description>
        <![CDATA[<p>This week we’re wallowing in post-election catharsis: Alix and Prathm process the result together, and discuss the implications this administration has for technology politics.</p><p>How much of a role will people like Elon Musk and Peter Thiel play during Trump’s presidency? What kind of tactics should the left adopt going forward to stop this from happening again? And what does this mean for the technology politics community?</p><p>This episode was recorded on Wednesday the 6th of November; we don’t have all the answers but we know we want to move forward and have never been more motivated to make change happen.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week we’re wallowing in post-election catharsis: Alix and Prathm process the result together, and discuss the implications this administration has for technology politics.</p><p>How much of a role will people like Elon Musk and Peter Thiel play during Trump’s presidency? What kind of tactics should the left adopt going forward to stop this from happening again? And what does this mean for the technology politics community?</p><p>This episode was recorded on Wednesday the 6th of November; we don’t have all the answers but we know we want to move forward and have never been more motivated to make change happen.</p>]]>
      </content:encoded>
      <pubDate>Thu, 07 Nov 2024 23:11:59 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/2bb775f4/647e79dc.mp3" length="58471168" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2434</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This week we’re wallowing in post-election catharsis: Alix and Prathm process the result together, and discuss the implications this administration has for technology politics.</p><p>How much of a role will people like Elon Musk and Peter Thiel play during Trump’s presidency? What kind of tactics should the left adopt going forward to stop this from happening again? And what does this mean for the technology politics community?</p><p>This episode was recorded on Wednesday the 6th of November; we don’t have all the answers but we know we want to move forward and have never been more motivated to make change happen.</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
    </item>
    <item>
      <title>US Election Special w/ Spencer Overton</title>
      <itunes:episode>25</itunes:episode>
      <podcast:episode>25</podcast:episode>
      <itunes:title>US Election Special w/ Spencer Overton</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d7e16fec-b4d2-49e8-a7e8-d763ac5dcba5</guid>
      <link>https://share.transistor.fm/s/b55e5f7b</link>
      <description>
        <![CDATA[<p>For this pre-election special, Prathm spoke with law professor <a href="https://www.law.gwu.edu/spencer-overton">Spencer Overton</a> about how this election has — and hasn’t — been impacted by AI systems. Misinformation and deepfakes appear to be top of the agenda for a lot of politicians and commentators, but there’s a lot more to think about…</p><p>Spencer discusses the USA’s transition into a multiracial democracy, and describes the ongoing cultural anxiety that comes with that — and how that filters down into the politicisation of AI tools, both as fuel for moral panics and as a means to suppress voters of colour.</p><p>Further reading:</p><ul><li><a href="https://www.idea.int/publications/catalogue/artificial-intelligence-electoral-management">Artificial Intelligence for Electoral Management | International IDEA</a></li><li><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4754903">Overcoming Racial Harms to Democracy from Artificial Intelligence by Spencer Overton :: SSRN</a></li><li><a href="https://www.technologyreview.com/2024/09/03/1103464/ai-impact-elections-overblown/">AI’s impact on elections is being overblown | MIT Technology Review</a></li><li><a href="https://www.brennancenter.org/our-work/research-reports/effects-shelby-county-v-holder-voting-rights-act">Effects of Shelby County v. Holder on the Voting Rights Act | Brennan Center for Justice</a></li></ul><p><a href="https://www.linkedin.com/in/spencer-overton-922463ba/">Spencer Overton</a> is the Patricia Roberts Harris Research Professor at GW Law School. As the Director of the Multiracial Democracy Project at the GW Equity Institute, he focuses on producing and supporting research that grapples with challenges to a well-functioning multiracial democracy. 
He is currently working on research projects related to the regulation of AI to facilitate a well-functioning multiracial democracy and the implications of alternative voting systems for multiracial democracy.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>For this pre-election special, Prathm spoke with law professor <a href="https://www.law.gwu.edu/spencer-overton">Spencer Overton</a> about how this election has — and hasn’t — been impacted by AI systems. Misinformation and deepfakes appear to be top of the agenda for a lot of politicians and commentators, but there’s a lot more to think about…</p><p>Spencer discusses the USA’s transition into a multiracial democracy, and describes the ongoing cultural anxiety that comes with that — and how that filters down into the politicisation of AI tools, both as fuel for moral panics and as a means to suppress voters of colour.</p><p>Further reading:</p><ul><li><a href="https://www.idea.int/publications/catalogue/artificial-intelligence-electoral-management">Artificial Intelligence for Electoral Management | International IDEA</a></li><li><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4754903">Overcoming Racial Harms to Democracy from Artificial Intelligence by Spencer Overton :: SSRN</a></li><li><a href="https://www.technologyreview.com/2024/09/03/1103464/ai-impact-elections-overblown/">AI’s impact on elections is being overblown | MIT Technology Review</a></li><li><a href="https://www.brennancenter.org/our-work/research-reports/effects-shelby-county-v-holder-voting-rights-act">Effects of Shelby County v. Holder on the Voting Rights Act | Brennan Center for Justice</a></li></ul><p><a href="https://www.linkedin.com/in/spencer-overton-922463ba/">Spencer Overton</a> is the Patricia Roberts Harris Research Professor at GW Law School. As the Director of the Multiracial Democracy Project at the GW Equity Institute, he focuses on producing and supporting research that grapples with challenges to a well-functioning multiracial democracy. 
He is currently working on research projects related to the regulation of AI to facilitate a well-functioning multiracial democracy and the implications of alternative voting systems for multiracial democracy.</p>]]>
      </content:encoded>
      <pubDate>Fri, 01 Nov 2024 01:01:00 -0100</pubDate>
      <author>Prathm Juneja</author>
      <enclosure url="https://media.transistor.fm/b55e5f7b/4d01f950.mp3" length="74174830" type="audio/mpeg"/>
      <itunes:author>Prathm Juneja</itunes:author>
      <itunes:duration>3088</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>For this pre-election special, Prathm spoke with law professor <a href="https://www.law.gwu.edu/spencer-overton">Spencer Overton</a> about how this election has — and hasn’t — been impacted by AI systems. Misinformation and deepfakes appear to be top of the agenda for a lot of politicians and commentators, but there’s a lot more to think about…</p><p>Spencer discusses the USA’s transition into a multiracial democracy, and describes the ongoing cultural anxiety that comes with that — and how that filters down into the politicisation of AI tools, both as fuel for moral panics and as a means to suppress voters of colour.</p><p>Further reading:</p><ul><li><a href="https://www.idea.int/publications/catalogue/artificial-intelligence-electoral-management">Artificial Intelligence for Electoral Management | International IDEA</a></li><li><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4754903">Overcoming Racial Harms to Democracy from Artificial Intelligence by Spencer Overton :: SSRN</a></li><li><a href="https://www.technologyreview.com/2024/09/03/1103464/ai-impact-elections-overblown/">AI’s impact on elections is being overblown | MIT Technology Review</a></li><li><a href="https://www.brennancenter.org/our-work/research-reports/effects-shelby-county-v-holder-voting-rights-act">Effects of Shelby County v. Holder on the Voting Rights Act | Brennan Center for Justice</a></li></ul><p><a href="https://www.linkedin.com/in/spencer-overton-922463ba/">Spencer Overton</a> is the Patricia Roberts Harris Research Professor at GW Law School. As the Director of the Multiracial Democracy Project at the GW Equity Institute, he focuses on producing and supporting research that grapples with challenges to a well-functioning multiracial democracy. 
He is currently working on research projects related to the regulation of AI to facilitate a well-functioning multiracial democracy and the implications of alternative voting systems for multiracial democracy.</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Net 0 ++: Reporting on AI’s climate injustices w/ Karen Hao</title>
      <itunes:episode>24</itunes:episode>
      <podcast:episode>24</podcast:episode>
      <itunes:title>Net 0 ++: Reporting on AI’s climate injustices w/ Karen Hao</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">23f1bf7e-98c3-4c50-82e7-03e9770b4fae</guid>
      <link>https://share.transistor.fm/s/905fbad5</link>
      <description>
        <![CDATA[<p>For our final episode in this series on the environment, Alix interviewed Karen Hao on how tough it is to report on the environmental impacts of AI.</p><p>The conversation focusses on two of Karen’s recent stories, linked below. One of the biggest barriers to consistent reporting on AI’s climate injustices is the sheer opaqueness of information about what companies are trying to do when building infrastructure and what they think the actual costs — primarily of energy use and water — will be. Tech companies that Karen has written about enter communities via shell companies and promise relatively big deals for small municipalities if they allow development of new data centres — and community members often don’t know what they’re signing up for until it’s too late.</p><p>Listen to learn about how difficult it is to report on this industry, and the tactics and methods Karen has to use to tell her stories.</p><p>Further reading:</p><ul><li><a href="https://www.theatlantic.com/technology/archive/2024/09/microsoft-ai-oil-contracts/679804/">Microsoft’s Hypocrisy on AI</a> by Karen Hao</li><li><a href="https://www.theatlantic.com/technology/archive/2024/03/ai-water-climate-microsoft/677602/">AI is Taking Water from the Desert</a> by Karen Hao</li></ul><p>Karen Hao is an American journalist who writes for publications like The Atlantic. She was previously a foreign correspondent based in Hong Kong for The Wall Street Journal and a senior artificial intelligence editor at the MIT Technology Review. She is best known for her coverage of AI research, technology ethics, and the social impact of AI.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>For our final episode in this series on the environment, Alix interviewed Karen Hao on how tough it is to report on the environmental impacts of AI.</p><p>The conversation focusses on two of Karen’s recent stories, linked below. One of the biggest barriers to consistent reporting on AI’s climate injustices is the sheer opaqueness of information about what companies are trying to do when building infrastructure and what they think the actual costs — primarily of energy use and water — will be. Tech companies that Karen has written about enter communities via shell companies and promise relatively big deals for small municipalities if they allow development of new data centres — and community members often don’t know what they’re signing up for until it’s too late.</p><p>Listen to learn about how difficult it is to report on this industry, and the tactics and methods Karen has to use to tell her stories.</p><p>Further reading:</p><ul><li><a href="https://www.theatlantic.com/technology/archive/2024/09/microsoft-ai-oil-contracts/679804/">Microsoft’s Hypocrisy on AI</a> by Karen Hao</li><li><a href="https://www.theatlantic.com/technology/archive/2024/03/ai-water-climate-microsoft/677602/">AI is Taking Water from the Desert</a> by Karen Hao</li></ul><p>Karen Hao is an American journalist who writes for publications like The Atlantic. She was previously a foreign correspondent based in Hong Kong for The Wall Street Journal and a senior artificial intelligence editor at the MIT Technology Review. She is best known for her coverage of AI research, technology ethics, and the social impact of AI.</p>]]>
      </content:encoded>
      <pubDate>Fri, 01 Nov 2024 01:00:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/905fbad5/51a071a1.mp3" length="38697276" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>1610</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>For our final episode in this series on the environment, Alix interviewed Karen Hao on how tough it is to report on the environmental impacts of AI.</p><p>The conversation focusses on two of Karen’s recent stories, linked below. One of the biggest barriers to consistent reporting on AI’s climate injustices is the sheer opaqueness of information about what companies are trying to do when building infrastructure and what they think the actual costs — primarily of energy use and water — will be. Tech companies that Karen has written about enter communities via shell companies and promise relatively big deals for small municipalities if they allow development of new data centres — and community members often don’t know what they’re signing up for until it’s too late.</p><p>Listen to learn about how difficult it is to report on this industry, and the tactics and methods Karen has to use to tell her stories.</p><p>Further reading:</p><ul><li><a href="https://www.theatlantic.com/technology/archive/2024/09/microsoft-ai-oil-contracts/679804/">Microsoft’s Hypocrisy on AI</a> by Karen Hao</li><li><a href="https://www.theatlantic.com/technology/archive/2024/03/ai-water-climate-microsoft/677602/">AI is Taking Water from the Desert</a> by Karen Hao</li></ul><p>Karen Hao is an American journalist who writes for publications like The Atlantic. She was previously a foreign correspondent based in Hong Kong for The Wall Street Journal and a senior artificial intelligence editor at the MIT Technology Review. She is best known for her coverage of AI research, technology ethics, and the social impact of AI.</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Net 0++: Concrete arguments for AI</title>
      <itunes:episode>23</itunes:episode>
      <podcast:episode>23</podcast:episode>
      <itunes:title>Net 0++: Concrete arguments for AI</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c0179402-fabf-450f-9a8d-0f27dee13e32</guid>
      <link>https://share.transistor.fm/s/bc50b729</link>
      <description>
        <![CDATA[<p>In our third episode about AI &amp; the environment, Alix interviewed Sherif Elsayed-Ali, who’s been working on using AI to reduce the carbon emissions of concrete. Yes, that’s right — concrete.</p><p>This may seem like a very niche focus for a green initiative, but it isn’t; concrete is the second most used substance in the world because it’s integral to modern infrastructure, and there’s no other material like it. It’s also one of the biggest carbon emitters in the world.</p><p>In this episode Sherif explains how AI and machine learning can make the process of concrete production more precise and efficient so that it burns much less fuel. Listen to learn about the big picture of global carbon emissions, and how AI can be used to actually reduce carbon output, rather than just monitor it — or add to it!</p><p><a href="https://www.linkedin.com/in/sherifea/?originalSubdomain=uk">Sherif Elsayed-Ali</a> trained as a civil engineer, then studied international human rights law and public policy and administration. He worked with the UN and in the non-profit sector on humanitarian and human rights research and policy, before embarking on a career in tech and climate.</p><p>Sherif founded Amnesty Tech, a group at the forefront of technology and human rights. He then joined Element AI (today ServiceNow Research), starting and leading its AI for Climate work. In 2020, he co-founded and became CEO of Carbon Re, an industrial AI company spun out of Cambridge University and UCL, developing novel solutions for decarbonising cement. He then co-founded Nexus Climate, a company providing climate tech advisory services and supporting the startup ecosystem.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In our third episode about AI &amp; the environment, Alix interviewed Sherif Elsayed-Ali, who’s been working on using AI to reduce the carbon emissions of concrete. Yes, that’s right — concrete.</p><p>This may seem like a very niche focus for a green initiative, but it isn’t; concrete is the second most used substance in the world because it’s integral to modern infrastructure, and there’s no other material like it. It’s also one of the biggest carbon emitters in the world.</p><p>In this episode Sherif explains how AI and machine learning can make the process of concrete production more precise and efficient so that it burns much less fuel. Listen to learn about the big picture of global carbon emissions, and how AI can be used to actually reduce carbon output, rather than just monitor it — or add to it!</p><p><a href="https://www.linkedin.com/in/sherifea/?originalSubdomain=uk">Sherif Elsayed-Ali</a> trained as a civil engineer, then studied international human rights law and public policy and administration. He worked with the UN and in the non-profit sector on humanitarian and human rights research and policy, before embarking on a career in tech and climate.</p><p>Sherif founded Amnesty Tech, a group at the forefront of technology and human rights. He then joined Element AI (today ServiceNow Research), starting and leading its AI for Climate work. In 2020, he co-founded and became CEO of Carbon Re, an industrial AI company spun out of Cambridge University and UCL, developing novel solutions for decarbonising cement. He then co-founded Nexus Climate, a company providing climate tech advisory services and supporting the startup ecosystem.</p>]]>
      </content:encoded>
      <pubDate>Fri, 25 Oct 2024 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/bc50b729/8b7972f2.mp3" length="47873938" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>1992</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In our third episode about AI &amp; the environment, Alix interviewed Sherif Elsayed-Ali, who’s been working on using AI to reduce the carbon emissions of concrete. Yes, that’s right — concrete.</p><p>This may seem like a niche focus for a green initiative, but it isn’t: concrete is the second most used substance in the world because it’s integral to modern infrastructure, and there’s no other material like it. It’s also one of the biggest carbon emitters in the world.</p><p>In this episode Sherif explains how AI and machine learning can make concrete production more precise and efficient so that it burns much less fuel. Listen to learn about the big picture of global carbon emissions, and how AI can actually be used to reduce carbon output, rather than just monitor it — or add to it!</p><p><a href="https://www.linkedin.com/in/sherifea/?originalSubdomain=uk">Sherif Elsayed-Ali</a> trained as a civil engineer, then studied international human rights law and public policy and administration. He worked with the UN and in the non-profit sector on humanitarian and human rights research and policy, before embarking on a career in tech and climate.</p><p>Sherif founded Amnesty Tech, a group at the forefront of technology and human rights. He then joined Element AI (today ServiceNow Research), starting and leading its AI for Climate work. In 2020, he co-founded and became CEO of Carbon Re, an industrial AI company spun out of Cambridge University and UCL, developing novel solutions for decarbonising cement. He then co-founded Nexus Climate, a company providing climate tech advisory services and supporting the startup ecosystem.</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Net 0++: Big Dirty Data Centres</title>
      <itunes:episode>22</itunes:episode>
      <podcast:episode>22</podcast:episode>
      <itunes:title>Net 0++: Big Dirty Data Centres</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">6fe0d5ad-85af-4448-95ea-00c2d885e5dc</guid>
      <link>https://share.transistor.fm/s/e46a1256</link>
      <description>
        <![CDATA[<p>This week we are continuing our AI &amp; Environment series with an episode about a key piece of AI infrastructure: data centres. With us this week are Boxi Wu and Jenna Ruddock to explain how data centres are a gruesomely sharp double-edged sword.</p><p>They contribute to huge amounts of environmental degradation via local water and energy consumption, and impact the health of surrounding communities with incessant noise pollution. Data centres are also used as a political springboard for global leaders, where the expansion of AI infrastructure is seen as being synonymous with progress and economic growth.</p><p>Boxi and Jenna talk us through the various community concerns that come with data centre development, and the kind of pushback we’re seeing in the UK and the US right now.</p><p><a href="https://www.oii.ox.ac.uk/people/profiles/boxi-wu/">Boxi Wu</a> is a DPhil researcher at the Oxford Internet Institute and a Research Policy Consultant with the OECD’s AI Policy Observatory. Their research focuses on the politics of AI infrastructure within the context of increasing global inequality and the current climate crisis. Prior to returning to academia, Boxi worked in AI ethics, technology consulting and policy research. Most recently, they worked in AI Ethics &amp; Safety at Google DeepMind where they specialised in the ethics of LLMs and led the responsible release of frontier AI models including the initially released Gemini models.</p><p>Jenna Ruddock is a researcher and advocate working at the intersections of law, technology, media, and environmental justice. Currently, she is policy counsel at Free Press, where she focuses on digital civil rights, surveillance, privacy, and media infrastructures. 
She has been a visiting fellow at the University of Amsterdam's critical infrastructure lab (<a href="http://criticalinfralab.net/">criticalinfralab.net</a>), a postdoctoral fellow with the Technology &amp; Social Change project at the Harvard Kennedy School's Shorenstein Center, and a senior researcher with the Tech, Law &amp; Security Program at American University Washington College of Law. Jenna is also a documentary photographer and producer with a background in community media and factual streaming.</p><p><strong>Further reading</strong></p><ul><li><a href="https://itforchange.net/sites/default/files/add/Governing%20Computational%20Infrastructure%20for%20Strong%20and%20Just%20AI%20Economies.pdf">Governing Computational Infrastructure for Strong and Just AI Economies</a>, co-authored by Boxi Wu</li><li><a href="https://emmlab.info/Resources_page/Data%20Center%20Fights_digital.pdf">Getting into Fights with Data Centres</a> by Anne Pasek</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week we are continuing our AI &amp; Environment series with an episode about a key piece of AI infrastructure: data centres. With us this week are Boxi Wu and Jenna Ruddock to explain how data centres are a gruesomely sharp double-edged sword.</p><p>They contribute to huge amounts of environmental degradation via local water and energy consumption, and impact the health of surrounding communities with incessant noise pollution. Data centres are also used as a political springboard for global leaders, where the expansion of AI infrastructure is seen as being synonymous with progress and economic growth.</p><p>Boxi and Jenna talk us through the various community concerns that come with data centre development, and the kind of pushback we’re seeing in the UK and the US right now.</p><p><a href="https://www.oii.ox.ac.uk/people/profiles/boxi-wu/">Boxi Wu</a> is a DPhil researcher at the Oxford Internet Institute and a Research Policy Consultant with the OECD’s AI Policy Observatory. Their research focuses on the politics of AI infrastructure within the context of increasing global inequality and the current climate crisis. Prior to returning to academia, Boxi worked in AI ethics, technology consulting and policy research. Most recently, they worked in AI Ethics &amp; Safety at Google DeepMind where they specialised in the ethics of LLMs and led the responsible release of frontier AI models including the initially released Gemini models.</p><p>Jenna Ruddock is a researcher and advocate working at the intersections of law, technology, media, and environmental justice. Currently, she is policy counsel at Free Press, where she focuses on digital civil rights, surveillance, privacy, and media infrastructures. 
She has been a visiting fellow at the University of Amsterdam's critical infrastructure lab (<a href="http://criticalinfralab.net/">criticalinfralab.net</a>), a postdoctoral fellow with the Technology &amp; Social Change project at the Harvard Kennedy School's Shorenstein Center, and a senior researcher with the Tech, Law &amp; Security Program at American University Washington College of Law. Jenna is also a documentary photographer and producer with a background in community media and factual streaming.</p><p><strong>Further reading</strong></p><ul><li><a href="https://itforchange.net/sites/default/files/add/Governing%20Computational%20Infrastructure%20for%20Strong%20and%20Just%20AI%20Economies.pdf">Governing Computational Infrastructure for Strong and Just AI Economies</a>, co-authored by Boxi Wu</li><li><a href="https://emmlab.info/Resources_page/Data%20Center%20Fights_digital.pdf">Getting into Fights with Data Centres</a> by Anne Pasek</li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 18 Oct 2024 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/e46a1256/b81a17a0.mp3" length="45409811" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>1890</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This week we are continuing our AI &amp; Environment series with an episode about a key piece of AI infrastructure: data centres. With us this week are Boxi Wu and Jenna Ruddock to explain how data centres are a gruesomely sharp double-edged sword.</p><p>They contribute to huge amounts of environmental degradation via local water and energy consumption, and impact the health of surrounding communities with incessant noise pollution. Data centres are also used as a political springboard for global leaders, where the expansion of AI infrastructure is seen as being synonymous with progress and economic growth.</p><p>Boxi and Jenna talk us through the various community concerns that come with data centre development, and the kind of pushback we’re seeing in the UK and the US right now.</p><p><a href="https://www.oii.ox.ac.uk/people/profiles/boxi-wu/">Boxi Wu</a> is a DPhil researcher at the Oxford Internet Institute and a Research Policy Consultant with the OECD’s AI Policy Observatory. Their research focuses on the politics of AI infrastructure within the context of increasing global inequality and the current climate crisis. Prior to returning to academia, Boxi worked in AI ethics, technology consulting and policy research. Most recently, they worked in AI Ethics &amp; Safety at Google DeepMind where they specialised in the ethics of LLMs and led the responsible release of frontier AI models including the initially released Gemini models.</p><p>Jenna Ruddock is a researcher and advocate working at the intersections of law, technology, media, and environmental justice. Currently, she is policy counsel at Free Press, where she focuses on digital civil rights, surveillance, privacy, and media infrastructures. 
She has been a visiting fellow at the University of Amsterdam's critical infrastructure lab (<a href="http://criticalinfralab.net/">criticalinfralab.net</a>), a postdoctoral fellow with the Technology &amp; Social Change project at the Harvard Kennedy School's Shorenstein Center, and a senior researcher with the Tech, Law &amp; Security Program at American University Washington College of Law. Jenna is also a documentary photographer and producer with a background in community media and factual streaming.</p><p><strong>Further reading</strong></p><ul><li><a href="https://itforchange.net/sites/default/files/add/Governing%20Computational%20Infrastructure%20for%20Strong%20and%20Just%20AI%20Economies.pdf">Governing Computational Infrastructure for Strong and Just AI Economies</a>, co-authored by Boxi Wu</li><li><a href="https://emmlab.info/Resources_page/Data%20Center%20Fights_digital.pdf">Getting into Fights with Data Centres</a> by Anne Pasek</li></ul>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Net 0++: Microsoft’s greenwashing w/ Holly Alpine</title>
      <itunes:episode>21</itunes:episode>
      <podcast:episode>21</podcast:episode>
      <itunes:title>Net 0++: Microsoft’s greenwashing w/ Holly Alpine</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f7144485-9b11-4b6f-86ea-af9a74801c47</guid>
      <link>https://share.transistor.fm/s/60cc86fc</link>
      <description>
        <![CDATA[<p>This week we’re kicking off a series about AI &amp; the environment. We’re starting with Holly Alpine, who recently left Microsoft after starting and growing an internal sustainability programme over a decade.</p><p>Holly’s goal was pretty simple: she wanted Microsoft to honour the sustainability commitments they had set for themselves. The internal support she had fostered for sustainability initiatives did not match up with Microsoft’s actions — they continued to work with fossil fuel companies even though doing so was at odds with their plans to achieve net 0.</p><p>Listen to learn about what it’s like approaching this kind of huge systemic challenge with good faith, and trying to make change happen from the inside.</p><p><a href="https://www.linkedin.com/in/hollyalpine">Holly Alpine</a> is a dedicated leader in sustainability and environmental advocacy, having spent over a decade at Microsoft pioneering and leading multiple global initiatives. As the founder and head of Microsoft's Community Environmental Sustainability program, Holly directed substantial investments into community-based, nature-driven solutions, impacting over 45 communities in Microsoft’s global datacenter footprint, with measurable improvements to ecosystem health, social equity, and human well-being.</p><p>Currently, Holly continues her environmental leadership as a Board member of both American Forests and Zero Waste Washington, while staying active in outdoor sports as a plant-based athlete who enjoys rock climbing, mountain biking, ski mountaineering, and running mountain ultramarathons.</p><p><strong>Further Reading:</strong></p><p><a href="https://www.theatlantic.com/technology/archive/2024/09/microsoft-ai-oil-contracts/679804/">Microsoft’s Hypocrisy on AI</a></p><p><a href="https://www.npr.org/programs/ted-radio-hour/1250016713/our-tech-has-a-climate-problem-how-we-solve-it">Our tech has a climate problem: How we solve it</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week we’re kicking off a series about AI &amp; the environment. We’re starting with Holly Alpine, who recently left Microsoft after starting and growing an internal sustainability programme over a decade.</p><p>Holly’s goal was pretty simple: she wanted Microsoft to honour the sustainability commitments they had set for themselves. The internal support she had fostered for sustainability initiatives did not match up with Microsoft’s actions — they continued to work with fossil fuel companies even though doing so was at odds with their plans to achieve net 0.</p><p>Listen to learn about what it’s like approaching this kind of huge systemic challenge with good faith, and trying to make change happen from the inside.</p><p><a href="https://www.linkedin.com/in/hollyalpine">Holly Alpine</a> is a dedicated leader in sustainability and environmental advocacy, having spent over a decade at Microsoft pioneering and leading multiple global initiatives. As the founder and head of Microsoft's Community Environmental Sustainability program, Holly directed substantial investments into community-based, nature-driven solutions, impacting over 45 communities in Microsoft’s global datacenter footprint, with measurable improvements to ecosystem health, social equity, and human well-being.</p><p>Currently, Holly continues her environmental leadership as a Board member of both American Forests and Zero Waste Washington, while staying active in outdoor sports as a plant-based athlete who enjoys rock climbing, mountain biking, ski mountaineering, and running mountain ultramarathons.</p><p><strong>Further Reading:</strong></p><p><a href="https://www.theatlantic.com/technology/archive/2024/09/microsoft-ai-oil-contracts/679804/">Microsoft’s Hypocrisy on AI</a></p><p><a href="https://www.npr.org/programs/ted-radio-hour/1250016713/our-tech-has-a-climate-problem-how-we-solve-it">Our tech has a climate problem: How we solve it</a></p>]]>
      </content:encoded>
      <pubDate>Fri, 11 Oct 2024 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/60cc86fc/6c348989.mp3" length="60694056" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2527</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This week we’re kicking off a series about AI &amp; the environment. We’re starting with Holly Alpine, who recently left Microsoft after starting and growing an internal sustainability programme over a decade.</p><p>Holly’s goal was pretty simple: she wanted Microsoft to honour the sustainability commitments they had set for themselves. The internal support she had fostered for sustainability initiatives did not match up with Microsoft’s actions — they continued to work with fossil fuel companies even though doing so was at odds with their plans to achieve net 0.</p><p>Listen to learn about what it’s like approaching this kind of huge systemic challenge with good faith, and trying to make change happen from the inside.</p><p><a href="https://www.linkedin.com/in/hollyalpine">Holly Alpine</a> is a dedicated leader in sustainability and environmental advocacy, having spent over a decade at Microsoft pioneering and leading multiple global initiatives. As the founder and head of Microsoft's Community Environmental Sustainability program, Holly directed substantial investments into community-based, nature-driven solutions, impacting over 45 communities in Microsoft’s global datacenter footprint, with measurable improvements to ecosystem health, social equity, and human well-being.</p><p>Currently, Holly continues her environmental leadership as a Board member of both American Forests and Zero Waste Washington, while staying active in outdoor sports as a plant-based athlete who enjoys rock climbing, mountain biking, ski mountaineering, and running mountain ultramarathons.</p><p><strong>Further Reading:</strong></p><p><a href="https://www.theatlantic.com/technology/archive/2024/09/microsoft-ai-oil-contracts/679804/">Microsoft’s Hypocrisy on AI</a></p><p><a href="https://www.npr.org/programs/ted-radio-hour/1250016713/our-tech-has-a-climate-problem-how-we-solve-it">Our tech has a climate problem: How we solve it</a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Chasing Away Sidewalk Labs w/ Bianca Wylie</title>
      <itunes:episode>20</itunes:episode>
      <podcast:episode>20</podcast:episode>
      <itunes:title>Chasing Away Sidewalk Labs w/ Bianca Wylie</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">775c5fce-c025-42fc-a7c9-9ae154be1853</guid>
      <link>https://share.transistor.fm/s/3694d739</link>
      <description>
        <![CDATA[<p>In 2017 Google’s urban planning arm Sidewalk Labs came into Toronto and said “we’re going to turn this into a smart city”.</p><p>Our guest Bianca Wylie was one of the people who stood up and said “okay but… who asked for this?”</p><p>This is a story about how a large tech firm came into a community with big promises, and then left with its tail between its legs. In the episode Alix and Bianca discuss the complexities of government procurement of tech, and how attractive corporate solutions look when you’re so riddled with austerity.</p><p><a href="https://biancawylie.com">Bianca Wylie</a> is a writer with a dual background in technology and public engagement.  She is a partner at Digital Public and a co-founder of Tech Reset Canada. She worked for several years in the tech sector in operations, infrastructure, corporate training, and product management. Then, as a professional facilitator, she spent several years co-designing, delivering and supporting public consultation processes for various governments and government agencies. She founded the Open Data Institute Toronto in 2014 and co-founded Civic Tech Toronto in 2015.</p><p><strong>Further Reading:</strong></p><p><a href="https://datasociety.net/wp-content/uploads/2024/04/Keywords_Counterpublics_Bui_Wylie_04242024.pdf">A Counterpublic Analysis of Sidewalk Toronto</a></p><p><a href="https://biancawylie.medium.com/">Bianca Wylie on Medium</a></p><p><a href="https://www.bostonreview.net/articles/bianca-wylie-sidewalk-labs-toronto/">In Toronto, Google’s Attempt to Privatize Government Fails—For Now</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In 2017 Google’s urban planning arm Sidewalk Labs came into Toronto and said “we’re going to turn this into a smart city”.</p><p>Our guest Bianca Wylie was one of the people who stood up and said “okay but… who asked for this?”</p><p>This is a story about how a large tech firm came into a community with big promises, and then left with its tail between its legs. In the episode Alix and Bianca discuss the complexities of government procurement of tech, and how attractive corporate solutions look when you’re so riddled with austerity.</p><p><a href="https://biancawylie.com">Bianca Wylie</a> is a writer with a dual background in technology and public engagement.  She is a partner at Digital Public and a co-founder of Tech Reset Canada. She worked for several years in the tech sector in operations, infrastructure, corporate training, and product management. Then, as a professional facilitator, she spent several years co-designing, delivering and supporting public consultation processes for various governments and government agencies. She founded the Open Data Institute Toronto in 2014 and co-founded Civic Tech Toronto in 2015.</p><p><strong>Further Reading:</strong></p><p><a href="https://datasociety.net/wp-content/uploads/2024/04/Keywords_Counterpublics_Bui_Wylie_04242024.pdf">A Counterpublic Analysis of Sidewalk Toronto</a></p><p><a href="https://biancawylie.medium.com/">Bianca Wylie on Medium</a></p><p><a href="https://www.bostonreview.net/articles/bianca-wylie-sidewalk-labs-toronto/">In Toronto, Google’s Attempt to Privatize Government Fails—For Now</a></p>]]>
      </content:encoded>
      <pubDate>Fri, 04 Oct 2024 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/3694d739/5b5cc0fb.mp3" length="71551026" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2979</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In 2017 Google’s urban planning arm Sidewalk Labs came into Toronto and said “we’re going to turn this into a smart city”.</p><p>Our guest Bianca Wylie was one of the people who stood up and said “okay but… who asked for this?”</p><p>This is a story about how a large tech firm came into a community with big promises, and then left with its tail between its legs. In the episode Alix and Bianca discuss the complexities of government procurement of tech, and how attractive corporate solutions look when you’re so riddled with austerity.</p><p><a href="https://biancawylie.com">Bianca Wylie</a> is a writer with a dual background in technology and public engagement.  She is a partner at Digital Public and a co-founder of Tech Reset Canada. She worked for several years in the tech sector in operations, infrastructure, corporate training, and product management. Then, as a professional facilitator, she spent several years co-designing, delivering and supporting public consultation processes for various governments and government agencies. She founded the Open Data Institute Toronto in 2014 and co-founded Civic Tech Toronto in 2015.</p><p><strong>Further Reading:</strong></p><p><a href="https://datasociety.net/wp-content/uploads/2024/04/Keywords_Counterpublics_Bui_Wylie_04242024.pdf">A Counterpublic Analysis of Sidewalk Toronto</a></p><p><a href="https://biancawylie.medium.com/">Bianca Wylie on Medium</a></p><p><a href="https://www.bostonreview.net/articles/bianca-wylie-sidewalk-labs-toronto/">In Toronto, Google’s Attempt to Privatize Government Fails—For Now</a></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Will Newsom Veto the AI Safety Bill? w/ Teri Olle</title>
      <itunes:episode>19</itunes:episode>
      <podcast:episode>19</podcast:episode>
      <itunes:title>Will Newsom Veto the AI Safety Bill? w/ Teri Olle</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">751a34f0-e305-44fe-8325-f26d5045b0a1</guid>
      <link>https://share.transistor.fm/s/98ca6ce6</link>
      <description>
        <![CDATA[<p>What if we could have a public library for compute? But is… more compute really what we want right now?</p><p>This week Alix interviewed Teri Olle from the Economic Security Project, a co-sponsor of the California AI safety bill (SB 1047). The bill has been making the rounds in the news because it would force AI companies to do safety checks on their models before releasing them to the public — which is seen as, uh, ‘controversial’ to those in the innovation space.</p><p>But Teri had a hand in a lesser-known part of the bill: the construction of CalCompute, a state-owned public cloud cluster for resource-intensive AI development. This would mean public access to the compute power needed to train state-of-the-art AI models — finally giving researchers and plucky start-ups access to something otherwise locked inside a corporate walled garden.</p><p>Teri Olle is the California Campaign Director for Economic Security Project Action. Beginning her career as an attorney, Teri soon moved into policy and issue advocacy, working on state and local efforts to ban toxic chemicals and pesticides, decrease food insecurity and hunger, and increase gender representation in politics. She is a founding member of a political action committee dedicated to inserting parent voice into local politics and served as the president of the board of Emerge California. She lives in San Francisco with her husband and two daughters.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>What if we could have a public library for compute? But is… more compute really what we want right now?</p><p>This week Alix interviewed Teri Olle from the Economic Security Project, a co-sponsor of the California AI safety bill (SB 1047). The bill has been making the rounds in the news because it would force AI companies to do safety checks on their models before releasing them to the public — which is seen as, uh, ‘controversial’ to those in the innovation space.</p><p>But Teri had a hand in a lesser-known part of the bill: the construction of CalCompute, a state-owned public cloud cluster for resource-intensive AI development. This would mean public access to the compute power needed to train state-of-the-art AI models — finally giving researchers and plucky start-ups access to something otherwise locked inside a corporate walled garden.</p><p>Teri Olle is the California Campaign Director for Economic Security Project Action. Beginning her career as an attorney, Teri soon moved into policy and issue advocacy, working on state and local efforts to ban toxic chemicals and pesticides, decrease food insecurity and hunger, and increase gender representation in politics. She is a founding member of a political action committee dedicated to inserting parent voice into local politics and served as the president of the board of Emerge California. She lives in San Francisco with her husband and two daughters.</p>]]>
      </content:encoded>
      <pubDate>Fri, 27 Sep 2024 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/98ca6ce6/a88d0874.mp3" length="35610937" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>1482</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>What if we could have a public library for compute? But is… more compute really what we want right now?</p><p>This week Alix interviewed Teri Olle from the Economic Security Project, a co-sponsor of the California AI safety bill (SB 1047). The bill has been making the rounds in the news because it would force AI companies to do safety checks on their models before releasing them to the public — which is seen as, uh, ‘controversial’ to those in the innovation space.</p><p>But Teri had a hand in a lesser-known part of the bill: the construction of CalCompute, a state-owned public cloud cluster for resource-intensive AI development. This would mean public access to the compute power needed to train state-of-the-art AI models — finally giving researchers and plucky start-ups access to something otherwise locked inside a corporate walled garden.</p><p>Teri Olle is the California Campaign Director for Economic Security Project Action. Beginning her career as an attorney, Teri soon moved into policy and issue advocacy, working on state and local efforts to ban toxic chemicals and pesticides, decrease food insecurity and hunger, and increase gender representation in politics. She is a founding member of a political action committee dedicated to inserting parent voice into local politics and served as the president of the board of Emerge California. She lives in San Francisco with her husband and two daughters.</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>The stories we tell ourselves about AI</title>
      <itunes:episode>18</itunes:episode>
      <podcast:episode>18</podcast:episode>
      <itunes:title>The stories we tell ourselves about AI</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">e62b6623-0b0d-4cb9-a5be-8a2331e017e7</guid>
      <link>https://share.transistor.fm/s/7d22deb9</link>
      <description>
        <![CDATA[<p><strong><em>Applications for our second cohort of </em></strong><a href="https://www.saysmaybe.com/media-mastery"><strong><em>Media Mastery for New AI Protagonists</em></strong></a><strong><em> are now open! </em></strong><em>Join this 5-week program to level up your media impact alongside a dynamic community of emerging experts in AI politics and power—at no cost to you. In this episode, we chat with Daniel Stone, a participant from our first cohort, about his work. </em><a href="https://www.saysmaybe.com/media-mastery"><strong><em>Apply</em></strong></a><strong><em> by Sunday, September 29th!</em></strong></p><p><br>The adoption of new technologies is driven by stories. A story is a shortcut to understanding something complex. Narratives can lock us into a set of options that are…terrible. The kicker is that narratives are hard to detect and even harder to influence.</p><p>But how reliable are our narrators? And how can we use story as strategy?</p><p>The good news is that experts are working to unravel the narratives around AI. All so that folks with public interest in mind can change the game.</p><p>This week Alix sat down with three researchers looking at three AI narrative questions. She spoke to Hanna Barakat about how the New York Times reports on AI; Jonathan Tanner, who scraped and analysed huge amounts of YouTube videos to find narrative patterns; and Daniel Stone, who studied and deconstructed metaphors that power collective understanding of AI.</p><p>In this ep we ask:</p><ul><li>What are the stories we tell ourselves about AI? And why do we let industry pick them?</li><li>How do these narratives change what is politically possible?</li><li>What can public interest organisations and advocates do to change the narrative game?</li></ul><p><a href="https://www.hbarakat.com/">Hanna Barakat</a> is a research analyst for Computer Says Maybe, working at the intersection of emerging technologies and complex systems design. 
She graduated from Brown University in 2022 with honors in International Development Studies and a focus in Digital Media Studies.</p><p><a href="https://x.com/tannerjc">Jonathan Tanner</a> founded Rootcause after more than fifteen years working in senior communications roles for high-profile politicians, CEOs, philanthropists and public thinkers across the world. In this time he has worked across more than a dozen countries running diverse teams whilst writing keynote speeches, securing front page headlines, delivering world-first social media moments and helping to secure meaningful changes to public policy.</p><p><a href="https://uk.linkedin.com/in/danielcstone?original_referer=https%3A%2F%2Fwww.google.com%2F">Daniel Stone</a> is currently undertaking research with Cambridge University’s Centre for Future Intelligence and is the Executive Director of Diffusion.Au. He is a Policy Fellow with the Chifley Research Centre and a Policy Associate at the Centre for Responsible Technology Australia.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><strong><em>Applications for our second cohort of </em></strong><a href="https://www.saysmaybe.com/media-mastery"><strong><em>Media Mastery for New AI Protagonists</em></strong></a><strong><em> are now open! </em></strong><em>Join this 5-week program to level up your media impact alongside a dynamic community of emerging experts in AI politics and power—at no cost to you. In this episode, we chat with Daniel Stone, a participant from our first cohort, about his work. </em><a href="https://www.saysmaybe.com/media-mastery"><strong><em>Apply</em></strong></a><strong><em> by Sunday, September 29th!</em></strong></p><p><br>The adoption of new technologies is driven by stories. A story is a shortcut to understanding something complex. Narratives can lock us into a set of options that are…terrible. The kicker is that narratives are hard to detect and even harder to influence.</p><p>But how reliable are our narrators? And how can we use story as strategy?</p><p>The good news is that experts are working to unravel the narratives around AI, so that folks with the public interest in mind can change the game.</p><p>This week Alix sat down with three researchers looking at three AI narrative questions. She spoke to Hanna Barakat about how the New York Times reports on AI; Jonathan Tanner, who scraped and analysed huge numbers of YouTube videos to find narrative patterns; and Daniel Stone, who studied and deconstructed metaphors that power collective understanding of AI.</p><p>In this ep we ask:</p><ul><li>What are the stories we tell ourselves about AI? And why do we let industry pick them?</li><li>How do these narratives change what is politically possible?</li><li>What can public interest organisations and advocates do to change the narrative game?</li></ul><p><a href="https://www.hbarakat.com/">Hanna Barakat</a> is a research analyst for Computer Says Maybe, working at the intersection of emerging technologies and complex systems design. 
She graduated from Brown University in 2022 with honors in International Development Studies and a focus in Digital Media Studies.</p><p><a href="https://x.com/tannerjc">Jonathan Tanner</a> founded Rootcause after more than fifteen years working in senior communications roles for high-profile politicians, CEOs, philanthropists and public thinkers across the world. In this time he has worked across more than a dozen countries running diverse teams whilst writing keynote speeches, securing front page headlines, delivering world-first social media moments and helping to secure meaningful changes to public policy.</p><p><a href="https://uk.linkedin.com/in/danielcstone?original_referer=https%3A%2F%2Fwww.google.com%2F">Daniel Stone</a> is currently undertaking research with Cambridge University’s Centre for Future Intelligence and is the Executive Director of Diffusion.Au. He is a Policy Fellow with the Chifley Research Centre and a Policy Associate at the Centre for Responsible Technology Australia.</p>]]>
      </content:encoded>
      <pubDate>Fri, 20 Sep 2024 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/7d22deb9/69af54cf.mp3" length="54115653" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2253</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><strong><em>Applications for our second cohort of </em></strong><a href="https://www.saysmaybe.com/media-mastery"><strong><em>Media Mastery for New AI Protagonists</em></strong></a><strong><em> are now open! </em></strong><em>Join this 5-week program to level up your media impact alongside a dynamic community of emerging experts in AI politics and power—at no cost to you. In this episode, we chat with Daniel Stone, a participant from our first cohort, about his work. </em><a href="https://www.saysmaybe.com/media-mastery"><strong><em>Apply</em></strong></a><strong><em> by Sunday, September 29th!</em></strong></p><p><br>The adoption of new technologies is driven by stories. A story is a shortcut to understanding something complex. Narratives can lock us into a set of options that are…terrible. The kicker is that narratives are hard to detect and even harder to influence.</p><p>But how reliable are our narrators? And how can we use story as strategy?</p><p>The good news is that experts are working to unravel the narratives around AI, so that folks with the public interest in mind can change the game.</p><p>This week Alix sat down with three researchers looking at three AI narrative questions. She spoke to Hanna Barakat about how the New York Times reports on AI; Jonathan Tanner, who scraped and analysed huge numbers of YouTube videos to find narrative patterns; and Daniel Stone, who studied and deconstructed metaphors that power collective understanding of AI.</p><p>In this ep we ask:</p><ul><li>What are the stories we tell ourselves about AI? And why do we let industry pick them?</li><li>How do these narratives change what is politically possible?</li><li>What can public interest organisations and advocates do to change the narrative game?</li></ul><p><a href="https://www.hbarakat.com/">Hanna Barakat</a> is a research analyst for Computer Says Maybe, working at the intersection of emerging technologies and complex systems design. 
She graduated from Brown University in 2022 with honors in International Development Studies and a focus in Digital Media Studies.</p><p><a href="https://x.com/tannerjc">Jonathan Tanner</a> founded Rootcause after more than fifteen years working in senior communications roles for high-profile politicians, CEOs, philanthropists and public thinkers across the world. In this time he has worked across more than a dozen countries running diverse teams whilst writing keynote speeches, securing front page headlines, delivering world-first social media moments and helping to secure meaningful changes to public policy.</p><p><a href="https://uk.linkedin.com/in/danielcstone?original_referer=https%3A%2F%2Fwww.google.com%2F">Daniel Stone</a> is currently undertaking research with Cambridge University’s Centre for Future Intelligence and is the Executive Director of Diffusion.Au. He is a Policy Fellow with the Chifley Research Centre and a Policy Associate at the Centre for Responsible Technology Australia.</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Bridging The Divide w/ Issie Lapowsky</title>
      <itunes:episode>16</itunes:episode>
      <podcast:episode>16</podcast:episode>
      <itunes:title>Bridging The Divide w/ Issie Lapowsky</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4a5761ff-bbbb-46ae-9e5e-f001d016f46b</guid>
      <link>https://share.transistor.fm/s/6ebdc7c4</link>
      <description>
        <![CDATA[<p>There are oceans of research papers digging into the various harms of online platforms. Researchers are asking urgent questions, such as how hate speech and misinformation affect our information environment and our democracy.</p><p>But how does this research find its way to the media, policymakers, advocacy groups, or even tech companies themselves?</p><p>To help us answer this, Alix is joined this week by Issie Lapowsky, who recently authored <em>Bridging The Divide: Translating Research on Digital Media into Policy and Practice</em> — a report about how research reaches these four groups, and what they do with it. This episode also features John Sands from Knight Foundation, who commissioned this report.</p><p>Further reading:</p><ul><li><a href="https://knightfoundation.org/features/bridging-the-divide-translating-research-on-digital-media-into-policy-and-practice/">Bridging The Divide by Issie Lapowsky</a></li><li><a href="https://knightfoundation.org/">Knight Foundation</a></li></ul><p><a href="https://www.linkedin.com/in/issielapowsky">Issie Lapowsky</a> is a journalist covering the intersection between tech, politics and national affairs. She has been published in WIRED, Protocol, The New York Times, and Fast Company.</p><p><a href="https://knightfoundation.org/employee/john-sands/">John Sands</a> is Senior Director of Media and Democracy at Knight Foundation. Since joining Knight Foundation in 2019, he has led more than $100 million in grant making to support independent scholarship and policy research on information and technology in the context of our democracy.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>There are oceans of research papers digging into the various harms of online platforms. Researchers are asking urgent questions, such as how hate speech and misinformation affect our information environment and our democracy.</p><p>But how does this research find its way to the media, policymakers, advocacy groups, or even tech companies themselves?</p><p>To help us answer this, Alix is joined this week by Issie Lapowsky, who recently authored <em>Bridging The Divide: Translating Research on Digital Media into Policy and Practice</em> — a report about how research reaches these four groups, and what they do with it. This episode also features John Sands from Knight Foundation, who commissioned this report.</p><p>Further reading:</p><ul><li><a href="https://knightfoundation.org/features/bridging-the-divide-translating-research-on-digital-media-into-policy-and-practice/">Bridging The Divide by Issie Lapowsky</a></li><li><a href="https://knightfoundation.org/">Knight Foundation</a></li></ul><p><a href="https://www.linkedin.com/in/issielapowsky">Issie Lapowsky</a> is a journalist covering the intersection between tech, politics and national affairs. She has been published in WIRED, Protocol, The New York Times, and Fast Company.</p><p><a href="https://knightfoundation.org/employee/john-sands/">John Sands</a> is Senior Director of Media and Democracy at Knight Foundation. Since joining Knight Foundation in 2019, he has led more than $100 million in grant making to support independent scholarship and policy research on information and technology in the context of our democracy.</p>]]>
      </content:encoded>
      <pubDate>Fri, 13 Sep 2024 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/6ebdc7c4/5c8a511d.mp3" length="55330338" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2303</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>There are oceans of research papers digging into the various harms of online platforms. Researchers are asking urgent questions, such as how hate speech and misinformation affect our information environment and our democracy.</p><p>But how does this research find its way to the media, policymakers, advocacy groups, or even tech companies themselves?</p><p>To help us answer this, Alix is joined this week by Issie Lapowsky, who recently authored <em>Bridging The Divide: Translating Research on Digital Media into Policy and Practice</em> — a report about how research reaches these four groups, and what they do with it. This episode also features John Sands from Knight Foundation, who commissioned this report.</p><p>Further reading:</p><ul><li><a href="https://knightfoundation.org/features/bridging-the-divide-translating-research-on-digital-media-into-policy-and-practice/">Bridging The Divide by Issie Lapowsky</a></li><li><a href="https://knightfoundation.org/">Knight Foundation</a></li></ul><p><a href="https://www.linkedin.com/in/issielapowsky">Issie Lapowsky</a> is a journalist covering the intersection between tech, politics and national affairs. She has been published in WIRED, Protocol, The New York Times, and Fast Company.</p><p><a href="https://knightfoundation.org/employee/john-sands/">John Sands</a> is Senior Director of Media and Democracy at Knight Foundation. Since joining Knight Foundation in 2019, he has led more than $100 million in grant making to support independent scholarship and policy research on information and technology in the context of our democracy.</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Why was the CEO of Telegram just arrested? w/ Mallory Knodel</title>
      <itunes:episode>15</itunes:episode>
      <podcast:episode>15</podcast:episode>
      <itunes:title>Why was the CEO of Telegram just arrested? w/ Mallory Knodel</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f0b25a36-14b5-4e35-9d9c-821261790a06</guid>
      <link>https://share.transistor.fm/s/68b37a93</link>
      <description>
        <![CDATA[<p>Last week, CEO of Telegram Pavel Durov landed in France and was immediately detained. The details of his arrest are still emerging; he is being charged with being complicit in illegal activities happening on the platform, including the spread of CSAM.</p><p>Durov’s lawyer has called these charges “absurd”, arguing that the head of a social media company cannot be held responsible for criminal activity on the platform. That might be true in the US, but does it hold up in France?</p><p>This week Alix is joined by Mallory Knodel to talk us through what happened:</p><ul><li>What are the implications of France making this move, and why now?</li><li>How has Telegram positioned itself as the safest and most secure messaging platform when it doesn’t even use the same encryption standards as WhatsApp?</li><li>How has Telegram managed to get away with being uncooperative with various governments — or has it?</li></ul><p><a href="https://www.linkedin.com/in/malloryknodel/">Mallory Knodel</a> is The Center for Democracy &amp; Technology’s Chief Technology Officer. She is also a co-chair of the Human Rights and Protocol Considerations research group of the Internet Research Task Force and a chairing advisor on cybersecurity and AI to the Freedom Online Coalition.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Last week, CEO of Telegram Pavel Durov landed in France and was immediately detained. The details of his arrest are still emerging; he is being charged with being complicit in illegal activities happening on the platform, including the spread of CSAM.</p><p>Durov’s lawyer has called these charges “absurd”, arguing that the head of a social media company cannot be held responsible for criminal activity on the platform. That might be true in the US, but does it hold up in France?</p><p>This week Alix is joined by Mallory Knodel to talk us through what happened:</p><ul><li>What are the implications of France making this move, and why now?</li><li>How has Telegram positioned itself as the safest and most secure messaging platform when it doesn’t even use the same encryption standards as WhatsApp?</li><li>How has Telegram managed to get away with being uncooperative with various governments — or has it?</li></ul><p><a href="https://www.linkedin.com/in/malloryknodel/">Mallory Knodel</a> is The Center for Democracy &amp; Technology’s Chief Technology Officer. She is also a co-chair of the Human Rights and Protocol Considerations research group of the Internet Research Task Force and a chairing advisor on cybersecurity and AI to the Freedom Online Coalition.</p>]]>
      </content:encoded>
      <pubDate>Fri, 06 Sep 2024 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/68b37a93/c19b9d34.mp3" length="64256327" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2675</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Last week, CEO of Telegram Pavel Durov landed in France and was immediately detained. The details of his arrest are still emerging; he is being charged with being complicit in illegal activities happening on the platform, including the spread of CSAM.</p><p>Durov’s lawyer has called these charges “absurd”, arguing that the head of a social media company cannot be held responsible for criminal activity on the platform. That might be true in the US, but does it hold up in France?</p><p>This week Alix is joined by Mallory Knodel to talk us through what happened:</p><ul><li>What are the implications of France making this move, and why now?</li><li>How has Telegram positioned itself as the safest and most secure messaging platform when it doesn’t even use the same encryption standards as WhatsApp?</li><li>How has Telegram managed to get away with being uncooperative with various governments — or has it?</li></ul><p><a href="https://www.linkedin.com/in/malloryknodel/">Mallory Knodel</a> is The Center for Democracy &amp; Technology’s Chief Technology Officer. She is also a co-chair of the Human Rights and Protocol Considerations research group of the Internet Research Task Force and a chairing advisor on cybersecurity and AI to the Freedom Online Coalition.</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Exhibit X: What did we learn?</title>
      <itunes:episode>14</itunes:episode>
      <podcast:episode>14</podcast:episode>
      <itunes:title>Exhibit X: What did we learn?</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">5f9b858e-73be-4ee1-b7fa-863a21ac3e5e</guid>
      <link>https://share.transistor.fm/s/6a49e67f</link>
      <description>
        <![CDATA[<p>That’s the END of Exhibit X, folks; if you’ve been following along, congratulations on choosing to become smarter. If not, that’s okay: consider this episode a delicious teaser for the series.</p><p>In this episode Alix and Prathm engage their large wet brains and pull out the meatiest insights and learnings from the last five episodes. This series has been a delightful intellectual expedition into big tech litigation, knowledge creation, and online speech — if you’re a nerd for any of those things, it would be irresponsible for you to ignore this.</p><p>Thank you for listening; we hope to do more deep explorations like this in the future!</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>That’s the END of Exhibit X, folks; if you’ve been following along, congratulations on choosing to become smarter. If not, that’s okay: consider this episode a delicious teaser for the series.</p><p>In this episode Alix and Prathm engage their large wet brains and pull out the meatiest insights and learnings from the last five episodes. This series has been a delightful intellectual expedition into big tech litigation, knowledge creation, and online speech — if you’re a nerd for any of those things, it would be irresponsible for you to ignore this.</p><p>Thank you for listening; we hope to do more deep explorations like this in the future!</p>]]>
      </content:encoded>
      <pubDate>Fri, 30 Aug 2024 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/6a49e67f/326fd95b.mp3" length="30162066" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2150</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>That’s the END of Exhibit X, folks; if you’ve been following along, congratulations on choosing to become smarter. If not, that’s okay: consider this episode a delicious teaser for the series.</p><p>In this episode Alix and Prathm engage their large wet brains and pull out the meatiest insights and learnings from the last five episodes. This series has been a delightful intellectual expedition into big tech litigation, knowledge creation, and online speech — if you’re a nerd for any of those things, it would be irresponsible for you to ignore this.</p><p>Thank you for listening; we hope to do more deep explorations like this in the future!</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Exhibit X: The Community</title>
      <itunes:episode>13</itunes:episode>
      <podcast:episode>13</podcast:episode>
      <itunes:title>Exhibit X: The Community</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">e5d95e7a-c867-4b78-b402-f17ebbc699e7</guid>
      <link>https://share.transistor.fm/s/fd7b0694</link>
      <description>
        <![CDATA[<p>What makes an expert witness? How does a socio-technical researcher become one? Now that we’re at the end of this miniseries, we might finally be ready to answer these questions…</p><p>In the fifth instalment of Exhibit X, civic tech acrobat Elizabeth Eagen shares her pithy insights on how researchers of emerging technologies are starting to interface with litigators and regulators.</p><p>The questions we explore this week:</p><ul><li>When does the expertise of social scientists become ‘good’ enough to stand up in court — and who gets to decide that?</li><li>How can the traditionally glacial system of courts and legislators keep pace with the shifting whims of technology companies?</li><li>Litigators want social scientists to get on the stand and say ‘X caused Y’ without a shadow of a doubt — but what social scientist would do that?</li></ul><p><em>Elizabeth Eagen is Deputy Director of the Citizens and Technology Lab at Cornell University, which works with communities to study the effects of technology on society and test ideas for changing digital spaces to better serve the public interest. She was a 2022-23 Practitioner Fellow at the Digital Civil Society Lab at Stanford University, and serves as a board member at a number of nonprofit technology organizations.</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>What makes an expert witness? How does a socio-technical researcher become one? Now that we’re at the end of this miniseries, we might finally be ready to answer these questions…</p><p>In the fifth instalment of Exhibit X, civic tech acrobat Elizabeth Eagen shares her pithy insights on how researchers of emerging technologies are starting to interface with litigators and regulators.</p><p>The questions we explore this week:</p><ul><li>When does the expertise of social scientists become ‘good’ enough to stand up in court — and who gets to decide that?</li><li>How can the traditionally glacial system of courts and legislators keep pace with the shifting whims of technology companies?</li><li>Litigators want social scientists to get on the stand and say ‘X caused Y’ without a shadow of a doubt — but what social scientist would do that?</li></ul><p><em>Elizabeth Eagen is Deputy Director of the Citizens and Technology Lab at Cornell University, which works with communities to study the effects of technology on society and test ideas for changing digital spaces to better serve the public interest. She was a 2022-23 Practitioner Fellow at the Digital Civil Society Lab at Stanford University, and serves as a board member at a number of nonprofit technology organizations.</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 23 Aug 2024 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/fd7b0694/d2ef36b9.mp3" length="39574947" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2822</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>What makes an expert witness? How does a socio-technical researcher become one? Now that we’re at the end of this miniseries, we might finally be ready to answer these questions…</p><p>In the fifth instalment of Exhibit X, civic tech acrobat Elizabeth Eagen shares her pithy insights on how researchers of emerging technologies are starting to interface with litigators and regulators.</p><p>The questions we explore this week:</p><ul><li>When does the expertise of social scientists become ‘good’ enough to stand up in court — and who gets to decide that?</li><li>How can the traditionally glacial system of courts and legislators keep pace with the shifting whims of technology companies?</li><li>Litigators want social scientists to get on the stand and say ‘X caused Y’ without a shadow of a doubt — but what social scientist would do that?</li></ul><p><em>Elizabeth Eagen is Deputy Director of the Citizens and Technology Lab at Cornell University, which works with communities to study the effects of technology on society and test ideas for changing digital spaces to better serve the public interest. She was a 2022-23 Practitioner Fellow at the Digital Civil Society Lab at Stanford University, and serves as a board member at a number of nonprofit technology organizations.</em></p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Exhibit X: The Courts</title>
      <itunes:episode>12</itunes:episode>
      <podcast:episode>12</podcast:episode>
      <itunes:title>Exhibit X: The Courts</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">48df5a4a-2753-401d-a216-d1850b455d67</guid>
      <link>https://share.transistor.fm/s/7d013f3d</link>
      <description>
        <![CDATA[<p>Imagine: something horrible has happened and the only evidence you have is a video posted online. Can you submit it into evidence in court? Well, it’s complicated.</p><p>In <strong>part 4 of our Exhibit X series</strong>, Alix sat down with <a href="https://www.linkedin.com/in/alexakoenig/">Dr. Alexa Koenig</a> to discuss her work with the International Criminal Court. Dr. Koenig and many colleagues are supporting the court as it grapples with online evidence and tackles the challenges courts face as they adapt to our digital world.</p><p>We answer questions like:</p><ul><li>How does the ICC work with social media companies to acquire evidence?</li><li>How have generative AI and synthetic media impacted evidence in courts?</li><li>When can we expect to see social scientists as expert witnesses in court?</li></ul><p>Alexa Koenig, PhD, JD, is Co-Faculty Director of the Human Rights Center, Director of HRC’s Investigations Program, and an adjunct professor at UC Berkeley School of Law, where she teaches classes that focus on the intersection of emerging technologies and human rights. She also co-teaches a class on open source investigative reporting at Berkeley Journalism. Alexa co-founded the <a href="https://www.law.berkeley.edu/research/human-rights-center/programs/technology/human-rights-investigations-lab-internships/">Human Rights Center Investigations Lab</a>, which trains students and professionals to use social media and other digital open source content to strengthen human rights research, reporting, and accountability.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Imagine: something horrible has happened and the only evidence you have is a video posted online. Can you submit it into evidence in court? Well, it’s complicated.</p><p>In <strong>part 4 of our Exhibit X series</strong>, Alix sat down with <a href="https://www.linkedin.com/in/alexakoenig/">Dr. Alexa Koenig</a> to discuss her work with the International Criminal Court. Dr. Koenig and many colleagues are supporting the court as it grapples with online evidence and tackles the challenges courts face as they adapt to our digital world.</p><p>We answer questions like:</p><ul><li>How does the ICC work with social media companies to acquire evidence?</li><li>How have generative AI and synthetic media impacted evidence in courts?</li><li>When can we expect to see social scientists as expert witnesses in court?</li></ul><p>Alexa Koenig, PhD, JD, is Co-Faculty Director of the Human Rights Center, Director of HRC’s Investigations Program, and an adjunct professor at UC Berkeley School of Law, where she teaches classes that focus on the intersection of emerging technologies and human rights. She also co-teaches a class on open source investigative reporting at Berkeley Journalism. Alexa co-founded the <a href="https://www.law.berkeley.edu/research/human-rights-center/programs/technology/human-rights-investigations-lab-internships/">Human Rights Center Investigations Lab</a>, which trains students and professionals to use social media and other digital open source content to strengthen human rights research, reporting, and accountability.</p>]]>
      </content:encoded>
      <pubDate>Fri, 16 Aug 2024 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/7d013f3d/01c9e431.mp3" length="37285248" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>2659</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Imagine: something horrible has happened and the only evidence you have is a video posted online. Can you submit it into evidence in court? Well, it’s complicated.</p><p>In <strong>part 4 of our Exhibit X series</strong>, Alix sat down with <a href="https://www.linkedin.com/in/alexakoenig/">Dr. Alexa Koenig</a> to discuss her work with the International Criminal Court. Dr. Koenig and many colleagues are supporting the court as it grapples with online evidence and tackles the challenges courts face as they adapt to our digital world.</p><p>We answer questions like:</p><ul><li>How does the ICC work with social media companies to acquire evidence?</li><li>How have generative AI and synthetic media impacted evidence in courts?</li><li>When can we expect to see social scientists as expert witnesses in court?</li></ul><p>Alexa Koenig, PhD, JD, is Co-Faculty Director of the Human Rights Center, Director of HRC’s Investigations Program, and an adjunct professor at UC Berkeley School of Law, where she teaches classes that focus on the intersection of emerging technologies and human rights. She also co-teaches a class on open source investigative reporting at Berkeley Journalism. Alexa co-founded the <a href="https://www.law.berkeley.edu/research/human-rights-center/programs/technology/human-rights-investigations-lab-internships/">Human Rights Center Investigations Lab</a>, which trains students and professionals to use social media and other digital open source content to strengthen human rights research, reporting, and accountability.</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Exhibit X: The Litigators</title>
      <itunes:episode>11</itunes:episode>
      <podcast:episode>11</podcast:episode>
      <itunes:title>Exhibit X: The Litigators</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c7e54893-f9bf-4e7f-81dd-d2a065414e2a</guid>
      <link>https://share.transistor.fm/s/9d68fd6c</link>
      <description>
        <![CDATA[<p>Often it feels as though the cases and lawsuits brought against big tech firms are continuously piling up, but there never seems to be any resulting justice or resolution. There are many good reasons for this, two of which are Section 230 and the First Amendment.</p><p>Big Tech companies will routinely invoke Section 230 and the First Amendment to get cases against them thrown out before they can go to trial. In part 3 of Exhibit X, Meetali Jain explains how litigators have been playing 4D chess to get the courts to hold these companies accountable.</p><p>In this episode we ask…</p><ul><li>What is Section 230, and how do platforms use it to their benefit?</li><li>How can we take the design decisions of a corporation <em>out</em> of the free speech bucket so that they can be held responsible for their actions?</li><li>How can we start developing more levers from transparency, beyond lawsuits and whistleblowing?</li></ul><p><em>Meetali Jain is a human rights lawyer, who founded the </em><a href="https://techjusticelaw.org/"><em>Tech Justice Law Project</em></a><em> in 2023. The Project works with a collective of legal experts, policy advocates, digital rights organizations, and technologists to ensure that legal and policy frameworks are fit for the digital age, and that online spaces are safer and more accountable.</em></p><p>This episode was hosted by Alix Dunn and Prathm Juneja.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Often it feels as though the cases and lawsuits brought against Big Tech firms are continuously piling up, yet there never seems to be any resulting justice or resolution. There are many good reasons for this, two of which are Section 230 and the First Amendment.</p><p>Big Tech companies routinely invoke Section 230 and the First Amendment to get cases against them thrown out before they can go to trial. In part 3 of Exhibit X, Meetali Jain explains how litigators have been playing 4D chess to get the courts to hold these companies accountable.</p><p>In this episode we ask…</p><ul><li>What is Section 230, and how do platforms use it to their benefit?</li><li>How can we take the design decisions of a corporation <em>out</em> of the free speech bucket so that they can be held responsible for their actions?</li><li>How can we start developing more levers for transparency, beyond lawsuits and whistleblowing?</li></ul><p><em>Meetali Jain is a human rights lawyer who founded the </em><a href="https://techjusticelaw.org/"><em>Tech Justice Law Project</em></a><em> in 2023. The Project works with a collective of legal experts, policy advocates, digital rights organizations, and technologists to ensure that legal and policy frameworks are fit for the digital age, and that online spaces are safer and more accountable.</em></p><p>This episode was hosted by Alix Dunn and Prathm Juneja.</p>]]>
      </content:encoded>
      <pubDate>Fri, 09 Aug 2024 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/9d68fd6c/c1608ca1.mp3" length="26758629" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>1907</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Often it feels as though the cases and lawsuits brought against Big Tech firms are continuously piling up, yet there never seems to be any resulting justice or resolution. There are many good reasons for this, two of which are Section 230 and the First Amendment.</p><p>Big Tech companies routinely invoke Section 230 and the First Amendment to get cases against them thrown out before they can go to trial. In part 3 of Exhibit X, Meetali Jain explains how litigators have been playing 4D chess to get the courts to hold these companies accountable.</p><p>In this episode we ask…</p><ul><li>What is Section 230, and how do platforms use it to their benefit?</li><li>How can we take the design decisions of a corporation <em>out</em> of the free speech bucket so that they can be held responsible for their actions?</li><li>How can we start developing more levers for transparency, beyond lawsuits and whistleblowing?</li></ul><p><em>Meetali Jain is a human rights lawyer who founded the </em><a href="https://techjusticelaw.org/"><em>Tech Justice Law Project</em></a><em> in 2023. The Project works with a collective of legal experts, policy advocates, digital rights organizations, and technologists to ensure that legal and policy frameworks are fit for the digital age, and that online spaces are safer and more accountable.</em></p><p>This episode was hosted by Alix Dunn and Prathm Juneja.</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
    </item>
    <item>
      <title>Exhibit X: The Whistleblower</title>
      <itunes:episode>10</itunes:episode>
      <podcast:episode>10</podcast:episode>
      <itunes:title>Exhibit X: The Whistleblower</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4fc93149-5fca-40c8-ae37-e1e43f60f7d4</guid>
      <link>https://share.transistor.fm/s/96cd159f</link>
      <description>
        <![CDATA[<p>In part 2 of Exhibit X, Alix interviewed Frances Haugen, who in 2021 blew the whistle on Meta: the company was sitting on the knowledge that its products were harmful to kids, and yet — shocker — it continued to make design decisions that would keep kids engaged.</p><p>Mark Zuckerberg worked hard on his image (it’s a hydrofoil, not a surfboard!), while Instagram was being used for human trafficking — the lack of care and accountability here absolutely melts the mind.</p><p>What conversations did Frances’s whistleblowing start?</p><p>Was whistleblowing an effective mechanism for accountability in this case?</p><p>Do we have to add age verification to social media sites or break end-to-end encryption to keep children safe online?</p><p><em><a href="https://en.wikipedia.org/wiki/Frances_Haugen">Frances Haugen</a> is a data scientist and engineer. In 2021 she disclosed 22,000 internal documents to The Wall Street Journal and the Securities and Exchange Commission, demonstrating Meta’s knowledge of its products’ harms.</em></p><p>Your hosts this week are Alix Dunn and Prathm Juneja.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In part 2 of Exhibit X, Alix interviewed Frances Haugen, who in 2021 blew the whistle on Meta: the company was sitting on the knowledge that its products were harmful to kids, and yet — shocker — it continued to make design decisions that would keep kids engaged.</p><p>Mark Zuckerberg worked hard on his image (it’s a hydrofoil, not a surfboard!), while Instagram was being used for human trafficking — the lack of care and accountability here absolutely melts the mind.</p><p>What conversations did Frances’s whistleblowing start?</p><p>Was whistleblowing an effective mechanism for accountability in this case?</p><p>Do we have to add age verification to social media sites or break end-to-end encryption to keep children safe online?</p><p><em><a href="https://en.wikipedia.org/wiki/Frances_Haugen">Frances Haugen</a> is a data scientist and engineer. In 2021 she disclosed 22,000 internal documents to The Wall Street Journal and the Securities and Exchange Commission, demonstrating Meta’s knowledge of its products’ harms.</em></p><p>Your hosts this week are Alix Dunn and Prathm Juneja.</p>]]>
      </content:encoded>
      <pubDate>Fri, 02 Aug 2024 01:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/96cd159f/5e1dffba.mp3" length="26658856" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>1900</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In part 2 of Exhibit X, Alix interviewed Frances Haugen, who in 2021 blew the whistle on Meta: the company was sitting on the knowledge that its products were harmful to kids, and yet — shocker — it continued to make design decisions that would keep kids engaged.</p><p>Mark Zuckerberg worked hard on his image (it’s a hydrofoil, not a surfboard!), while Instagram was being used for human trafficking — the lack of care and accountability here absolutely melts the mind.</p><p>What conversations did Frances’s whistleblowing start?</p><p>Was whistleblowing an effective mechanism for accountability in this case?</p><p>Do we have to add age verification to social media sites or break end-to-end encryption to keep children safe online?</p><p><em><a href="https://en.wikipedia.org/wiki/Frances_Haugen">Frances Haugen</a> is a data scientist and engineer. In 2021 she disclosed 22,000 internal documents to The Wall Street Journal and the Securities and Exchange Commission, demonstrating Meta’s knowledge of its products’ harms.</em></p><p>Your hosts this week are Alix Dunn and Prathm Juneja.</p>]]>
      </itunes:summary>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
    </item>
    <item>
      <title>Exhibit X: Tech and Tobacco</title>
      <itunes:episode>9</itunes:episode>
      <podcast:episode>9</podcast:episode>
      <itunes:title>Exhibit X: Tech and Tobacco</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f052b19f-5e34-4b84-bfd5-a44bfbe3ee18</guid>
      <link>https://share.transistor.fm/s/ee6a7587</link>
      <description>
        <![CDATA[<p>Here is something you’re probably tired of hearing: Big Tech is responsible for a bottomless brunch of societal harms. And they are not being held accountable. Right now it feels as though we hear constantly about laws, regulation, and courts, but none of it has proven effective in litigating against Big Tech.</p><p>In our latest podcast series, Exhibit X, we’re looking at how the tides might finally be turning. Legal accountability could be around the corner, but only if a few things happen first.</p><p>To start, we look back to 1964, when Big Tobacco was winning the ‘try your best to profit from harm’ race. Research showed cigarettes were addictive and caused cancer — and yet the industry evaded accountability for decades.</p><p>In this episode we ask questions like:</p><ul><li>Why wasn’t a 1964 report showing that cigarettes are addictive and cause cancer enough to transform the industry?</li><li>What can we learn about corporate capture of research on tobacco?</li><li>How did academia and experts shape the outcomes of court cases?</li></ul><p><a href="https://www.linkedin.com/in/prathmjuneja/">Prathm Juneja</a> was Alix’s co-host for this episode. He is a PhD Candidate in Social Data Science at the Oxford Internet Institute, working at the intersection of academia, industry, and government on technology, innovation, and policy.</p><p>Further reading</p><ul><li><a href="https://www.c-span.org/video/?115279-1/tobacco-settlement">C-SPAN: Tobacco Settlement</a></li><li><a href="https://publishing.cdlib.org/ucpressebooks/view?docId=ft8489p25j;chunk.id=0;doc.view=print">The Cigarette Papers - Full Online Version</a></li><li><a href="https://www.industrydocuments.ucsf.edu/tobacco/about/history/">The Truth Tobacco Industry Documents</a></li><li><a href="https://www.thenation.com/article/archive/big-tobacco-and-historians/">Big Tobacco and the Historians</a></li><li><a href="https://www.industrydocuments.ucsf.edu/tobacco/research-tools/litigation-documents/">Tobacco Litigation Documents</a></li><li><a href="https://www.nytimes.com/1999/10/15/us/a-tobacco-whistle-blower-s-life-is-transformed.html">A Tobacco Whistle-Blower's Life Is Transformed</a></li><li><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3490543/">Inventing Conflicts of Interest: A History of Tobacco Industry Tactics</a></li><li><a href="https://tobaccotactics.org/article/tobacco-industry-research-committee/">Tobacco Industry Research Committee</a></li><li><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2636463/">Experts Debating Tobacco Addiction</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Here is something you’re probably tired of hearing: Big Tech is responsible for a bottomless brunch of societal harms. And they are not being held accountable. Right now it feels as though we hear constantly about laws, regulation, and courts, but none of it has proven effective in litigating against Big Tech.</p><p>In our latest podcast series, Exhibit X, we’re looking at how the tides might finally be turning. Legal accountability could be around the corner, but only if a few things happen first.</p><p>To start, we look back to 1964, when Big Tobacco was winning the ‘try your best to profit from harm’ race. Research showed cigarettes were addictive and caused cancer — and yet the industry evaded accountability for decades.</p><p>In this episode we ask questions like:</p><ul><li>Why wasn’t a 1964 report showing that cigarettes are addictive and cause cancer enough to transform the industry?</li><li>What can we learn about corporate capture of research on tobacco?</li><li>How did academia and experts shape the outcomes of court cases?</li></ul><p><a href="https://www.linkedin.com/in/prathmjuneja/">Prathm Juneja</a> was Alix’s co-host for this episode. He is a PhD Candidate in Social Data Science at the Oxford Internet Institute, working at the intersection of academia, industry, and government on technology, innovation, and policy.</p><p>Further reading</p><ul><li><a href="https://www.c-span.org/video/?115279-1/tobacco-settlement">C-SPAN: Tobacco Settlement</a></li><li><a href="https://publishing.cdlib.org/ucpressebooks/view?docId=ft8489p25j;chunk.id=0;doc.view=print">The Cigarette Papers - Full Online Version</a></li><li><a href="https://www.industrydocuments.ucsf.edu/tobacco/about/history/">The Truth Tobacco Industry Documents</a></li><li><a href="https://www.thenation.com/article/archive/big-tobacco-and-historians/">Big Tobacco and the Historians</a></li><li><a href="https://www.industrydocuments.ucsf.edu/tobacco/research-tools/litigation-documents/">Tobacco Litigation Documents</a></li><li><a href="https://www.nytimes.com/1999/10/15/us/a-tobacco-whistle-blower-s-life-is-transformed.html">A Tobacco Whistle-Blower's Life Is Transformed</a></li><li><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3490543/">Inventing Conflicts of Interest: A History of Tobacco Industry Tactics</a></li><li><a href="https://tobaccotactics.org/article/tobacco-industry-research-committee/">Tobacco Industry Research Committee</a></li><li><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2636463/">Experts Debating Tobacco Addiction</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 26 Jul 2024 00:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/ee6a7587/cacc6ad3.mp3" length="22749668" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>1624</itunes:duration>
      <itunes:summary>Here is something you’re probably tired of hearing: Big Tech is responsible for a bottomless brunch of societal harms. And they are not being held accountable. Right now it feels as though we hear constantly about laws, regulation, and courts, but none of it has proven effective in litigating against Big Tech. In our latest podcast series, Exhibit X, we’re looking at how the tides might finally be turning. Legal accountability could be around the corner, but only if a few things happen first. To start, we look back to 1964, when Big Tobacco was winning the ‘try your best to profit from harm’ race. Research showed cigarettes were addictive and caused cancer — and yet the industry evaded accountability for decades. In this episode we ask questions like: Why wasn’t a 1964 report showing that cigarettes are addictive and cause cancer enough to transform the industry? What can we learn about corporate capture of research on tobacco? How did academia and experts shape the outcomes of court cases? Prathm Juneja was Alix’s co-host for this episode. He is a PhD Candidate in Social Data Science at the Oxford Internet Institute, working at the intersection of academia, industry, and government on technology, innovation, and policy. Further reading: C-SPAN: Tobacco Settlement; The Cigarette Papers - Full Online Version; The Truth Tobacco Industry Documents; Big Tobacco and the Historians; Tobacco Litigation Documents; A Tobacco Whistle-Blower's Life Is Transformed; Inventing Conflicts of Interest: A History of Tobacco Industry Tactics; Tobacco Industry Research Committee; Experts Debating Tobacco Addiction</itunes:summary>
      <itunes:subtitle>Here is something you’re probably tired of hearing: Big Tech is responsible for a bottomless brunch of societal harms. And they are not being held accountable. Right now it feels as though we hear constantly about laws, regulation, courts. But none of it </itunes:subtitle>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
    </item>
    <item>
      <title>New mini-series: Exhibit X</title>
      <itunes:episode>8</itunes:episode>
      <podcast:episode>8</podcast:episode>
      <itunes:title>New mini-series: Exhibit X</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2e8d195f-dfd2-4682-b5ec-d1cfd04ae175</guid>
      <link>https://share.transistor.fm/s/c7c35e73</link>
      <description>
        <![CDATA[<p>In the Exhibit X series Alix and Prathm sink their fingernails into the tangled universe of litigation and Big Tech; how have the courts held Big Tech firms accountable for their various harms over the years? Is whistleblowing an effective mechanism for informing new regulations? What about a social media platform’s first amendment rights? So much to cover, so many episodes coming your way!</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In the Exhibit X series Alix and Prathm sink their fingernails into the tangled universe of litigation and Big Tech; how have the courts held Big Tech firms accountable for their various harms over the years? Is whistleblowing an effective mechanism for informing new regulations? What about a social media platform’s first amendment rights? So much to cover, so many episodes coming your way!</p>]]>
      </content:encoded>
      <pubDate>Thu, 18 Jul 2024 00:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/c7c35e73/b246a00f.mp3" length="2848865" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>237</itunes:duration>
      <itunes:summary>In the Exhibit X series Alix and Prathm sink their fingernails into the tangled universe of litigation and Big Tech; how have the courts held Big Tech firms accountable for their various harms over the years? Is whistleblowing an effective mechanism for informing new regulations? What about a social media platform’s first amendment rights? So much to cover, so many episodes coming your way!</itunes:summary>
      <itunes:subtitle>In the Exhibit X series Alix and Prathm sink their fingernails into the tangled universe of litigation and Big Tech; how have the courts held Big Tech firms accountable for their various harms over the years? Is whistleblowing an effective mechanism for i</itunes:subtitle>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>What the FAccT? Evidence of bias. Now what?</title>
      <itunes:episode>7</itunes:episode>
      <podcast:episode>7</podcast:episode>
      <itunes:title>What the FAccT? Evidence of bias. Now what?</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7cb08cbf-aa35-497d-8670-fa59ef2595d9</guid>
      <link>https://share.transistor.fm/s/7f564e17</link>
      <description>
        <![CDATA[<p>In part four of our FAccT deep dive, Alix joins Marta Ziosi and Dasha Pruss to discuss their paper “Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool”.</p><p>In their paper they discuss how an erosion of public trust can lead to ‘any idea will do’ decisions, and often these lean on technology, such as predictive policing systems. One such tool is ShotSpotter, a piece of audio surveillance tech designed to detect gunfire — a contentious system that has been sold both as a tool for police to surveil civilians, and as a tool for civilians to keep tabs on police. Can it really be both?</p><p>Marta Ziosi is a Postdoctoral Researcher at the Oxford Martin AI Governance Initiative, where her research focuses on standards for frontier AI. She has worked for institutions such as DG CNECT at the European Commission, the Berkman Klein Center for Internet &amp; Society at Harvard University, the Montreal International Center of Expertise in Artificial Intelligence (CEIMIA), and The Future Society. Previously, Marta was a PhD student and researcher on Algorithmic Bias and AI Policy at the Oxford Internet Institute. She is also the founder of AI for People, a non-profit organisation whose mission is to put technology at the service of people. Marta holds a BSc in Mathematics and Philosophy from University College Maastricht. She also holds an MSc in Philosophy and Public Policy and an executive degree in Chinese Language and Culture for Business from the London School of Economics.</p><p>Dasha Pruss is a 2023-2024 fellow at the Berkman Klein Center for Internet &amp; Society and an Embedded EthiCS postdoctoral fellow at Harvard University. In fall 2024 she will be an assistant professor of philosophy and computer science at George Mason University. She received her PhD in History &amp; Philosophy of Science from the University of Pittsburgh in May 2023, and holds a BSc in Computer Science from the University of Utah. She has also co-organized with Against Carceral Tech, an activist group working to ban facial recognition and predictive policing in the city of Pittsburgh.</p><p>This episode is hosted by <a href="https://www.alixdunn.com/">Alix Dunn</a>. Our guests are <a href="https://www.linkedin.com/in/marta-ziosi-3342007a/?originalSubdomain=uk">Marta Ziosi</a> and <a href="https://www.dashapruss.com/">Dasha Pruss</a>.</p><p>Further Reading</p><ul><li><a href="https://facctconference.org/static/papers24/facct24-106.pdf">Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool</a></li><li><a href="https://mitpressonpubpub.mitpress.mit.edu/pub/cf-chap6/release/6?readingCollection=16608c58">Refusing and Reusing Data by Catherine D’Ignazio</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In part four of our FAccT deep dive, Alix joins Marta Ziosi and Dasha Pruss to discuss their paper “Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool”.</p><p>In their paper they discuss how an erosion of public trust can lead to ‘any idea will do’ decisions, and often these lean on technology, such as predictive policing systems. One such tool is ShotSpotter, a piece of audio surveillance tech designed to detect gunfire — a contentious system that has been sold both as a tool for police to surveil civilians, and as a tool for civilians to keep tabs on police. Can it really be both?</p><p>Marta Ziosi is a Postdoctoral Researcher at the Oxford Martin AI Governance Initiative, where her research focuses on standards for frontier AI. She has worked for institutions such as DG CNECT at the European Commission, the Berkman Klein Center for Internet &amp; Society at Harvard University, the Montreal International Center of Expertise in Artificial Intelligence (CEIMIA), and The Future Society. Previously, Marta was a PhD student and researcher on Algorithmic Bias and AI Policy at the Oxford Internet Institute. She is also the founder of AI for People, a non-profit organisation whose mission is to put technology at the service of people. Marta holds a BSc in Mathematics and Philosophy from University College Maastricht. She also holds an MSc in Philosophy and Public Policy and an executive degree in Chinese Language and Culture for Business from the London School of Economics.</p><p>Dasha Pruss is a 2023-2024 fellow at the Berkman Klein Center for Internet &amp; Society and an Embedded EthiCS postdoctoral fellow at Harvard University. In fall 2024 she will be an assistant professor of philosophy and computer science at George Mason University. She received her PhD in History &amp; Philosophy of Science from the University of Pittsburgh in May 2023, and holds a BSc in Computer Science from the University of Utah. She has also co-organized with Against Carceral Tech, an activist group working to ban facial recognition and predictive policing in the city of Pittsburgh.</p><p>This episode is hosted by <a href="https://www.alixdunn.com/">Alix Dunn</a>. Our guests are <a href="https://www.linkedin.com/in/marta-ziosi-3342007a/?originalSubdomain=uk">Marta Ziosi</a> and <a href="https://www.dashapruss.com/">Dasha Pruss</a>.</p><p>Further Reading</p><ul><li><a href="https://facctconference.org/static/papers24/facct24-106.pdf">Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool</a></li><li><a href="https://mitpressonpubpub.mitpress.mit.edu/pub/cf-chap6/release/6?readingCollection=16608c58">Refusing and Reusing Data by Catherine D’Ignazio</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 12 Jul 2024 00:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/7f564e17/7b52e897.mp3" length="18109532" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>1509</itunes:duration>
      <itunes:summary>In part four of our FAccT deep dive, Alix joins Marta Ziosi and Dasha Pruss to discuss their paper “Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool”. In their paper they discuss how an erosion of public trust can lead to ‘any idea will do’ decisions, and often these lean on technology, such as predictive policing systems. One such tool is ShotSpotter, a piece of audio surveillance tech designed to detect gunfire — a contentious system that has been sold both as a tool for police to surveil civilians, and as a tool for civilians to keep tabs on police. Can it really be both? Marta Ziosi is a Postdoctoral Researcher at the Oxford Martin AI Governance Initiative, where her research focuses on standards for frontier AI. She has worked for institutions such as DG CNECT at the European Commission, the Berkman Klein Center for Internet &amp; Society at Harvard University, the Montreal International Center of Expertise in Artificial Intelligence (CEIMIA), and The Future Society. Previously, Marta was a PhD student and researcher on Algorithmic Bias and AI Policy at the Oxford Internet Institute. She is also the founder of AI for People, a non-profit organisation whose mission is to put technology at the service of people. Marta holds a BSc in Mathematics and Philosophy from University College Maastricht. She also holds an MSc in Philosophy and Public Policy and an executive degree in Chinese Language and Culture for Business from the London School of Economics. Dasha Pruss is a 2023-2024 fellow at the Berkman Klein Center for Internet &amp; Society and an Embedded EthiCS postdoctoral fellow at Harvard University. In fall 2024 she will be an assistant professor of philosophy and computer science at George Mason University. She received her PhD in History &amp; Philosophy of Science from the University of Pittsburgh in May 2023, and holds a BSc in Computer Science from the University of Utah. She has also co-organized with Against Carceral Tech, an activist group working to ban facial recognition and predictive policing in the city of Pittsburgh. This episode is hosted by Alix Dunn. Our guests are Marta Ziosi and Dasha Pruss. Further Reading: Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool; Refusing and Reusing Data by Catherine D’Ignazio</itunes:summary>
      <itunes:subtitle>In part four of our FAccT deep dive, Alix joins Marta Ziosi and Dasha Pruss to discuss their paper “Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool”. In their paper they discuss how an erosion of p</itunes:subtitle>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>What the FAccT? First law, bad law</title>
      <itunes:episode>6</itunes:episode>
      <podcast:episode>6</podcast:episode>
      <itunes:title>What the FAccT? First law, bad law</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ddee6d4a-b481-4fd8-9268-43a8c8d991bb</guid>
      <link>https://share.transistor.fm/s/f970263e</link>
      <description>
        <![CDATA[<p>In this installment of our FAccT deep dive, recorded at the seventh annual FAccT conference in Rio de Janeiro, Alix joins Lara Groves and Jacob Metcalf to discuss their paper “Auditing Work: Exploring the New York City algorithmic bias audit regime”.</p><p>Lara Groves is a Senior Researcher at the Ada Lovelace Institute. Her most recent project explored the role of third-party auditing regimes in AI governance. Lara has previously led research on the role of public participation in commercial AI labs, and on algorithmic impact assessments. Her research interests include practical and participatory approaches to algorithmic accountability and innovative policy solutions to challenges of governance.</p><p>Before joining Ada, Lara worked as a tech and internet policy consultant, and has experience in research, public affairs and campaigns for think-tanks, political parties and advocacy groups. Lara has an MSc in Democracy from UCL.</p><p>Jacob Metcalf, PhD, is a researcher at Data &amp; Society, where he leads the AI on the Ground Initiative, and works on an NSF-funded multisite project, Pervasive Data Ethics for Computational Research (PERVADE). For this project, he studies how data ethics practices are emerging in environments that have not previously grappled with research ethics, such as industry, IRBs, and civil society organizations. His recent work has focused on the new organizational roles that have developed around AI ethics in tech companies.</p><p>Jake’s consulting firm, Ethical Resolve, provides a range of ethics services, helping clients to make well-informed, consistent, actionable, and timely business decisions that reflect their values. He also serves as the Ethics Subgroup Chair for the IEEE P7000 Standard.</p><p>This episode is hosted by <a href="https://www.alixdunn.com/">Alix Dunn</a>. Our guests are <a href="https://www.adalovelaceinstitute.org/person/lara-groves/">Lara Groves</a> and <a href="https://datasociety.net/people/metcalf-jacob/">Jacob Metcalf</a>.</p><p>Further Reading</p><ul><li>Lara Groves (Ada Lovelace Institute, UK), Jacob Metcalf (Data &amp; Society Research Institute, USA), Alayna Kennedy (Independent researcher, USA), Briana Vecchione (Data &amp; Society Research Institute, USA) and Andrew Strait (Ada Lovelace Institute, UK) - <a href="https://facctconference.org/static/papers24/facct24-74.pdf">Auditing Work: Exploring the New York City algorithmic bias audit regime</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, we speak with Lara Groves and Jacob Metcalf  at the seventh annual FAccT conference in Rio de Janeiro.</p><p>In part four of our FAccT deep dive, Alix joins Lara Groves and Jacob Metcalf  to discuss their paper “ Auditing Work: Exploring the New York City algorithmic bias audit regime”.</p><p>Lara Groves is a Senior Researcher at the Ada Lovelace Institute. Her most recent project explored the role of third-party auditing regimes in AI governance. Lara has previously led research on the role of public participation in commercial AI labs, and on algorithmic impact assessments. Her research interests include practical and participatory approaches to algorithmic accountability and innovative policy solutions to challenges of governance.</p><p>Before joining Ada, Lara worked as a tech and internet policy consultant, and has experience in research, public affairs and campaigns for think-tanks, political parties and advocacy groups. Lara has an MSc in Democracy from UCL.</p><p>Jacob Metcalf, PhD, is a researcher at Data &amp; Society, where he leads the AI on the Ground Initiative, and works on an NSF-funded multisite project, Pervasive Data Ethics for Computational Research (PERVADE). For this project, he studies how data ethics practices are emerging in environments that have not previously grappled with research ethics, such as industry, IRBs, and civil society organizations. His recent work has focused on the new organizational roles that have developed around AI ethics in tech companies.</p><p>Jake’s consulting firm, Ethical Resolve, provides a range of ethics services, helping clients to make well-informed, consistent, actionable, and timely business decisions that reflect their values. He also serves as the Ethics Subgroup Chair for the IEEE P7000 Standard.</p><p>This episode is hosted by <a href="https://www.alixdunn.com/">Alix Dunn</a>. 
Our guests are <a href="https://www.adalovelaceinstitute.org/person/lara-groves/">Lara Groves</a> and <a href="https://datasociety.net/people/metcalf-jacob/">Jacob Metcalf</a>.</p><p>Further Reading</p><ul><li>Lara Groves (Ada Lovelace Institute, UK), Jacob Metcalf (Data &amp; Society Research Institute, USA), Alayna Kennedy (Independent researcher, USA), Briana Vecchione (Data &amp; Society Research Institute, USA) and Andrew Strait (Ada Lovelace Institute, UK) - <a href="https://facctconference.org/static/papers24/facct24-74.pdf">Auditing Work: Exploring the New York City algorithmic bias audit regime</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 05 Jul 2024 00:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/f970263e/9c7f6c9b.mp3" length="17222688" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>1435</itunes:duration>
      <itunes:summary>In this episode, we speak with Lara Groves and Jacob Metcalf at the seventh annual FAccT conference in Rio de Janeiro. In part four of our FAccT deep dive, Alix joins Lara Groves and Jacob Metcalf to discuss their paper “Auditing Work: Exploring the New York City algorithmic bias audit regime”. Lara Groves is a Senior Researcher at the Ada Lovelace Institute. Her most recent project explored the role of third-party auditing regimes in AI governance. Lara has previously led research on the role of public participation in commercial AI labs, and on algorithmic impact assessments. Her research interests include practical and participatory approaches to algorithmic accountability and innovative policy solutions to challenges of governance. Before joining Ada, Lara worked as a tech and internet policy consultant, and has experience in research, public affairs and campaigns for think-tanks, political parties and advocacy groups. Lara has an MSc in Democracy from UCL. Jacob Metcalf, PhD, is a researcher at Data &amp; Society, where he leads the AI on the Ground Initiative, and works on an NSF-funded multisite project, Pervasive Data Ethics for Computational Research (PERVADE). For this project, he studies how data ethics practices are emerging in environments that have not previously grappled with research ethics, such as industry, IRBs, and civil society organizations. His recent work has focused on the new organizational roles that have developed around AI ethics in tech companies. Jake’s consulting firm, Ethical Resolve, provides a range of ethics services, helping clients to make well-informed, consistent, actionable, and timely business decisions that reflect their values. He also serves as the Ethics Subgroup Chair for the IEEE P7000 Standard. This episode is hosted by Alix Dunn. Our guests are Lara Groves and Jacob Metcalf.
Further Reading: Lara Groves (Ada Lovelace Institute, UK), Jacob Metcalf (Data &amp; Society Research Institute, USA), Alayna Kennedy (Independent researcher, USA), Briana Vecchione (Data &amp; Society Research Institute, USA) and Andrew Strait (Ada Lovelace Institute, UK) - Auditing Work: Exploring the New York City algorithmic bias audit regime</itunes:summary>
      <itunes:subtitle>In this episode, we speak with Lara Groves and Jacob Metcalf at the seventh annual FAccT conference in Rio de Janeiro. In part four of our FAccT deep dive, Alix joins Lara Groves and Jacob Metcalf to discuss their paper “Auditing Work: Exploring the Ne</itunes:subtitle>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>What the FAccT?: Abandoning Algorithms</title>
      <itunes:episode>5</itunes:episode>
      <podcast:episode>5</podcast:episode>
      <itunes:title>What the FAccT?: Abandoning Algorithms</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">5ca09497-2201-4bb4-b3bd-83f5bfe76880</guid>
      <link>https://share.transistor.fm/s/a6cb1b19</link>
      <description>
        <![CDATA[<p>In this episode, we speak with Nari Johnson and Sanika Moharana at this year’s FAccT conference in Rio de Janeiro.</p><p>In part two of our FAccT deep dive, Alix joins <a href="https://njohnson99.github.io/">Nari Johnson</a> and <a href="https://www.sanikamoharana.com/">Sanika Moharana</a> to discuss their paper “The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment”.</p><p>Nari Johnson is a third-year PhD student in Carnegie Mellon University's Machine Learning Department, where she is advised by Hoda Heidari. She graduated from Harvard in 2021 with a BA and MS in Computer Science, where she previously worked with Finale Doshi-Velez.</p><p>Sanika Moharana is a second-year PhD student in Human-Computer Interaction at Carnegie Mellon University. As an advocate for human-centered design and research, Sanika practices iterative ideation and prototyping for multimodal interactions and interfaces across intelligent systems, connected smart devices, IoT, AI experiences, and emerging technologies.</p><p>Further Reading</p><ul><li><a href="https://facctconference.org/static/papers24/facct24-25.pdf">The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, we speak with Nari Johnson and Sanika Moharana at this year’s FAccT conference in Rio de Janeiro.</p><p>In part two of our FAccT deep dive, Alix joins <a href="https://njohnson99.github.io/">Nari Johnson</a> and <a href="https://www.sanikamoharana.com/">Sanika Moharana</a> to discuss their paper “The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment”.</p><p>Nari Johnson is a third-year PhD student in Carnegie Mellon University's Machine Learning Department, where she is advised by Hoda Heidari. She graduated from Harvard in 2021 with a BA and MS in Computer Science, where she previously worked with Finale Doshi-Velez.</p><p>Sanika Moharana is a second-year PhD student in Human-Computer Interaction at Carnegie Mellon University. As an advocate for human-centered design and research, Sanika practices iterative ideation and prototyping for multimodal interactions and interfaces across intelligent systems, connected smart devices, IoT, AI experiences, and emerging technologies.</p><p>Further Reading</p><ul><li><a href="https://facctconference.org/static/papers24/facct24-25.pdf">The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 28 Jun 2024 00:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/a6cb1b19/cb12a976.mp3" length="21593427" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>1799</itunes:duration>
      <itunes:summary>In this episode, we speak with Nari Johnson and Sanika Moharana at this year’s FAccT conference in Rio de Janeiro. In part two of our FAccT deep dive, Alix joins Nari Johnson and Sanika Moharana to discuss their paper “The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment”. Nari Johnson is a third-year PhD student in Carnegie Mellon University's Machine Learning Department, where she is advised by Hoda Heidari. She graduated from Harvard in 2021 with a BA and MS in Computer Science, where she previously worked with Finale Doshi-Velez. Sanika Moharana is a second-year PhD student in Human-Computer Interaction at Carnegie Mellon University. As an advocate for human-centered design and research, Sanika practices iterative ideation and prototyping for multimodal interactions and interfaces across intelligent systems, connected smart devices, IoT, AI experiences, and emerging technologies. Further Reading: The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment</itunes:summary>
      <itunes:subtitle>In this episode, we speak with Nari Johnson and Sanika Moharana at this year’s FAccT conference in Rio de Janeiro. In part two of our FAccT deep dive, Alix joins Nari Johnson and Sanika Moharana to discuss their paper “The Fall of an Algorithm: Characteri</itunes:subtitle>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>What the FAccT?: Reformers and Radicals</title>
      <itunes:episode>4</itunes:episode>
      <podcast:episode>4</podcast:episode>
      <itunes:title>What the FAccT?: Reformers and Radicals</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">30501930-1f6a-4a60-8de3-aa6bd57113cf</guid>
      <link>https://share.transistor.fm/s/9404f8c5</link>
      <description>
        <![CDATA[<p>In part 1 of our FAccT conference deep dive, Alix Dunn sits down with co-host Andrew Strait from the Ada Lovelace Institute to talk about the history of FAccT and some of the papers being presented at this year’s event.</p><p>The Fairness, Accountability and Transparency Conference, or FAccT, is an interdisciplinary conference dedicated to bringing together a diverse community of scholars and exploring how socio-technical systems could be built in a way that is compatible with a fair society. The seventh annual FAccT conference was held in Rio de Janeiro, Brazil, from Monday, June 3rd through Thursday, June 6th, 2024, with over five hundred people in attendance.</p><p>This episode is hosted by <a href="https://www.alixdunn.com/">Alix Dunn</a> and our co-host is <a href="https://www.adalovelaceinstitute.org/person/andrew-strait/">Andrew Strait</a>.</p><p>Further Reading:</p><ul><li>Robert Gorwa (WZB Berlin Social Science Center, Germany) and Michael Veale (University College London, UK) - <a href="https://doi.org/10.31235/osf.io/6dfk3">Moderating Model Marketplaces: Platform Governance Puzzles for AI Intermediaries</a></li><li>Marta Ziosi (University of Oxford, UK) and Dasha Pruss (Harvard University, USA) - <a href="https://facctconference.org/static/papers24/facct24-106.pdf">Evidence of What, for Whom?
The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool</a></li><li>Michael Madaio (Google Research, USA), Shivani Kapania (Carnegie Mellon University, USA), Rida Qadri (Google Research, USA), Ding Wang (Google Research, USA), Andrew Zaldivar (Google Research, USA), Remi Denton (Google Research, USA) and Lauren Wilcox (eBay, USA) - <a href="https://facctconference.org/static/papers24/facct24-103.pdf">Learning about Responsible AI On-The-Job: Learning Pathways, Orientations, and Aspirations</a></li><li>David Gray Widder (Digital Life Initiative, Cornell Tech, USA) - <a href="https://facctconference.org/static/papers24/facct24-88.pdf">Epistemic Power in AI Ethics Labor: Legitimizing Located Complaints</a></li><li>Lara Groves (Ada Lovelace Institute, UK), Jacob Metcalf (Data &amp; Society Research Institute, USA), Alayna Kennedy (Independent researcher, USA), Briana Vecchione (Data &amp; Society Research Institute, USA) and Andrew Strait (Ada Lovelace Institute, UK) - <a href="https://facctconference.org/static/papers24/facct24-74.pdf">Auditing Work: Exploring the New York City algorithmic bias audit regime</a></li><li>Nari Johnson (Carnegie Mellon University, USA), Sanika Moharana (Carnegie Mellon University, USA), Christina Harrington (Carnegie Mellon University, USA), Nazanin Andalibi (University of Michigan, USA), Hoda Heidari (Carnegie Mellon University, USA) and Motahhare Eslami (Carnegie Mellon University, USA) - <a href="https://facctconference.org/static/papers24/facct24-25.pdf">The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In part 1 of our FAccT conference deep dive, Alix Dunn sits down with co-host Andrew Strait from the Ada Lovelace Institute to talk about the history of FAccT and some of the papers being presented at this year’s event.</p><p>The Fairness, Accountability and Transparency Conference, or FAccT, is an interdisciplinary conference dedicated to bringing together a diverse community of scholars and exploring how socio-technical systems could be built in a way that is compatible with a fair society. The seventh annual FAccT conference was held in Rio de Janeiro, Brazil, from Monday, June 3rd through Thursday, June 6th, 2024, with over five hundred people in attendance.</p><p>This episode is hosted by <a href="https://www.alixdunn.com/">Alix Dunn</a> and our co-host is <a href="https://www.adalovelaceinstitute.org/person/andrew-strait/">Andrew Strait</a>.</p><p>Further Reading:</p><ul><li>Robert Gorwa (WZB Berlin Social Science Center, Germany) and Michael Veale (University College London, UK) - <a href="https://doi.org/10.31235/osf.io/6dfk3">Moderating Model Marketplaces: Platform Governance Puzzles for AI Intermediaries</a></li><li>Marta Ziosi (University of Oxford, UK) and Dasha Pruss (Harvard University, USA) - <a href="https://facctconference.org/static/papers24/facct24-106.pdf">Evidence of What, for Whom?
The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool</a></li><li>Michael Madaio (Google Research, USA), Shivani Kapania (Carnegie Mellon University, USA), Rida Qadri (Google Research, USA), Ding Wang (Google Research, USA), Andrew Zaldivar (Google Research, USA), Remi Denton (Google Research, USA) and Lauren Wilcox (eBay, USA) - <a href="https://facctconference.org/static/papers24/facct24-103.pdf">Learning about Responsible AI On-The-Job: Learning Pathways, Orientations, and Aspirations</a></li><li>David Gray Widder (Digital Life Initiative, Cornell Tech, USA) - <a href="https://facctconference.org/static/papers24/facct24-88.pdf">Epistemic Power in AI Ethics Labor: Legitimizing Located Complaints</a></li><li>Lara Groves (Ada Lovelace Institute, UK), Jacob Metcalf (Data &amp; Society Research Institute, USA), Alayna Kennedy (Independent researcher, USA), Briana Vecchione (Data &amp; Society Research Institute, USA) and Andrew Strait (Ada Lovelace Institute, UK) - <a href="https://facctconference.org/static/papers24/facct24-74.pdf">Auditing Work: Exploring the New York City algorithmic bias audit regime</a></li><li>Nari Johnson (Carnegie Mellon University, USA), Sanika Moharana (Carnegie Mellon University, USA), Christina Harrington (Carnegie Mellon University, USA), Nazanin Andalibi (University of Michigan, USA), Hoda Heidari (Carnegie Mellon University, USA) and Motahhare Eslami (Carnegie Mellon University, USA) - <a href="https://facctconference.org/static/papers24/facct24-25.pdf">The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 21 Jun 2024 00:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/9404f8c5/f81f761f.mp3" length="39008178" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>3250</itunes:duration>
      <itunes:summary>In part 1 of our FAccT conference deep dive, Alix Dunn sits down with co-host Andrew Strait from the Ada Lovelace Institute to talk about the history of FAccT and some of the papers being presented at this year’s event. The Fairness, Accountability and Transparency Conference, or FAccT, is an interdisciplinary conference dedicated to bringing together a diverse community of scholars and exploring how socio-technical systems could be built in a way that is compatible with a fair society. The seventh annual FAccT conference was held in Rio de Janeiro, Brazil, from Monday, June 3rd through Thursday, June 6th, 2024, with over five hundred people in attendance. This episode is hosted by Alix Dunn and our co-host is Andrew Strait. Further Reading: Robert Gorwa (WZB Berlin Social Science Center, Germany) and Michael Veale (University College London, UK) - Moderating Model Marketplaces: Platform Governance Puzzles for AI Intermediaries; Marta Ziosi (University of Oxford, UK) and Dasha Pruss (Harvard University, USA) - Evidence of What, for Whom?
The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool; Michael Madaio (Google Research, USA), Shivani Kapania (Carnegie Mellon University, USA), Rida Qadri (Google Research, USA), Ding Wang (Google Research, USA), Andrew Zaldivar (Google Research, USA), Remi Denton (Google Research, USA) and Lauren Wilcox (eBay, USA) - Learning about Responsible AI On-The-Job: Learning Pathways, Orientations, and Aspirations; David Gray Widder (Digital Life Initiative, Cornell Tech, USA) - Epistemic Power in AI Ethics Labor: Legitimizing Located Complaints; Lara Groves (Ada Lovelace Institute, UK), Jacob Metcalf (Data &amp; Society Research Institute, USA), Alayna Kennedy (Independent researcher, USA), Briana Vecchione (Data &amp; Society Research Institute, USA) and Andrew Strait (Ada Lovelace Institute, UK) - Auditing Work: Exploring the New York City algorithmic bias audit regime; Nari Johnson (Carnegie Mellon University, USA), Sanika Moharana (Carnegie Mellon University, USA), Christina Harrington (Carnegie Mellon University, USA), Nazanin Andalibi (University of Michigan, USA), Hoda Heidari (Carnegie Mellon University, USA) and Motahhare Eslami (Carnegie Mellon University, USA) - The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment</itunes:summary>
      <itunes:subtitle>In part 1 of our FAccT conference deep dive, Alix Dunn sits down with co-host Andrew Strait from the Ada Lovelace Institute to talk about the history of FAccT and some of the papers being presented at this year’s event. The Fairness, Accountability and Tr</itunes:subtitle>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Protesting Project Nimbus: employee organising to end Google’s contract with Israel w/ Dr. Kate Sim</title>
      <itunes:episode>3</itunes:episode>
      <podcast:episode>3</podcast:episode>
      <itunes:title>Protesting Project Nimbus: employee organising to end Google’s contract with Israel w/ Dr. Kate Sim</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">04ba3458-9979-4ac5-93a1-2243b2e437b8</guid>
      <link>https://share.transistor.fm/s/5939406a</link>
      <description>
        <![CDATA[<p>In this episode, we speak with Dr. Kate Sim, one of the core organisers of the Google Worker Sit-In Against Project Nimbus.</p> <p>Dr. Kate Sim was recently fired from Google, alongside almost 50 other employees, after helping organize a sit-in protesting Project Nimbus, a joint contract between Google and Amazon to provide technology to the Israeli government and military. In this episode, Alix and Dr. Sim discuss technology-enabled violence, Dr. Sim's work in trust and safety, and Google's cancelled Project Maven. They also talk about Dr. Sim's journey into protesting Project Nimbus, the many other voices fighting against the contract, and how Big Tech often obfuscates its responsibility in perpetuating violence. In the end, we arrive at a common lesson: solidarity is our main hope for change.</p> <p>This episode is hosted by <a class="c-link" href="https://www.alixdunn.com/" rel="noopener noreferrer">Alix Dunn</a> and our guest is <a class="c-link" href="https://www.linkedin.com/in/katejsim/" rel="noopener noreferrer">Dr.
Kate Sim</a>.</p> <p>Further Reading</p> <ul> <li><a href="https://www.notechforapartheid.com/">No Tech For Apartheid</a></li> <li><a href="https://www.aljazeera.com/news/2024/4/23/what-is-project-nimbus-and-why-are-google-workers-protesting-israel-deal"> What is Project Nimbus, and why are Google workers protesting Israel deal?</a></li> <li><a class="c-link" href="https://theintercept.com/2024/05/01/google-amazon-nimbus-israel-weapons-arms-gaza/" rel="noopener">Israeli Weapons Firms Required to Buy Cloud Services From Google and Amazon</a></li> <li><a class="c-link" href="https://www.versobooks.com/en-gb/products/2684-the-palestine-laboratory" rel="noopener">The Palestine Laboratory: How Israel Exports the Technology of Occupation Around the World</a></li> <li><a class="c-link" href="https://www.wired.com/story/3-years-maven-uproar-google-warms-pentagon/" rel="noopener noreferrer">3 Years After the Project Maven Uproar, Google Cozies to the Pentagon</a></li> <li><a class="c-link" href="https://home.watson.brown.edu/research/research-briefs/transforming-military-industrial-complex" rel="noopener noreferrer">How Big Tech and Silicon Valley are Transforming the Military-Industrial Complex</a></li> <li><a class="c-link" href="https://www.thenation.com/article/activism/google-firings-gaza-project-nimbus/" rel="noopener noreferrer">Google Fired Us for Protesting Its Complicity in the War on Gaza. But We Won’t Be Silenced.</a></li> <li><a href="https://saysmaybe.ck.page/posts/project-nimbus">Computer Says Maybe Newsletter - Protesting Project Nimbus</a></li> </ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, we speak with Dr. Kate Sim, one of the core organisers of the Google Worker Sit-In Against Project Nimbus.</p> <p>Dr. Kate Sim was recently fired from Google, alongside almost 50 other employees, after helping organize a sit-in protesting Project Nimbus, a joint contract between Google and Amazon to provide technology to the Israeli government and military. In this episode, Alix and Dr. Sim discuss technology-enabled violence, Dr. Sim's work in trust and safety, and Google's cancelled Project Maven. They also talk about Dr. Sim's journey into protesting Project Nimbus, the many other voices fighting against the contract, and how Big Tech often obfuscates its responsibility in perpetuating violence. In the end, we arrive at a common lesson: solidarity is our main hope for change.</p> <p>This episode is hosted by <a class="c-link" href="https://www.alixdunn.com/" rel="noopener noreferrer">Alix Dunn</a> and our guest is <a class="c-link" href="https://www.linkedin.com/in/katejsim/" rel="noopener noreferrer">Dr.
Kate Sim</a>.</p> <p>Further Reading</p> <ul> <li><a href="https://www.notechforapartheid.com/">No Tech For Apartheid</a></li> <li><a href="https://www.aljazeera.com/news/2024/4/23/what-is-project-nimbus-and-why-are-google-workers-protesting-israel-deal"> What is Project Nimbus, and why are Google workers protesting Israel deal?</a></li> <li><a class="c-link" href="https://theintercept.com/2024/05/01/google-amazon-nimbus-israel-weapons-arms-gaza/" rel="noopener">Israeli Weapons Firms Required to Buy Cloud Services From Google and Amazon</a></li> <li><a class="c-link" href="https://www.versobooks.com/en-gb/products/2684-the-palestine-laboratory" rel="noopener">The Palestine Laboratory: How Israel Exports the Technology of Occupation Around the World</a></li> <li><a class="c-link" href="https://www.wired.com/story/3-years-maven-uproar-google-warms-pentagon/" rel="noopener noreferrer">3 Years After the Project Maven Uproar, Google Cozies to the Pentagon</a></li> <li><a class="c-link" href="https://home.watson.brown.edu/research/research-briefs/transforming-military-industrial-complex" rel="noopener noreferrer">How Big Tech and Silicon Valley are Transforming the Military-Industrial Complex</a></li> <li><a class="c-link" href="https://www.thenation.com/article/activism/google-firings-gaza-project-nimbus/" rel="noopener noreferrer">Google Fired Us for Protesting Its Complicity in the War on Gaza. But We Won’t Be Silenced.</a></li> <li><a href="https://saysmaybe.ck.page/posts/project-nimbus">Computer Says Maybe Newsletter - Protesting Project Nimbus</a></li> </ul>]]>
      </content:encoded>
      <pubDate>Thu, 23 May 2024 00:00:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/5939406a/976223b9.mp3" length="37232234" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:duration>3102</itunes:duration>
      <itunes:summary>In this episode, we speak with Dr. Kate Sim, one of the core organisers of the Google Worker Sit-In Against Project Nimbus. Dr. Kate Sim was recently fired from Google, alongside almost 50 other employees, after helping organize a sit-in protesting Project Nimbus, a joint contract between Google and Amazon to provide technology to the Israeli government and military. In this episode, Alix and Dr. Sim discuss technology-enabled violence, Dr. Sim's work in trust and safety, and Google's cancelled Project Maven. They also talk about Dr. Sim's journey into protesting Project Nimbus, the many other voices fighting against the contract, and how Big Tech often obfuscates its responsibility in perpetuating violence. In the end, we arrive at a common lesson: solidarity is our main hope for change. This episode is hosted by Alix Dunn and our guest is Dr. Kate Sim. Further Reading: No Tech For Apartheid; What is Project Nimbus, and why are Google workers protesting Israel deal?; Israeli Weapons Firms Required to Buy Cloud Services From Google and Amazon; The Palestine Laboratory: How Israel Exports the Technology of Occupation Around the World; 3 Years After the Project Maven Uproar, Google Cozies to the Pentagon; How Big Tech and Silicon Valley are Transforming the Military-Industrial Complex; Google Fired Us for Protesting Its Complicity in the War on Gaza. But We Won’t Be Silenced.; Computer Says Maybe Newsletter - Protesting Project Nimbus</itunes:summary>
      <itunes:subtitle>In this episode, we speak with Dr. Kate Sim, one of the core organisers of the Google Worker Sit-In Against Project Nimbus. Dr. Kate Sim was recently fired, alongside almost 50 other employees, from Google after helping organize a sit-in protesting Projec</itunes:subtitle>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
    </item>
    <item>
      <title>The Human in the Loop: What's it like to work in the AI supply chain?</title>
      <itunes:episode>2</itunes:episode>
      <podcast:episode>2</podcast:episode>
      <itunes:title>The Human in the Loop: What's it like to work in the AI supply chain?</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">fd0eb020-e935-4bc6-9441-6f40fb6762bb</guid>
      <link>https://share.transistor.fm/s/98a09479</link>
      <description>
        <![CDATA[<p>In this episode, we talk about the kinds of jobs that are being created as AI systems grow, how those jobs are evolving, what the labour conditions of those jobs are like, and who is benefitting from these systems.</p><p>This episode is hosted by <a href="https://www.alixdunn.com/">Alix Dunn</a> and guests include <a href="https://www.linkedin.com/in/jamesoyange/?originalSubdomain=ke">James (Mojez) Oyange</a>, <a href="https://www.yoyoel.com/">Yoel Roth</a>, <a href="https://techequitycollaborative.org/people/catherine-bracy/">Catherine Bracy</a> and <a href="https://www.coricrider.com/">Cori Crider</a>.</p><p>If you have feedback about the episode or a pet subject that you might want to join forces to develop into an episode, please reach out. You can email <a href="mailto:team@saysmaybe.com">team@saysmaybe.com</a> or share an audio note here: <a href="http://speakpipe.com/saysmaybe">speakpipe.com/saysmaybe.</a></p><p>Further Reading</p><p><em>News Articles</em></p><ul><li><a href="https://time.com/6293271/tiktok-bytedance-kenya-moderator-lawsuit/">Former TikTok Moderator Threatens Lawsuit in Kenya | TIME</a></li><li><a href="https://time.com/6147458/facebook-africa-content-moderation-employee-treatment/">Inside Facebook's African Sweatshop | TIME</a></li><li><a href="https://www.foxglove.org.uk/2023/10/18/kenya-facebook-moderators-talks-collapse/">Back to court: Kenya Facebook content moderators settlement talks collapse - Foxglove</a></li><li><a href="https://time.com/6247678/openai-chatgpt-kenya-workers/">OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | TIME</a></li></ul><p><em>Other Links</em></p><ul><li><a href="https://yalebooks.yale.edu/9780300261479/behind-the-screen">Sarah Roberts: Behind the Screen</a></li><li><a href="https://techequitycollaborative.org/">TechEquity website</a></li><li><a href="https://www.foxglove.org.uk/">Foxglove website</a></li><li><a href="https://thesignalsnetwork.org/">Whistleblower
Support Organization - The Signals Network</a></li><li><a href="https://www.tspa.org/">Trust &amp; Safety Professional Association (tspa.org)</a></li><li><a href="https://integrityinstitute.org/">Integrity Institute</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, we talk about the kinds of jobs that are being created as AI systems grow, how those jobs are evolving, what the labour conditions of those jobs are like, and who is benefitting from these systems.</p><p>This episode is hosted by <a href="https://www.alixdunn.com/">Alix Dunn</a> and guests include <a href="https://www.linkedin.com/in/jamesoyange/?originalSubdomain=ke">James (Mojez) Oyange</a>, <a href="https://www.yoyoel.com/">Yoel Roth</a>, <a href="https://techequitycollaborative.org/people/catherine-bracy/">Catherine Bracy</a> and <a href="https://www.coricrider.com/">Cori Crider</a>.</p><p>If you have feedback about the episode or a pet subject that you might want to join forces to develop into an episode, please reach out. You can email <a href="mailto:team@saysmaybe.com">team@saysmaybe.com</a> or share an audio note here: <a href="http://speakpipe.com/saysmaybe">speakpipe.com/saysmaybe.</a></p><p>Further Reading</p><p><em>News Articles</em></p><ul><li><a href="https://time.com/6293271/tiktok-bytedance-kenya-moderator-lawsuit/">Former TikTok Moderator Threatens Lawsuit in Kenya | TIME</a></li><li><a href="https://time.com/6147458/facebook-africa-content-moderation-employee-treatment/">Inside Facebook's African Sweatshop | TIME</a></li><li><a href="https://www.foxglove.org.uk/2023/10/18/kenya-facebook-moderators-talks-collapse/">Back to court: Kenya Facebook content moderators settlement talks collapse - Foxglove</a></li><li><a href="https://time.com/6247678/openai-chatgpt-kenya-workers/">OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | TIME</a></li></ul><p><em>Other Links</em></p><ul><li><a href="https://yalebooks.yale.edu/9780300261479/behind-the-screen">Sarah Roberts: Behind the Screen</a></li><li><a href="https://techequitycollaborative.org/">TechEquity website</a></li><li><a href="https://www.foxglove.org.uk/">Foxglove website</a></li><li><a href="https://thesignalsnetwork.org/">Whistleblower
Support Organization - The Signals Network</a></li><li><a href="https://www.tspa.org/">Trust &amp; Safety Professional Association (tspa.org)</a></li><li><a href="https://integrityinstitute.org/">Integrity Institute</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 19 Apr 2024 14:18:00 +0000</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/98a09479/ad1684e5.mp3" length="33495751" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/-dNVPdR4aBy7lTljdvEN3pDHBV0x70ACpHEXSSAudqE/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81ZWMw/MTVjMmY1MmFkZWYz/MjQ3NjBjNWE2ZTcy/YjM4NS5wbmc.jpg"/>
      <itunes:duration>2392</itunes:duration>
      <itunes:summary>In this episode, we talk about the kinds of jobs that are being created as AI systems grow, how those jobs are evolving, what the labour conditions of those jobs are like, and who is benefitting from these systems. This episode is hosted by Alix Dunn and guests include James (Mojez) Oyange, Yoel Roth, Catherine Bracy and Cori Crider. If you have feedback about the episode or a pet subject that you might want to join forces to develop into an episode, please reach out. You can email team@saysmaybe.com or share an audio note here: speakpipe.com/saysmaybe. Further Reading. News Articles: Former TikTok Moderator Threatens Lawsuit in Kenya | TIME; Inside Facebook's African Sweatshop | TIME; Back to court: Kenya Facebook content moderators settlement talks collapse - Foxglove; OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | TIME. Other Links: Sarah Roberts: Behind the Screen; TechEquity website; Foxglove website; Whistleblower Support Organization - The Signals Network; Trust &amp; Safety Professional Association (tspa.org); Integrity Institute</itunes:summary>
      <itunes:subtitle>In this episode, we talk about the kinds of jobs that are being created as AI systems grow, how those jobs are evolving, what the labour conditions of those jobs are like, and who is benefitting from these systems.    This episode is hosted by Alix Dunn a</itunes:subtitle>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
    </item>
    <item>
      <title>2024 Elections: Is AI going to wreak havoc?</title>
      <itunes:episode>1</itunes:episode>
      <podcast:episode>1</podcast:episode>
      <itunes:title>2024 Elections: Is AI going to wreak havoc?</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f76882f2-2095-4719-9255-42cb8412955e</guid>
      <link>https://share.transistor.fm/s/b16e9056</link>
      <description>
        <![CDATA[<p>In this episode, we walk through how misinformation and disinformation has been used in past elections to impact outcomes, where we think AI might make a material difference in how elections play out this year, and where we think responsibility lies for the situation we’re in.</p><p>This episode is hosted by <a href="https://www.alixdunn.com/">Alix Dunn</a> and <a href="https://www.linkedin.com/in/prathmjuneja/">Prathm Juneja</a>, and guests include <a href="https://www.witness.org/portfolio_page/sam-gregory/">Sam Gregory</a>, <a href="https://www.aspeninstitute.org/people/josh-lawson/">Josh Lawson</a>, and <a href="https://twitter.com/cward1e?lang=en">Claire Wardle</a>.</p><p>If you have feedback about the episode or a pet subject that you might want to join forces to develop into an episode, please reach out. You can email <a href="mailto:team@saysmaybe.com">team@saysmaybe.com</a> or share an audio note here: <a href="http://speakpipe.com/saysmaybe">speakpipe.com/saysmaybe</a></p><p>--</p><p>Further Reading</p><p><em>Academic Articles</em></p><ul><li><a href="https://zephoria.medium.com/kosa-isnt-designed-to-help-kids-335ab57cddae"> KOSA isn’t designed to help kids, by danah boyd</a></li><li><a href="https://www.danah.org/papers/2024/Techno-legal_Solutionism_PREPRINT.pdf"> Techno-legal Solutionism: Regulating Children’s Online Safety in the United States</a></li><li><a href="https://journals.sagepub.com/doi/10.1177/14614448231161880">A review and provocation: On polarization and platforms</a></li><li><a href="https://knightcolumbia.org/content/how-to-prepare-for-the-deluge-of-generative-ai-on-social-media"> How to Prepare for the Deluge of Generative AI on Social Media</a></li><li><a href="https://www.nature.com/articles/s41467-022-35576-9">Exposure to the Russian Internet Research Agency foreign influence campaign on Twitter in the 2016 US election and its relationship to attitudes and voting behavior</a></li><li><a 
href="https://link.springer.com/article/10.1007/s11109-024-09911-3">Negative Downstream Effects of Alarmist Disinformation Discourse: Evidence from the United States | Political Behavior</a></li></ul><p><em>News Articles</em></p><ul><li><a href="https://www.brennancenter.org/our-work/analysis-opinion/voter-suppression-has-gone-digital"> Voter Suppression Has Gone Digital</a></li><li><a href="https://www.axios.com/2023/06/06/big-tech-misinformation-policies-2024-election"> Big Tech rolls back misinformation measures ahead of 2024</a></li><li><a href="https://www.freepress.net/big-tech-backslide-report">Big Tech Backslide: How Social-Media Rollbacks Endanger Democracy Ahead of the 2024 Elections</a></li><li><a href="https://www.brennancenter.org/our-work/court-cases/murthy-v-missouri-formerly-missouri-v-biden"> Murthy v. Missouri (Formerly Missouri v. Biden)</a></li><li><a href="https://edition.cnn.com/2024/02/08/tech/fcc-scam-robocalls-ai-generated-voices/index.html"> FCC votes to outlaw scam robocalls that use AI-generated voices</a></li></ul><p><em>Other Links</em></p><ul><li><a href="https://openai.com/blog/how-openai-is-approaching-2024-worldwide-elections"> How OpenAI is approaching 2024 worldwide elections</a></li><li><a href="https://www.oii.ox.ac.uk/news-events/how-data-and-artificial-intelligence-are-actually-transforming-american-elections/"> OII | How Data and Artificial Intelligence are Actually Transforming American Elections</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, we walk through how misinformation and disinformation has been used in past elections to impact outcomes, where we think AI might make a material difference in how elections play out this year, and where we think responsibility lies for the situation we’re in.</p><p>This episode is hosted by <a href="https://www.alixdunn.com/">Alix Dunn</a> and <a href="https://www.linkedin.com/in/prathmjuneja/">Prathm Juneja</a>, and guests include <a href="https://www.witness.org/portfolio_page/sam-gregory/">Sam Gregory</a>, <a href="https://www.aspeninstitute.org/people/josh-lawson/">Josh Lawson</a>, and <a href="https://twitter.com/cward1e?lang=en">Claire Wardle</a>.</p><p>If you have feedback about the episode or a pet subject that you might want to join forces to develop into an episode, please reach out. You can email <a href="mailto:team@saysmaybe.com">team@saysmaybe.com</a> or share an audio note here: <a href="http://speakpipe.com/saysmaybe">speakpipe.com/saysmaybe</a></p><p>--</p><p>Further Reading</p><p><em>Academic Articles</em></p><ul><li><a href="https://zephoria.medium.com/kosa-isnt-designed-to-help-kids-335ab57cddae"> KOSA isn’t designed to help kids, by danah boyd</a></li><li><a href="https://www.danah.org/papers/2024/Techno-legal_Solutionism_PREPRINT.pdf"> Techno-legal Solutionism: Regulating Children’s Online Safety in the United States</a></li><li><a href="https://journals.sagepub.com/doi/10.1177/14614448231161880">A review and provocation: On polarization and platforms</a></li><li><a href="https://knightcolumbia.org/content/how-to-prepare-for-the-deluge-of-generative-ai-on-social-media"> How to Prepare for the Deluge of Generative AI on Social Media</a></li><li><a href="https://www.nature.com/articles/s41467-022-35576-9">Exposure to the Russian Internet Research Agency foreign influence campaign on Twitter in the 2016 US election and its relationship to attitudes and voting behavior</a></li><li><a 
href="https://link.springer.com/article/10.1007/s11109-024-09911-3">Negative Downstream Effects of Alarmist Disinformation Discourse: Evidence from the United States | Political Behavior</a></li></ul><p><em>News Articles</em></p><ul><li><a href="https://www.brennancenter.org/our-work/analysis-opinion/voter-suppression-has-gone-digital"> Voter Suppression Has Gone Digital</a></li><li><a href="https://www.axios.com/2023/06/06/big-tech-misinformation-policies-2024-election"> Big Tech rolls back misinformation measures ahead of 2024</a></li><li><a href="https://www.freepress.net/big-tech-backslide-report">Big Tech Backslide: How Social-Media Rollbacks Endanger Democracy Ahead of the 2024 Elections</a></li><li><a href="https://www.brennancenter.org/our-work/court-cases/murthy-v-missouri-formerly-missouri-v-biden"> Murthy v. Missouri (Formerly Missouri v. Biden)</a></li><li><a href="https://edition.cnn.com/2024/02/08/tech/fcc-scam-robocalls-ai-generated-voices/index.html"> FCC votes to outlaw scam robocalls that use AI-generated voices</a></li></ul><p><em>Other Links</em></p><ul><li><a href="https://openai.com/blog/how-openai-is-approaching-2024-worldwide-elections"> How OpenAI is approaching 2024 worldwide elections</a></li><li><a href="https://www.oii.ox.ac.uk/news-events/how-data-and-artificial-intelligence-are-actually-transforming-american-elections/"> OII | How Data and Artificial Intelligence are Actually Transforming American Elections</a></li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 13 Feb 2024 15:20:00 -0100</pubDate>
      <author>Alix Dunn</author>
      <enclosure url="https://media.transistor.fm/b16e9056/f0c439ef.mp3" length="59507911" type="audio/mpeg"/>
      <itunes:author>Alix Dunn</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/iURYDfnRgWP_e9JK9OF64StpOPvtP4MXp9JTw9gWrdM/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9mMDhk/YTRkNWVhZWE5NTll/NmJiNmFkZGY5YzU0/NzJmZC5wbmc.jpg"/>
      <itunes:duration>2125</itunes:duration>
      <itunes:summary>In this episode, we walk through how misinformation and disinformation has been used in past elections to impact outcomes, where we think AI might make a material difference in how elections play out this year, and where we think responsibility lies for the situation we’re in. This episode is hosted by Alix Dunn and Prathm Juneja, and guests include Sam Gregory, Josh Lawson, and Claire Wardle. If you have feedback about the episode or a pet subject that you might want to join forces to develop into an episode, please reach out. You can email team@saysmaybe.com or share an audio note here: speakpipe.com/saysmaybe   -- Further Reading Academic Articles   KOSA isn’t designed to help kids, by danah boyd  Techno-legal Solutionism: Regulating Children’s Online Safety in the United States A review and provocation: On polarization and platforms  How to Prepare for the Deluge of Generative AI on Social Media Exposure to the Russian Internet Research Agency foreign influence campaign on Twitter in the 2016 US election and its relationship to attitudes and voting behavior Negative Downstream Effects of Alarmist Disinformation Discourse: Evidence from the United States | Political Behavior  News Articles   Voter Suppression Has Gone Digital  Big Tech rolls back misinformation measures ahead of 2024 Big Tech Backslide: How Social-Media Rollbacks Endanger Democracy Ahead of the 2024 Elections  Murthy v. Missouri (Formerly Missouri v. Biden)  FCC votes to outlaw scam robocalls that use AI-generated voices  Other Links   How OpenAI is approaching 2024 worldwide elections  OII | How Data and Artificial Intelligence are Actually Transforming American Elections  </itunes:summary>
      <itunes:subtitle>In this episode, we walk through how misinformation and disinformation has been used in past elections to impact outcomes, where we think AI might make a material difference in how elections play out this year, and where we think responsibility lies for t</itunes:subtitle>
      <itunes:keywords></itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
  </channel>
</rss>
