<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="/stylesheet.xsl" type="text/xsl"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <atom:link rel="self" type="application/rss+xml" href="https://feeds.transistor.fm/muckraikers" title="MP3 Audio"/>
    <atom:link rel="hub" href="https://pubsubhubbub.appspot.com/"/>
    <podcast:podping usesPodping="true"/>
    <title>muckrAIkers</title>
    <generator>Transistor (https://transistor.fm)</generator>
    <itunes:new-feed-url>https://feeds.transistor.fm/muckraikers</itunes:new-feed-url>
    <description>Join us as we dig a tiny bit deeper into the hype surrounding "AI" press releases, research papers, and more. Each episode, we'll highlight ongoing research and investigations, bringing some much-needed contextualization, constructive critique, and even a smidge of occasional good-natured teasing to the conversation, trying to find the meaning under all of this muck.</description>
    <copyright>© Kairos.fm</copyright>
    <podcast:guid>da47a1db-11d7-5b77-8a18-e17bfcbc41fd</podcast:guid>
    <podcast:locked>yes</podcast:locked>
    <podcast:person role="Host" href="https://krawczuk.eu" img="https://img.transistorcdn.com/28Y0jOuxZfj9Mxy7Ap0s5-4FRY6iBNdW0gmiQyl5SAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82M2I1/YzVjMWFmNTlhYzA2/YTgyZGI0YTM3MWY5/Zjc5Mi5qcGVn.jpg">Igor Krawczuk</podcast:person>
    <podcast:person role="Host" href="https://jacob-haimes.github.io" img="https://img.transistorcdn.com/aB1ho3T9pmqLJiyQvdThtfo_OIB680463w__i8MoaRA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMDJl/MzFhZDZkYzc4Yjc0/NTQwNzA4OGRlMGU3/NzU3Ni5wbmc.jpg">Jacob Haimes</podcast:person>
    <language>en</language>
    <pubDate>Mon, 13 Apr 2026 11:00:08 -0600</pubDate>
    <lastBuildDate>Mon, 13 Apr 2026 11:01:05 -0600</lastBuildDate>
    <link>https://kairos.fm/muckraikers/</link>
    <image>
      <url>https://img.transistorcdn.com/2Xoj9Q8V0g3EZ4QkfrR-DMBxWLBu5eO1bikePOFS7Ng/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS83MzQ3/NzVkMTk3MWFiM2Nl/ZDRmNzFhMGQxZDQ3/MzM3Yi5wbmc.jpg</url>
      <title>muckrAIkers</title>
      <link>https://kairos.fm/muckraikers/</link>
    </image>
    <itunes:category text="Technology"/>
    <itunes:category text="Science">
      <itunes:category text="Mathematics"/>
    </itunes:category>
    <itunes:type>episodic</itunes:type>
    <itunes:author>Jacob Haimes and Igor Krawczuk</itunes:author>
    <itunes:image href="https://img.transistorcdn.com/2Xoj9Q8V0g3EZ4QkfrR-DMBxWLBu5eO1bikePOFS7Ng/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS83MzQ3/NzVkMTk3MWFiM2Nl/ZDRmNzFhMGQxZDQ3/MzM3Yi5wbmc.jpg"/>
    <itunes:summary>Join us as we dig a tiny bit deeper into the hype surrounding "AI" press releases, research papers, and more. Each episode, we'll highlight ongoing research and investigations, bringing some much-needed contextualization, constructive critique, and even a smidge of occasional good-natured teasing to the conversation, trying to find the meaning under all of this muck.</itunes:summary>
    <itunes:subtitle>Join us as we dig a tiny bit deeper into the hype surrounding "AI" press releases, research papers, and more.</itunes:subtitle>
    <itunes:keywords>AI, artificial intelligence</itunes:keywords>
    <itunes:owner>
      <itunes:name>Jacob Haimes</itunes:name>
      <itunes:email>listen@kairos.fm</itunes:email>
    </itunes:owner>
    <itunes:complete>No</itunes:complete>
    <itunes:explicit>Yes</itunes:explicit>
    <item>
      <title>The Enemy of My Enemy is Still a Corporation</title>
      <itunes:episode>22</itunes:episode>
      <podcast:episode>22</podcast:episode>
      <itunes:title>The Enemy of My Enemy is Still a Corporation</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">aed15123-63b2-431a-b2f0-3088fe57fcce</guid>
      <link>https://kairos.fm/muckraikers/e022</link>
      <description>
        <![CDATA[<p>In this episode, Jacob and Igor break down the DoD vs. Anthropic standoff, tracing how Claude's use in military operations led to Anthropic being designated a supply chain security risk. Perhaps more importantly, why did Anthropic choose to take a stand <em>now</em>, and what can that tell us about the corporation's behavior moving forward? The investigation is used as a case study in how to read the real motivations behind big institutions: intrinsic values, rational self-interest, and realpolitik.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Introduction</li>
<li>(00:44) - DoD-Anthropic Standoff</li>
<li>(15:08) - 3 Buckets of Motivation</li>
<li>(35:39) - How to Read What They're Actually Doing</li>
<li>(44:04) - Is This Designation Even Real?</li>
<li>(53:39) - Recap (Pragmatist's Playbook)</li>
</ul><br><strong>Critical Links</strong><br><em>Below are the most important links for this episode. For more, visit the episode </em><a href="https://kairos.fm/muckraikers/e022"><em>page</em></a><em> on Kairos.fm.</em><ul><li>Ed Zitron <a href="https://www.wheresyoured.at/the-ai-bubble-is-an-information-war/">Substack article</a> - The AI Bubble Is An Information War</li><li>TechPolicy.Press <a href="https://www.techpolicy.press/a-timeline-of-the-anthropic-pentagon-dispute/">article</a> - A Timeline of the Anthropic-Pentagon Dispute</li><li>TechPolicy.Press <a href="https://www.techpolicy.press/how-to-think-about-the-anthropic-pentagon-dispute/">podcast episode</a> - How to Think About the Anthropic-Pentagon Dispute</li><li>BBC <a href="https://www.bbc.com/news/articles/c4g7k7zdd0zo">news article</a> - Anthropic vows to sue Pentagon over supply chain risk label</li><li>The Guardian <a href="https://www.theguardian.com/news/2026/mar/26/ai-got-the-blame-for-the-iran-school-bombing-the-truth-is-far-more-worrying">article</a> - AI got the blame for the Iran school bombing. The truth is far more worrying</li><li>Palantir <a href="https://www.youtube.com/watch?is=2dar2jObPe6emDLQ&amp;v=yrtDgoqWmgM&amp;feature=youtu.be">YouTube video</a> - Multi-Domain AI: The Future of Command and Control</li><li>Into AI Safety <a href="https://kairos.fm/intoaisafety/e029">podcast episode</a> - Drawing Red Lines w/ Su Cizem</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Jacob and Igor break down the DoD vs. Anthropic standoff, tracing how Claude's use in military operations led to Anthropic being designated a supply chain security risk. Perhaps more importantly, why did Anthropic choose to take a stand <em>now</em>, and what can that tell us about the corporation's behavior moving forward? The investigation is used as a case study in how to read the real motivations behind big institutions: intrinsic values, rational self-interest, and realpolitik.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Introduction</li>
<li>(00:44) - DoD-Anthropic Standoff</li>
<li>(15:08) - 3 Buckets of Motivation</li>
<li>(35:39) - How to Read What They're Actually Doing</li>
<li>(44:04) - Is This Designation Even Real?</li>
<li>(53:39) - Recap (Pragmatist's Playbook)</li>
</ul><br><strong>Critical Links</strong><br><em>Below are the most important links for this episode. For more, visit the episode </em><a href="https://kairos.fm/muckraikers/e022"><em>page</em></a><em> on Kairos.fm.</em><ul><li>Ed Zitron <a href="https://www.wheresyoured.at/the-ai-bubble-is-an-information-war/">Substack article</a> - The AI Bubble Is An Information War</li><li>TechPolicy.Press <a href="https://www.techpolicy.press/a-timeline-of-the-anthropic-pentagon-dispute/">article</a> - A Timeline of the Anthropic-Pentagon Dispute</li><li>TechPolicy.Press <a href="https://www.techpolicy.press/how-to-think-about-the-anthropic-pentagon-dispute/">podcast episode</a> - How to Think About the Anthropic-Pentagon Dispute</li><li>BBC <a href="https://www.bbc.com/news/articles/c4g7k7zdd0zo">news article</a> - Anthropic vows to sue Pentagon over supply chain risk label</li><li>The Guardian <a href="https://www.theguardian.com/news/2026/mar/26/ai-got-the-blame-for-the-iran-school-bombing-the-truth-is-far-more-worrying">article</a> - AI got the blame for the Iran school bombing. The truth is far more worrying</li><li>Palantir <a href="https://www.youtube.com/watch?is=2dar2jObPe6emDLQ&amp;v=yrtDgoqWmgM&amp;feature=youtu.be">YouTube video</a> - Multi-Domain AI: The Future of Command and Control</li><li>Into AI Safety <a href="https://kairos.fm/intoaisafety/e029">podcast episode</a> - Drawing Red Lines w/ Su Cizem</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 13 Apr 2026 11:00:00 -0600</pubDate>
      <author>Jacob Haimes and Igor Krawczuk</author>
      <enclosure url="https://op3.dev/e/media.transistor.fm/3cb922a8/4f2e1cbe.mp3" length="111699711" type="audio/mpeg"/>
      <itunes:author>Jacob Haimes and Igor Krawczuk</itunes:author>
      <itunes:duration>3488</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Jacob and Igor break down the DoD vs. Anthropic standoff, tracing how Claude's use in military operations led to Anthropic being designated a supply chain security risk. Perhaps more importantly, why did Anthropic choose to take a stand <em>now</em>, and what can that tell us about the corporation's behavior moving forward? The investigation is used as a case study in how to read the real motivations behind big institutions: intrinsic values, rational self-interest, and realpolitik.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Introduction</li>
<li>(00:44) - DoD-Anthropic Standoff</li>
<li>(15:08) - 3 Buckets of Motivation</li>
<li>(35:39) - How to Read What They're Actually Doing</li>
<li>(44:04) - Is This Designation Even Real?</li>
<li>(53:39) - Recap (Pragmatist's Playbook)</li>
</ul><br><strong>Critical Links</strong><br><em>Below are the most important links for this episode. For more, visit the episode </em><a href="https://kairos.fm/muckraikers/e022"><em>page</em></a><em> on Kairos.fm.</em><ul><li>Ed Zitron <a href="https://www.wheresyoured.at/the-ai-bubble-is-an-information-war/">Substack article</a> - The AI Bubble Is An Information War</li><li>TechPolicy.Press <a href="https://www.techpolicy.press/a-timeline-of-the-anthropic-pentagon-dispute/">article</a> - A Timeline of the Anthropic-Pentagon Dispute</li><li>TechPolicy.Press <a href="https://www.techpolicy.press/how-to-think-about-the-anthropic-pentagon-dispute/">podcast episode</a> - How to Think About the Anthropic-Pentagon Dispute</li><li>BBC <a href="https://www.bbc.com/news/articles/c4g7k7zdd0zo">news article</a> - Anthropic vows to sue Pentagon over supply chain risk label</li><li>The Guardian <a href="https://www.theguardian.com/news/2026/mar/26/ai-got-the-blame-for-the-iran-school-bombing-the-truth-is-far-more-worrying">article</a> - AI got the blame for the Iran school bombing. The truth is far more worrying</li><li>Palantir <a href="https://www.youtube.com/watch?is=2dar2jObPe6emDLQ&amp;v=yrtDgoqWmgM&amp;feature=youtu.be">YouTube video</a> - Multi-Domain AI: The Future of Command and Control</li><li>Into AI Safety <a href="https://kairos.fm/intoaisafety/e029">podcast episode</a> - Drawing Red Lines w/ Su Cizem</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Anthropic, DoD, Department of Defense, Hegseth, Palantir, Project Maven, AI warfare, autonomous weapons, military AI, supply chain risk, defense production act, AI ethics, responsible AI, corporate accountability, credible commitments, AI governance, AI policy, AI skepticism, critical AI analysis, Big Tech accountability, Iran, US military, surveillance, media literacy, systems thinking, corporate behavior, Claude AI, OpenAI, AI safety, tech and politics, AI, artificial intelligence, Department of War, DoW</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:person role="Host" href="https://krawczuk.eu" img="https://img.transistorcdn.com/28Y0jOuxZfj9Mxy7Ap0s5-4FRY6iBNdW0gmiQyl5SAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82M2I1/YzVjMWFmNTlhYzA2/YTgyZGI0YTM3MWY5/Zjc5Mi5qcGVn.jpg">Igor Krawczuk</podcast:person>
      <podcast:person role="Host" href="https://jacob-haimes.github.io" img="https://img.transistorcdn.com/aB1ho3T9pmqLJiyQvdThtfo_OIB680463w__i8MoaRA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMDJl/MzFhZDZkYzc4Yjc0/NTQwNzA4OGRlMGU3/NzU3Ni5wbmc.jpg">Jacob Haimes</podcast:person>
      <podcast:person role="Editor" href="https://muckraikers.transistor.fm/people/chase-precopia">Chase Precopia</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/3cb922a8/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/3cb922a8/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>The Mythical AI Bear</title>
      <itunes:episode>21</itunes:episode>
      <podcast:episode>21</podcast:episode>
      <itunes:title>The Mythical AI Bear</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">064afb70-8677-46e1-aa5c-145ee8222571</guid>
      <link>https://kairos.fm/muckraikers/e021</link>
      <description>
        <![CDATA[<p>This week, Jacob and Igor dissect the "mythical AI bear," the strawman version of AI criticism that gets thrown around in tech discourse. Working through a viral blog post that typifies the genre, they examine how legitimate concerns about code quality, labor displacement, intellectual property, and the erosion of craft get flattened into caricature. Plus: Sam Altman writes ten paragraphs about how unbothered he is by an ad.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Introduction</li>
<li>(00:30) - Altman's Super Bowl Meltdown</li>
<li>(03:11) - What is "The Bear"?</li>
<li>(06:41) - But You Have No Idea What The Code Is</li>
<li>(15:44) - But The Craft, But The Mediocrity, But It'll Never Be AGI</li>
<li>(24:43) - But They Take Our Jobs &amp; But The Plagiarism</li>
<li>(31:21) - Stochastic Parrots &amp; Mythical Bears</li>
<li>(42:34) - Outro</li>
</ul><br><strong>Critical Links<br></strong><em>Below are the most important links for this episode. For more, visit the episode </em><a href="https://kairos.fm/muckraikers/e021"><em>page</em></a><em> on Kairos.fm.</em><ul><li>Big Think <a href="https://bigthink.com/the-present/the-rise-of-ai-denialism">article</a> - The rise of AI denialism</li><li>Fly.io <a href="https://fly.io/blog/youre-all-nuts/">blogpost</a> - My AI Skeptic Friends Are All Nuts</li><li>antirez <a href="https://antirez.com/news/158">blogpost</a> - Don't fall into the anti-AI hype</li><li>Emily Bender <a href="https://buttondown.com/maiht3k/archive/resistance-isnt-denialism/">blogpost</a> - Resistance Isn't Denialism</li><li>Cory Doctorow <a href="https://doctorow.medium.com/https-pluralistic-net-2025-09-11-vulgar-thatcherism-there-is-an-alternative-f1428b42a8fd">blogpost</a> - Reverse centaurs are the answer to the AI paradox</li><li>Washington Post <a href="https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/">article</a> - The AI boom is so huge it's causing shortages everywhere else</li><li>Business Insider <a href="https://www.businessinsider.com/jeremy-grantham-ai-bubble-nvidia-tech-stocks-stock-market-crash-2026-1">article</a> - Veteran investor Jeremy Grantham says AI is 'obviously a bubble'</li><li>Ipsos <a href="https://www.ipsos.com/en-us/google-ipsos-multi-country-ai-survey-2026">survey</a> - Google / Ipsos Multi-Country AI Survey 2026</li><li>Understanding AI <a href="https://www.understandingai.org/p/ai-skeptics-and-ai-boosters-are-both">Substack post</a> - AI skeptics and AI boosters are both wrong</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week, Jacob and Igor dissect the "mythical AI bear," the strawman version of AI criticism that gets thrown around in tech discourse. Working through a viral blog post that typifies the genre, they examine how legitimate concerns about code quality, labor displacement, intellectual property, and the erosion of craft get flattened into caricature. Plus: Sam Altman writes ten paragraphs about how unbothered he is by an ad.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Introduction</li>
<li>(00:30) - Altman's Super Bowl Meltdown</li>
<li>(03:11) - What is "The Bear"?</li>
<li>(06:41) - But You Have No Idea What The Code Is</li>
<li>(15:44) - But The Craft, But The Mediocrity, But It'll Never Be AGI</li>
<li>(24:43) - But They Take Our Jobs &amp; But The Plagiarism</li>
<li>(31:21) - Stochastic Parrots &amp; Mythical Bears</li>
<li>(42:34) - Outro</li>
</ul><br><strong>Critical Links<br></strong><em>Below are the most important links for this episode. For more, visit the episode </em><a href="https://kairos.fm/muckraikers/e021"><em>page</em></a><em> on Kairos.fm.</em><ul><li>Big Think <a href="https://bigthink.com/the-present/the-rise-of-ai-denialism">article</a> - The rise of AI denialism</li><li>Fly.io <a href="https://fly.io/blog/youre-all-nuts/">blogpost</a> - My AI Skeptic Friends Are All Nuts</li><li>antirez <a href="https://antirez.com/news/158">blogpost</a> - Don't fall into the anti-AI hype</li><li>Emily Bender <a href="https://buttondown.com/maiht3k/archive/resistance-isnt-denialism/">blogpost</a> - Resistance Isn't Denialism</li><li>Cory Doctorow <a href="https://doctorow.medium.com/https-pluralistic-net-2025-09-11-vulgar-thatcherism-there-is-an-alternative-f1428b42a8fd">blogpost</a> - Reverse centaurs are the answer to the AI paradox</li><li>Washington Post <a href="https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/">article</a> - The AI boom is so huge it's causing shortages everywhere else</li><li>Business Insider <a href="https://www.businessinsider.com/jeremy-grantham-ai-bubble-nvidia-tech-stocks-stock-market-crash-2026-1">article</a> - Veteran investor Jeremy Grantham says AI is 'obviously a bubble'</li><li>Ipsos <a href="https://www.ipsos.com/en-us/google-ipsos-multi-country-ai-survey-2026">survey</a> - Google / Ipsos Multi-Country AI Survey 2026</li><li>Understanding AI <a href="https://www.understandingai.org/p/ai-skeptics-and-ai-boosters-are-both">Substack post</a> - AI skeptics and AI boosters are both wrong</li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 17 Mar 2026 12:00:00 -0600</pubDate>
      <author>Jacob Haimes and Igor Krawczuk</author>
      <enclosure url="https://op3.dev/e/media.transistor.fm/c3b2220c/606ab054.mp3" length="82977728" type="audio/mpeg"/>
      <itunes:author>Jacob Haimes and Igor Krawczuk</itunes:author>
      <itunes:duration>2591</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This week, Jacob and Igor dissect the "mythical AI bear," the strawman version of AI criticism that gets thrown around in tech discourse. Working through a viral blog post that typifies the genre, they examine how legitimate concerns about code quality, labor displacement, intellectual property, and the erosion of craft get flattened into caricature. Plus: Sam Altman writes ten paragraphs about how unbothered he is by an ad.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Introduction</li>
<li>(00:30) - Altman's Super Bowl Meltdown</li>
<li>(03:11) - What is "The Bear"?</li>
<li>(06:41) - But You Have No Idea What The Code Is</li>
<li>(15:44) - But The Craft, But The Mediocrity, But It'll Never Be AGI</li>
<li>(24:43) - But They Take Our Jobs &amp; But The Plagiarism</li>
<li>(31:21) - Stochastic Parrots &amp; Mythical Bears</li>
<li>(42:34) - Outro</li>
</ul><br><strong>Critical Links<br></strong><em>Below are the most important links for this episode. For more, visit the episode </em><a href="https://kairos.fm/muckraikers/e021"><em>page</em></a><em> on Kairos.fm.</em><ul><li>Big Think <a href="https://bigthink.com/the-present/the-rise-of-ai-denialism">article</a> - The rise of AI denialism</li><li>Fly.io <a href="https://fly.io/blog/youre-all-nuts/">blogpost</a> - My AI Skeptic Friends Are All Nuts</li><li>antirez <a href="https://antirez.com/news/158">blogpost</a> - Don't fall into the anti-AI hype</li><li>Emily Bender <a href="https://buttondown.com/maiht3k/archive/resistance-isnt-denialism/">blogpost</a> - Resistance Isn't Denialism</li><li>Cory Doctorow <a href="https://doctorow.medium.com/https-pluralistic-net-2025-09-11-vulgar-thatcherism-there-is-an-alternative-f1428b42a8fd">blogpost</a> - Reverse centaurs are the answer to the AI paradox</li><li>Washington Post <a href="https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/">article</a> - The AI boom is so huge it's causing shortages everywhere else</li><li>Business Insider <a href="https://www.businessinsider.com/jeremy-grantham-ai-bubble-nvidia-tech-stocks-stock-market-crash-2026-1">article</a> - Veteran investor Jeremy Grantham says AI is 'obviously a bubble'</li><li>Ipsos <a href="https://www.ipsos.com/en-us/google-ipsos-multi-country-ai-survey-2026">survey</a> - Google / Ipsos Multi-Country AI Survey 2026</li><li>Understanding AI <a href="https://www.understandingai.org/p/ai-skeptics-and-ai-boosters-are-both">Substack post</a> - AI skeptics and AI boosters are both wrong</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:person role="Host" href="https://krawczuk.eu" img="https://img.transistorcdn.com/28Y0jOuxZfj9Mxy7Ap0s5-4FRY6iBNdW0gmiQyl5SAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82M2I1/YzVjMWFmNTlhYzA2/YTgyZGI0YTM3MWY5/Zjc5Mi5qcGVn.jpg">Igor Krawczuk</podcast:person>
      <podcast:person role="Host" href="https://jacob-haimes.github.io" img="https://img.transistorcdn.com/aB1ho3T9pmqLJiyQvdThtfo_OIB680463w__i8MoaRA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMDJl/MzFhZDZkYzc4Yjc0/NTQwNzA4OGRlMGU3/NzU3Ni5wbmc.jpg">Jacob Haimes</podcast:person>
      <podcast:person role="Editor" href="https://muckraikers.transistor.fm/people/chase-precopia">Chase Precopia</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/c3b2220c/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/c3b2220c/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Big Tech Plans to Move Fast and Break Democracy</title>
      <itunes:episode>20</itunes:episode>
      <podcast:episode>20</podcast:episode>
      <itunes:title>Big Tech Plans to Move Fast and Break Democracy</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4e81cec6-39a1-4929-8c28-9d8ad61bebc3</guid>
      <link>https://kairos.fm/muckraikers/e020</link>
      <description>
        <![CDATA[<p>We're talking about developments in AI while those in power have unapologetically revealed their true fascist intentions; are we spending our time in the right way? Igor and I discuss the importance of shining a light on the techno-authoritarians who have played a very significant role in the current state of the world.</p><p>While we discuss the murders of Nicole Good and Alex Pretti during this episode, it's important that we also acknowledge the many marginalized people who have died as a result of ICE's behavior, for whom the same level of outcry didn't happen. Six additional individuals died in ICE custody under suspicious circumstances between January 1st and 25th of 2026: Victor Manuel Díaz, Geraldo Lunas Campos, Luis Gustavo Núñez Cáceres, Luis Beltrán Yáñez-Cruz, Parady La, and Heber Sánchez Domínguez.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Introduction</li>
<li>(03:57) - The Authoritarian Stack</li>
<li>(08:33) - Palantir &amp; Thiel-Government Consolidation</li>
<li>(13:44) - Move Fast &amp; Break Everything</li>
<li>(23:14) - Fascism in the US &amp; Starving the Beast</li>
<li>(39:48) - Finding Local Opportunities for Action</li>
</ul><br><strong>Critical Links<br></strong><em>Below are the most important links for this episode. For more, visit the episode </em><a href="https://kairos.fm/muckraikers/e020"><em>page</em></a><em> on Kairos.fm.</em><ul><li>The Authoritarian Stack <a href="https://www.authoritarian-stack.info/">website</a></li><li>Project 2025 Observer <a href="https://www.project2025.observer/en">website</a></li><li>EFF <a href="https://www.eff.org/deeplinks/2026/01/report-ice-using-palantir-tool-feeds-medicaid-data">report</a> - ICE Using Palantir Tool Feeds on Medicaid Data</li><li>The Guardian <a href="https://www.theguardian.com/us-news/2026/jan/28/deaths-ice-2026-">article</a> - Eight people have died in dealings with ICE so far in 2026. These are their stories</li><li>Indivisible <a href="https://indivisible.org/">website</a></li><li>Distributed AI Research Institute <a href="https://www.dair-institute.org/projects/">projects</a></li><li>EAAMO <a href="https://www.eaamo.org/">website</a> - Mechanism Design for Social Good</li><li>Carlos Maza <a href="https://www.youtube.com/watch?v=iJaE_BvLK6U">video</a> - How To Be Hopeless</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>We're talking about developments in AI while those in power have unapologetically revealed their true fascist intentions; are we spending our time in the right way? Igor and I discuss the importance of shining a light on the techno-authoritarians who have played a very significant role in the current state of the world.</p><p>While we discuss the murders of Nicole Good and Alex Pretti during this episode, it's important that we also acknowledge the many marginalized people who have died as a result of ICE's behavior, for whom the same level of outcry didn't happen. Six additional individuals died in ICE custody under suspicious circumstances between January 1st and 25th of 2026: Victor Manuel Díaz, Geraldo Lunas Campos, Luis Gustavo Núñez Cáceres, Luis Beltrán Yáñez-Cruz, Parady La, and Heber Sánchez Domínguez.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Introduction</li>
<li>(03:57) - The Authoritarian Stack</li>
<li>(08:33) - Palantir &amp; Thiel-Government Consolidation</li>
<li>(13:44) - Move Fast &amp; Break Everything</li>
<li>(23:14) - Fascism in the US &amp; Starving the Beast</li>
<li>(39:48) - Finding Local Opportunities for Action</li>
</ul><br><strong>Critical Links<br></strong><em>Below are the most important links for this episode. For more, visit the episode </em><a href="https://kairos.fm/muckraikers/e020"><em>page</em></a><em> on Kairos.fm.</em><ul><li>The Authoritarian Stack <a href="https://www.authoritarian-stack.info/">website</a></li><li>Project 2025 Observer <a href="https://www.project2025.observer/en">website</a></li><li>EFF <a href="https://www.eff.org/deeplinks/2026/01/report-ice-using-palantir-tool-feeds-medicaid-data">report</a> - ICE Using Palantir Tool Feeds on Medicaid Data</li><li>The Guardian <a href="https://www.theguardian.com/us-news/2026/jan/28/deaths-ice-2026-">article</a> - Eight people have died in dealings with ICE so far in 2026. These are their stories</li><li>Indivisible <a href="https://indivisible.org/">website</a></li><li>Distributed AI Research Institute <a href="https://www.dair-institute.org/projects/">projects</a></li><li>EAAMO <a href="https://www.eaamo.org/">website</a> - Mechanism Design for Social Good</li><li>Carlos Maza <a href="https://www.youtube.com/watch?v=iJaE_BvLK6U">video</a> - How To Be Hopeless</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 09 Feb 2026 14:00:00 -0700</pubDate>
      <author>Jacob Haimes and Igor Krawczuk</author>
      <enclosure url="https://op3.dev/e/media.transistor.fm/20244848/0cebc071.mp3" length="95075968" type="audio/mpeg"/>
      <itunes:author>Jacob Haimes and Igor Krawczuk</itunes:author>
      <itunes:duration>2969</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>We're talking about developments in AI while those in power have unapologetically revealed their true fascist intentions; are we spending our time in the right way? Igor and I discuss the importance of shining a light on the techno-authoritarians who have played a very significant role in the current state of the world.</p><p>While we discuss the murders of Nicole Good and Alex Pretti during this episode, it's important that we also acknowledge the many marginalized people who have died as a result of ICE's behavior, for whom the same level of outcry didn't happen. Six additional individuals died in ICE custody under suspicious circumstances between January 1st and 25th of 2026: Victor Manuel Díaz, Geraldo Lunas Campos, Luis Gustavo Núñez Cáceres, Luis Beltrán Yáñez-Cruz, Parady La, and Heber Sánchez Domínguez.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Introduction</li>
<li>(03:57) - The Authoritarian Stack</li>
<li>(08:33) - Palantir &amp; Thiel-Government Consolidation</li>
<li>(13:44) - Move Fast &amp; Break Everything</li>
<li>(23:14) - Fascism in the US &amp; Starving the Beast</li>
<li>(39:48) - Finding Local Opportunities for Action</li>
</ul><br><strong>Critical Links<br></strong><em>Below are the most important links for this episode. For more, visit the episode </em><a href="https://kairos.fm/muckraikers/e020"><em>page</em></a><em> on Kairos.fm.</em><ul><li>The Authoritarian Stack <a href="https://www.authoritarian-stack.info/">website</a></li><li>Project 2025 Observer <a href="https://www.project2025.observer/en">website</a></li><li>EFF <a href="https://www.eff.org/deeplinks/2026/01/report-ice-using-palantir-tool-feeds-medicaid-data">report</a> - ICE Using Palantir Tool Feeds on Medicaid Data</li><li>The Guardian <a href="https://www.theguardian.com/us-news/2026/jan/28/deaths-ice-2026-">article</a> - Eight people have died in dealings with ICE so far in 2026. These are their stories</li><li>Indivisible <a href="https://indivisible.org/">website</a></li><li>Distributed AI Research Institute <a href="https://www.dair-institute.org/projects/">projects</a></li><li>EAAMO <a href="https://www.eaamo.org/">website</a> - Mechanism Design for Social Good</li><li>Carlos Maza <a href="https://www.youtube.com/watch?v=iJaE_BvLK6U">video</a> - How To Be Hopeless</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, authoritarian stack, peter thiel, jd vance, techno-authoritarianism, techno authoritarianism, authoritarianism, fascism, Minneapolis, Minnesota, ICE, ICE AI, ICE overreach, Trump, AI safety, AI persecution, Renee Good, Alex Pretti, techno-fascism, palantir, AI fascists, big tech, techno-oligarchy</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:person role="Host" href="https://krawczuk.eu" img="https://img.transistorcdn.com/28Y0jOuxZfj9Mxy7Ap0s5-4FRY6iBNdW0gmiQyl5SAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82M2I1/YzVjMWFmNTlhYzA2/YTgyZGI0YTM3MWY5/Zjc5Mi5qcGVn.jpg">Igor Krawczuk</podcast:person>
      <podcast:person role="Host" href="https://jacob-haimes.github.io" img="https://img.transistorcdn.com/aB1ho3T9pmqLJiyQvdThtfo_OIB680463w__i8MoaRA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMDJl/MzFhZDZkYzc4Yjc0/NTQwNzA4OGRlMGU3/NzU3Ni5wbmc.jpg">Jacob Haimes</podcast:person>
      <podcast:person role="Editor" href="https://muckraikers.transistor.fm/people/chase-precopia">Chase Precopia</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/20244848/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/20244848/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>AI Skeptic PWNED by Facts and Logic</title>
      <itunes:episode>19</itunes:episode>
      <podcast:episode>19</podcast:episode>
      <itunes:title>AI Skeptic PWNED by Facts and Logic</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d0379f25-8910-4f0c-a878-1dea28cdca3a</guid>
      <link>https://kairos.fm/muckraikers/e019</link>
      <description>
        <![CDATA[<p>Igor shares a significant shift in his perspective on AI coding tools after experiencing the latest Claude Code release. He's been the stronger AI skeptic between the two of us, but recent developments have shown him genuine utility in specific coding tasks; this doesn't validate the hype or change the fundamental critiques.</p><p>We discuss what "rote tasks" are and why they're now automatable with enough investment, the difference between genuine utility and AGI claims, and how this update actually impacts our bubble analysis. We explore how massive investment has finally produced something useful for a narrow domain, but that doesn't mean the technology is generalizable or that AGI is real.</p><p><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction</li>
<li>(05:07) - What Changed Igor’s Mind</li>
<li>(18:27) - Rote Tasks Explained</li>
<li>(23:31) - How Does This Impact our Bubble Analysis?</li>
<li>(30:48) - AGI Is Still BS</li>
<li>(34:07) - Externalities Remain Unchanged</li>
<li>(37:49) - Final Thoughts &amp; Outro</li>
</ul><br><strong>Links</strong><ul><li>Related muckrAIkers <a href="https://kairos.fm/muckraikers/e018">episode</a> - Tech Bros Love AI Waifus</li></ul><p><strong>Bubble Talk</strong></p><ul><li>OfficeChai <a href="https://officechai.com/ai/openai-hasnt-completed-a-successful-full-scale-pretraining-run-since-gpt-4o-in-may-2024-says-semianalysis/">article</a> - OpenAI Hasn’t Completed A Successful Full-Scale Pretraining Run Since GPT-4o In May 2024, Says SemiAnalysis</li><li>Vechron <a href="https://vechron.com/2025/12/anthropic-hires-wilson-sonsini-ipo-2026-openai-race/">report</a> - Anthropic Prepares for Potential 2026 IPO in Bid to Rival OpenAI</li><li>YCombinator Forum <a href="https://news.ycombinator.com/item?id=46218419">post</a> on AI crash</li><li>YCombinator Forum <a href="https://news.ycombinator.com/item?id=46250332">post</a> on OpenAI adopting Anthropic's "skills"</li><li>YCombinator Forum <a href="https://news.ycombinator.com/item?id=46126750">post</a> on OpenAI rumors</li><li>YCombinator Forum <a href="https://news.ycombinator.com/item?id=46182582">post</a> on OpenAI ad suggestions</li></ul><p><strong>Other Sources</strong></p><ul><li>LinkedIn <a href="https://www.linkedin.com/feed/update/urn:li:activity:7405638898338537472/">post</a> discussing an agentic coding vibe shift</li><li><a href="https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/">Executive Order</a> - Ensuring a National Policy Framework for Artificial Intelligence</li><li>Inside Tech Law <a href="https://www.insidetechlaw.com/blog/2025/11/germany-delivers-landmark-copyright-ruling-against-openai-what-it-means-for-ai-and-ip">blogpost</a> - Germany delivers landmark copyright ruling against OpenAI: What it means for AI and IP</li><li>NeurIPS 2025 <a href="https://arxiv.org/abs/2509.26427">paper</a> - Ascent Fails to Forget</li><li>NBER <a href="https://www.nber.org/papers/w33777">working paper</a> - Large Language Models, Small Labor Market Effects</li><li>Dwarkesh Podcast <a href="https://www.dwarkesh.com/p/bits-per-sample">blogpost</a> - RL is even more information inefficient than you thought</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Igor shares a significant shift in his perspective on AI coding tools after experiencing the latest Claude Code release. He's been the stronger AI skeptic between the two of us, but recent developments have shown him genuine utility in specific coding tasks; this doesn't validate the hype or change the fundamental critiques.</p><p>We discuss what "rote tasks" are and why they're now automatable with enough investment, the difference between genuine utility and AGI claims, and how this update actually impacts our bubble analysis. We explore how massive investment has finally produced something useful for a narrow domain, but that doesn't mean the technology is generalizable or that AGI is real.</p><p><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction</li>
<li>(05:07) - What Changed Igor’s Mind</li>
<li>(18:27) - Rote Tasks Explained</li>
<li>(23:31) - How Does This Impact our Bubble Analysis?</li>
<li>(30:48) - AGI Is Still BS</li>
<li>(34:07) - Externalities Remain Unchanged</li>
<li>(37:49) - Final Thoughts &amp; Outro</li>
</ul><br><strong>Links</strong><ul><li>Related muckrAIkers <a href="https://kairos.fm/muckraikers/e018">episode</a> - Tech Bros Love AI Waifus</li></ul><p><strong>Bubble Talk</strong></p><ul><li>OfficeChai <a href="https://officechai.com/ai/openai-hasnt-completed-a-successful-full-scale-pretraining-run-since-gpt-4o-in-may-2024-says-semianalysis/">article</a> - OpenAI Hasn’t Completed A Successful Full-Scale Pretraining Run Since GPT-4o In May 2024, Says SemiAnalysis</li><li>Vechron <a href="https://vechron.com/2025/12/anthropic-hires-wilson-sonsini-ipo-2026-openai-race/">report</a> - Anthropic Prepares for Potential 2026 IPO in Bid to Rival OpenAI</li><li>YCombinator Forum <a href="https://news.ycombinator.com/item?id=46218419">post</a> on AI crash</li><li>YCombinator Forum <a href="https://news.ycombinator.com/item?id=46250332">post</a> on OpenAI adopting Anthropic's "skills"</li><li>YCombinator Forum <a href="https://news.ycombinator.com/item?id=46126750">post</a> on OpenAI rumors</li><li>YCombinator Forum <a href="https://news.ycombinator.com/item?id=46182582">post</a> on OpenAI ad suggestions</li></ul><p><strong>Other Sources</strong></p><ul><li>LinkedIn <a href="https://www.linkedin.com/feed/update/urn:li:activity:7405638898338537472/">post</a> discussing an agentic coding vibe shift</li><li><a href="https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/">Executive Order</a> - Ensuring a National Policy Framework for Artificial Intelligence</li><li>Inside Tech Law <a href="https://www.insidetechlaw.com/blog/2025/11/germany-delivers-landmark-copyright-ruling-against-openai-what-it-means-for-ai-and-ip">blogpost</a> - Germany delivers landmark copyright ruling against OpenAI: What it means for AI and IP</li><li>NeurIPS 2025 <a href="https://arxiv.org/abs/2509.26427">paper</a> - Ascent Fails to Forget</li><li>NBER <a href="https://www.nber.org/papers/w33777">working paper</a> - Large Language Models, Small Labor Market Effects</li><li>Dwarkesh Podcast <a href="https://www.dwarkesh.com/p/bits-per-sample">blogpost</a> - RL is even more information inefficient than you thought</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 12 Jan 2026 09:00:00 -0700</pubDate>
      <author>Jacob Haimes and Igor Krawczuk</author>
      <enclosure url="https://op3.dev/e/media.transistor.fm/420d029d/9c3dd62e.mp3" length="74687092" type="audio/mpeg"/>
      <itunes:author>Jacob Haimes and Igor Krawczuk</itunes:author>
      <itunes:duration>2332</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Igor shares a significant shift in his perspective on AI coding tools after experiencing the latest Claude Code release. He's been the stronger AI skeptic between the two of us, but recent developments have shown him genuine utility in specific coding tasks; this doesn't validate the hype or change the fundamental critiques.</p><p>We discuss what "rote tasks" are and why they're now automatable with enough investment, the difference between genuine utility and AGI claims, and how this update actually impacts our bubble analysis. We explore how massive investment has finally produced something useful for a narrow domain, but that doesn't mean the technology is generalizable or that AGI is real.</p><p><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction</li>
<li>(05:07) - What Changed Igor’s Mind</li>
<li>(18:27) - Rote Tasks Explained</li>
<li>(23:31) - How Does This Impact our Bubble Analysis?</li>
<li>(30:48) - AGI Is Still BS</li>
<li>(34:07) - Externalities Remain Unchanged</li>
<li>(37:49) - Final Thoughts &amp; Outro</li>
</ul><br><strong>Links</strong><ul><li>Related muckrAIkers <a href="https://kairos.fm/muckraikers/e018">episode</a> - Tech Bros Love AI Waifus</li></ul><p><strong>Bubble Talk</strong></p><ul><li>OfficeChai <a href="https://officechai.com/ai/openai-hasnt-completed-a-successful-full-scale-pretraining-run-since-gpt-4o-in-may-2024-says-semianalysis/">article</a> - OpenAI Hasn’t Completed A Successful Full-Scale Pretraining Run Since GPT-4o In May 2024, Says SemiAnalysis</li><li>Vechron <a href="https://vechron.com/2025/12/anthropic-hires-wilson-sonsini-ipo-2026-openai-race/">report</a> - Anthropic Prepares for Potential 2026 IPO in Bid to Rival OpenAI</li><li>YCombinator Forum <a href="https://news.ycombinator.com/item?id=46218419">post</a> on AI crash</li><li>YCombinator Forum <a href="https://news.ycombinator.com/item?id=46250332">post</a> on OpenAI adopting Anthropic's "skills"</li><li>YCombinator Forum <a href="https://news.ycombinator.com/item?id=46126750">post</a> on OpenAI rumors</li><li>YCombinator Forum <a href="https://news.ycombinator.com/item?id=46182582">post</a> on OpenAI ad suggestions</li></ul><p><strong>Other Sources</strong></p><ul><li>LinkedIn <a href="https://www.linkedin.com/feed/update/urn:li:activity:7405638898338537472/">post</a> discussing an agentic coding vibe shift</li><li><a href="https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/">Executive Order</a> - Ensuring a National Policy Framework for Artificial Intelligence</li><li>Inside Tech Law <a href="https://www.insidetechlaw.com/blog/2025/11/germany-delivers-landmark-copyright-ruling-against-openai-what-it-means-for-ai-and-ip">blogpost</a> - Germany delivers landmark copyright ruling against OpenAI: What it means for AI and IP</li><li>NeurIPS 2025 <a href="https://arxiv.org/abs/2509.26427">paper</a> - Ascent Fails to Forget</li><li>NBER <a href="https://www.nber.org/papers/w33777">working paper</a> - Large Language Models, Small Labor Market Effects</li><li>Dwarkesh Podcast <a href="https://www.dwarkesh.com/p/bits-per-sample">blogpost</a> - RL is even more information inefficient than you thought</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI criticism, Claude Code, AI capabilities, coding automation, vibe coding, AI bubble, AGI skepticism, rote tasks, AI externalities, tech criticism, agentic AI, agentic coding, AI ROI</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:person role="Host" href="https://krawczuk.eu" img="https://img.transistorcdn.com/28Y0jOuxZfj9Mxy7Ap0s5-4FRY6iBNdW0gmiQyl5SAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82M2I1/YzVjMWFmNTlhYzA2/YTgyZGI0YTM3MWY5/Zjc5Mi5qcGVn.jpg">Igor Krawczuk</podcast:person>
      <podcast:person role="Host" href="https://jacob-haimes.github.io" img="https://img.transistorcdn.com/aB1ho3T9pmqLJiyQvdThtfo_OIB680463w__i8MoaRA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMDJl/MzFhZDZkYzc4Yjc0/NTQwNzA4OGRlMGU3/NzU3Ni5wbmc.jpg">Jacob Haimes</podcast:person>
      <podcast:person role="Editor" href="https://muckraikers.transistor.fm/people/chase-precopia">Chase Precopia</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/420d029d/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/420d029d/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Tech Bros Love AI Waifus</title>
      <itunes:episode>18</itunes:episode>
      <podcast:episode>18</podcast:episode>
      <itunes:title>Tech Bros Love AI Waifus</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">38891d0a-5077-42fa-b926-fd95aef7e092</guid>
      <link>https://kairos.fm/muckraikers/e018</link>
      <description>
        <![CDATA[<p>OpenAI is pivoting to porn while public sentiment turns decisively against AI. Pew Research shows Americans who are concerned about AI now outnumber those who are excited by a 2:1 margin. We trace how we got here: broken promises of cancer cures replaced by addiction mechanics and expensive APIs. Meanwhile, data centers are hiding a near-recession, straining power grids, and literally breaking your household appliances. Drawing parallels to the 1970s AI winter, we argue the bubble is shaking and needs to pop now, before it becomes another 2008. The good news? Grassroots resistance works. Protests have already blocked $64 billion in data center projects.</p><p>NOTE: The project that we cite for the $64 billion blockage is actually a pro-data-center campaign. The numbers still seem OK, but it's worth being aware of.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Introduction</li>
<li>(06:45) - The Addiction Business Model</li>
<li>(10:15) - Public Sentiment Data</li>
<li>(22:45) - Data Centers and Infrastructure Problems</li>
<li>(36:30) - The Bubble Discussion</li>
<li>(44:36) - Closing Thoughts &amp; Outro</li>
</ul><br><strong>Links<br>Public Sentiment on AI</strong><ul><li>Pew Research <a href="https://www.pewresearch.org/global/2025/10/15/how-people-around-the-world-view-ai/">report</a> - How People Around the World View AI</li><li>Pew Research <a href="https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/">report</a> - How the U.S. Public and AI Experts View Artificial Intelligence</li><li>Pew Research <a href="https://www.pewresearch.org/science/2025/09/17/how-americans-view-ai-and-its-impact-on-people-and-society/">report</a> - How Americans View AI and Its Impact on People and Society</li><li>University of Toronto <a href="https://srinstitute.utoronto.ca/public-opinion-ai">report</a> - Trust, attitudes and use of artificial intelligence: A global study 2025</li><li>Melbourne Business School <a href="https://mbs.edu/faculty-and-research/trust-and-ai/key-findings-on-public-attitudes-towards-ai">report</a> - Key findings on public attitudes towards AI</li><li>The Washington Post <a href="https://www.washingtonpost.com/technology/2025/10/07/ai-public-opinion-mistrust/">article</a> - Americans have become more pessimistic about AI. Why?</li><li>The New York Times <a href="https://www.nytimes.com/2025/10/20/technology/ai-data-center-backlash-mexico-ireland.html">article</a> - From Mexico to Ireland, Fury Mounts Over a Global A.I. Frenzy</li><li>The Guardian <a href="https://www.theguardian.com/lifeandstyle/2025/nov/10/chatgpt-dating-ick">article</a> - ‘It shows such a laziness’: why I refuse to date someone who uses ChatGPT</li><li>The Register <a href="https://www.theregister.com/2025/10/15/openais_chatgpt_popular_few_pay/">article</a> - OpenAI's ChatGPT is so popular that almost no one will pay for it</li></ul><p><strong>AI and Claims of Curing Cancer</strong></p><ul><li>Rachel Thomas, PhD <a href="https://rachel.fast.ai/posts/2024-02-20-ai-medicine/">blogpost</a> - “AI will cure cancer” misunderstands both AI and medicine</li><li>The Atlantic <a href="https://www.theatlantic.com/technology/2025/10/openai-chatgpt-atlas-web-browser/684662/">article</a> - OpenAI Wants to Cure Cancer. So Why Did It Make a Web Browser?</li><li>Independent <a href="https://www.independent.co.uk/tech/chatgpt-ai-cancer-cure-sam-altman-b2832612.html">article</a> - ChatGPT boss predicts when AI could cure cancer</li><li>The Atlantic <a href="https://www.theatlantic.com/technology/archive/2025/04/how-ai-will-actually-contribute-cancer-cure/682607/">article</a> - AI Executives Promise Cancer Cures. Here’s the Reality</li></ul><p><strong>AI Porn and the Addiction Economy</strong></p><ul><li>Forbes <a href="https://www.forbes.com/sites/tylerroush/2025/10/14/chatgpt-will-allow-erotica-after-easing-mental-health-restrictions-sam-altman-says/">article</a> - ChatGPT Will Allow ‘Erotica’ After Easing Mental Health Restrictions, Sam Altman Says</li><li>The Addiction Economy <a href="https://www.theaddictioneconomy.com/">website</a></li><li>PPC <a href="https://searchengineland.com/openai-staffing-chatgpt-ad-platform-462554">article</a> - OpenAI is staffing up to turn ChatGPT into an ad platform</li><li>Tom Nicholas <a href="https://www.youtube.com/watch?v=XuTQbOo3Y30">video</a> - Vape-o-nomics: Why Everything is Addictive Now</li></ul><p><strong>AI Bubble</strong></p><ul><li>Fast Company <a href="https://www.fastcompany.com/91435192/chatgpt-llm-openai-jobs-amazon">article</a> - AI isn’t replacing jobs. 
AI spending is</li><li>Pivot to AI <a href="https://pivot-to-ai.com/2025/09/28/the-finance-press-finally-starts-talking-about-the-ai-bubble/">article</a> - The finance press finally starts talking about the ‘AI bubble’</li><li>Fortune <a href="https://fortune.com/2025/10/07/data-centers-gdp-growth-zero-first-half-2025-jason-furman-harvard-economist/">article</a> - Without data centers, GDP growth was 0.1% in the first half of 2025, Harvard economist says</li><li>The Atlantic <a href="https://www.theatlantic.com/economy/archive/2025/09/ai-bubble-us-economy/684128/">article</a> - Just How Bad Would an AI Bubble Be?</li><li>The New York Times <a href="https://www.nytimes.com/2025/11/08/business/dealbook/debt-has-entered-the-ai-boom.html">article</a> - Debt Has Entered the A.I. Boom</li><li>Will Lockett's Newsletter <a href="https://www.planetearthandbeyond.co/p/ai-pullback-has-officially-started">article</a> - AI Pullback Has Officially Started</li><li>Reuters <a href="https://www.reuters.com/sustainability/sustainable-finance-reporting/michael-burry-big-short-fame-deregisters-scion-asset-management-2025-11-13/">article</a> - Michael Burry of 'Big Short' fame is closing his hedge fund</li><li>Business Insider <a href="https://www.businessinsider.com/jim-chanos-shorted-enron-warning-ai-boom-2025-8">article</a> - The guy who shorted Enron has a warning about the AI boom</li></ul><p><strong>Datacenters</strong></p><ul><li>Bloomberg <a href="https://www.bloomberg.com/graphics/2024-ai-power-home-appliances/">article</a> - AI Needs So Much Power, It’s Making Yours Worse</li><li>Data Center Watch <a href="https://www.datacenterwatch.org/report">report</a> - $64 billion of data center projects have been blocked or delayed amid local opposition</li><li>More Perfect Union <a href="https://m.youtube.com/watch?v=YN6BEUA4jNU&amp;pp=0gcJCR4Bo7VqN5tD">video</a> - We Found the Hidden Cost of Data Centers. It's in Your Electric Bill</li><li>DataCenter Knowledge <a href="https://www.datacenterknowledge.com/data-center-construction/why-communities-are-protesting-data-centers-and-how-the-industry-can-respond">article</a> - Why Communities Are Protesting Data Centers – And How the Industry Can Respond</li></ul><p><strong>Fighting Back</strong></p><ul><li>Knight First Amendment Institute <a href="https://knightcolumbia.org/content/ai-as-normal-technology">essay</a> - AI as Normal Technology</li><li>Pranksters vs. Autocrats <a href="https://www.jstor.org/stable/10.7591/j.ctv310vjt0.6?seq=1">chapter</a> - Laughtivism: The Secret Ingredient</li><li>SPSP <a href="https://spsp.org/news-center/character-context-blog/playing-power-humor-everyday-resistance">article</a> - Playing with Power: Humor as Everyday Resistance</li><li>Blood in the Machine <a href="https://www.bloodinthemachine.com/p/the-luddite-renaissance-is-in-full">article</a> - The Luddite Renaissance is in full swing</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>OpenAI is pivoting to porn while public sentiment turns decisively against AI. Pew Research shows Americans who are concerned about AI now outnumber those who are excited by a 2:1 margin. We trace how we got here: broken promises of cancer cures replaced by addiction mechanics and expensive APIs. Meanwhile, data centers are hiding a near-recession, straining power grids, and literally breaking your household appliances. Drawing parallels to the 1970s AI winter, we argue the bubble is shaking and needs to pop now, before it becomes another 2008. The good news? Grassroots resistance works. Protests have already blocked $64 billion in data center projects.</p><p>NOTE: The project that we cite for the $64 billion blockage is actually a pro-data-center campaign. The numbers still seem OK, but it's worth being aware of.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Introduction</li>
<li>(06:45) - The Addiction Business Model</li>
<li>(10:15) - Public Sentiment Data</li>
<li>(22:45) - Data Centers and Infrastructure Problems</li>
<li>(36:30) - The Bubble Discussion</li>
<li>(44:36) - Closing Thoughts &amp; Outro</li>
</ul><br><strong>Links<br>Public Sentiment on AI</strong><ul><li>Pew Research <a href="https://www.pewresearch.org/global/2025/10/15/how-people-around-the-world-view-ai/">report</a> - How People Around the World View AI</li><li>Pew Research <a href="https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/">report</a> - How the U.S. Public and AI Experts View Artificial Intelligence</li><li>Pew Research <a href="https://www.pewresearch.org/science/2025/09/17/how-americans-view-ai-and-its-impact-on-people-and-society/">report</a> - How Americans View AI and Its Impact on People and Society</li><li>University of Toronto <a href="https://srinstitute.utoronto.ca/public-opinion-ai">report</a> - Trust, attitudes and use of artificial intelligence: A global study 2025</li><li>Melbourne Business School <a href="https://mbs.edu/faculty-and-research/trust-and-ai/key-findings-on-public-attitudes-towards-ai">report</a> - Key findings on public attitudes towards AI</li><li>The Washington Post <a href="https://www.washingtonpost.com/technology/2025/10/07/ai-public-opinion-mistrust/">article</a> - Americans have become more pessimistic about AI. Why?</li><li>The New York Times <a href="https://www.nytimes.com/2025/10/20/technology/ai-data-center-backlash-mexico-ireland.html">article</a> - From Mexico to Ireland, Fury Mounts Over a Global A.I. Frenzy</li><li>The Guardian <a href="https://www.theguardian.com/lifeandstyle/2025/nov/10/chatgpt-dating-ick">article</a> - ‘It shows such a laziness’: why I refuse to date someone who uses ChatGPT</li><li>The Register <a href="https://www.theregister.com/2025/10/15/openais_chatgpt_popular_few_pay/">article</a> - OpenAI's ChatGPT is so popular that almost no one will pay for it</li></ul><p><strong>AI and Claims of Curing Cancer</strong></p><ul><li>Rachel Thomas, PhD <a href="https://rachel.fast.ai/posts/2024-02-20-ai-medicine/">blogpost</a> - “AI will cure cancer” misunderstands both AI and medicine</li><li>The Atlantic <a href="https://www.theatlantic.com/technology/2025/10/openai-chatgpt-atlas-web-browser/684662/">article</a> - OpenAI Wants to Cure Cancer. So Why Did It Make a Web Browser?</li><li>Independent <a href="https://www.independent.co.uk/tech/chatgpt-ai-cancer-cure-sam-altman-b2832612.html">article</a> - ChatGPT boss predicts when AI could cure cancer</li><li>The Atlantic <a href="https://www.theatlantic.com/technology/archive/2025/04/how-ai-will-actually-contribute-cancer-cure/682607/">article</a> - AI Executives Promise Cancer Cures. Here’s the Reality</li></ul><p><strong>AI Porn and the Addiction Economy</strong></p><ul><li>Forbes <a href="https://www.forbes.com/sites/tylerroush/2025/10/14/chatgpt-will-allow-erotica-after-easing-mental-health-restrictions-sam-altman-says/">article</a> - ChatGPT Will Allow ‘Erotica’ After Easing Mental Health Restrictions, Sam Altman Says</li><li>The Addiction Economy <a href="https://www.theaddictioneconomy.com/">website</a></li><li>PPC <a href="https://searchengineland.com/openai-staffing-chatgpt-ad-platform-462554">article</a> - OpenAI is staffing up to turn ChatGPT into an ad platform</li><li>Tom Nicholas <a href="https://www.youtube.com/watch?v=XuTQbOo3Y30">video</a> - Vape-o-nomics: Why Everything is Addictive Now</li></ul><p><strong>AI Bubble</strong></p><ul><li>Fast Company <a href="https://www.fastcompany.com/91435192/chatgpt-llm-openai-jobs-amazon">article</a> - AI isn’t replacing jobs. 
AI spending is</li><li>Pivot to AI <a href="https://pivot-to-ai.com/2025/09/28/the-finance-press-finally-starts-talking-about-the-ai-bubble/">article</a> - The finance press finally starts talking about the ‘AI bubble’</li><li>Fortune <a href="https://fortune.com/2025/10/07/data-centers-gdp-growth-zero-first-half-2025-jason-furman-harvard-economist/">article</a> - Without data centers, GDP growth was 0.1% in the first half of 2025, Harvard economist says</li><li>The Atlantic <a href="https://www.theatlantic.com/economy/archive/2025/09/ai-bubble-us-economy/684128/">article</a> - Just How Bad Would an AI Bubble Be?</li><li>The New York Times <a href="https://www.nytimes.com/2025/11/08/business/dealbook/debt-has-entered-the-ai-boom.html">article</a> - Debt Has Entered the A.I. Boom</li><li>Will Lockett's Newsletter <a href="https://www.planetearthandbeyond.co/p/ai-pullback-has-officially-started">article</a> - AI Pullback Has Officially Started</li><li>Reuters <a href="https://www.reuters.com/sustainability/sustainable-finance-reporting/michael-burry-big-short-fame-deregisters-scion-asset-management-2025-11-13/">article</a> - Michael Burry of 'Big Short' fame is closing his hedge fund</li><li>Business Insider <a href="https://www.businessinsider.com/jim-chanos-shorted-enron-warning-ai-boom-2025-8">article</a> - The guy who shorted Enron has a warning about the AI boom</li></ul><p><strong>Datacenters</strong></p><ul><li>Bloomberg <a href="https://www.bloomberg.com/graphics/2024-ai-power-home-appliances/">article</a> - AI Needs So Much Power, It’s Making Yours Worse</li><li>Data Center Watch <a href="https://www.datacenterwatch.org/report">report</a> - $64 billion of data center projects have been blocked or delayed amid local opposition</li><li>More Perfect Union <a href="https://m.youtube.com/watch?v=YN6BEUA4jNU&amp;pp=0gcJCR4Bo7VqN5tD">video</a> - We Found the Hidden Cost of Data Centers. It's in Your Electric Bill</li><li>DataCenter Knowledge <a href="https://www.datacenterknowledge.com/data-center-construction/why-communities-are-protesting-data-centers-and-how-the-industry-can-respond">article</a> - Why Communities Are Protesting Data Centers – And How the Industry Can Respond</li></ul><p><strong>Fighting Back</strong></p><ul><li>Knight First Amendment Institute <a href="https://knightcolumbia.org/content/ai-as-normal-technology">essay</a> - AI as Normal Technology</li><li>Pranksters vs. Autocrats <a href="https://www.jstor.org/stable/10.7591/j.ctv310vjt0.6?seq=1">chapter</a> - Laughtivism: The Secret Ingredient</li><li>SPSP <a href="https://spsp.org/news-center/character-context-blog/playing-power-humor-everyday-resistance">article</a> - Playing with Power: Humor as Everyday Resistance</li><li>Blood in the Machine <a href="https://www.bloodinthemachine.com/p/the-luddite-renaissance-is-in-full">article</a> - The Luddite Renaissance is in full swing</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 15 Dec 2025 09:00:00 -0700</pubDate>
      <author>Jacob Haimes and Igor Krawczuk</author>
      <enclosure url="https://op3.dev/e/media.transistor.fm/e54d3007/0cbe188c.mp3" length="43852589" type="audio/mpeg"/>
      <itunes:author>Jacob Haimes and Igor Krawczuk</itunes:author>
      <itunes:duration>2736</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>OpenAI is pivoting to porn while public sentiment turns decisively against AI. Pew Research shows Americans are now more concerned than excited by a 2:1 margin. We trace how we got here: broken promises of cancer cures replaced by addiction mechanics and expensive APIs. Meanwhile, data centers are hiding a near-recession, straining power grids, and literally breaking your household appliances. Drawing parallels to the 1970s AI winter, we argue the bubble is shaking and needs to pop now, before it becomes another 2008. The good news? Grassroots resistance works. Protests have already blocked $64 billion in data center projects.</p><p>NOTE: The project that we cite for the $64 billion blockage is actually a pro-data-center campaign. The numbers still seem ok, but it's worth being aware of.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - - Introduction</li>
<li>(06:45) - - The Addiction Business Model</li>
<li>(10:15) - - Public Sentiment Data</li>
<li>(22:45) - - Data Centers and Infrastructure Problems</li>
<li>(36:30) - - The Bubble Discussion</li>
<li>(44:36) - - Closing Thoughts &amp; Outro</li>
</ul><br><strong>Links<br>Public Sentiment on AI</strong><ul><li>Pew Research <a href="https://www.pewresearch.org/global/2025/10/15/how-people-around-the-world-view-ai/">report</a> - How People Around the World View AI</li><li>Pew Research <a href="https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/">report</a> - How the U.S. Public and AI Experts View Artificial Intelligence</li><li>Pew Research <a href="https://www.pewresearch.org/science/2025/09/17/how-americans-view-ai-and-its-impact-on-people-and-society/">report</a> - How Americans View AI and Its Impact on People and Society</li><li>University of Toronto <a href="https://srinstitute.utoronto.ca/public-opinion-ai">report</a> - Trust, attitudes and use of artificial intelligence: A global study 2025</li><li>Melbourne Business School <a href="https://mbs.edu/faculty-and-research/trust-and-ai/key-findings-on-public-attitudes-towards-ai">report</a> - Key findings on public attitudes towards AI</li><li>The Washington Post <a href="https://www.washingtonpost.com/technology/2025/10/07/ai-public-opinion-mistrust/">article</a> - Americans have become more pessimistic about AI. Why?</li><li>The New York Times <a href="https://www.nytimes.com/2025/10/20/technology/ai-data-center-backlash-mexico-ireland.html">article</a> - From Mexico to Ireland, Fury Mounts Over a Global A.I. Frenzy</li><li>The Guardian <a href="https://www.theguardian.com/lifeandstyle/2025/nov/10/chatgpt-dating-ick">article</a> - ‘It shows such a laziness’: why I refuse to date someone who uses ChatGPT</li><li>The Register <a href="https://www.theregister.com/2025/10/15/openais_chatgpt_popular_few_pay/">article</a> - OpenAI's ChatGPT is so popular that almost no one will pay for it</li></ul><p><strong>AI and Claims of Curing Cancer</strong></p><ul><li>Rachel Thomas, PhD <a href="https://rachel.fast.ai/posts/2024-02-20-ai-medicine/">blogpost</a> - “AI will cure cancer” misunderstands both AI and medicine</li><li>The Atlantic <a href="https://www.theatlantic.com/technology/2025/10/openai-chatgpt-atlas-web-browser/684662/">article</a> - OpenAI Wants to Cure Cancer. So Why Did It Make a Web Browser?</li><li>Independent <a href="https://www.independent.co.uk/tech/chatgpt-ai-cancer-cure-sam-altman-b2832612.html">article</a> - ChatGPT boss predicts when AI could cure cancer</li><li>The Atlantic <a href="https://www.theatlantic.com/technology/archive/2025/04/how-ai-will-actually-contribute-cancer-cure/682607/">article</a> - AI Executives Promise Cancer Cures. Here’s the Reality</li></ul><p><strong>AI Porn and the Addiction Economy</strong></p><ul><li>Forbes <a href="https://www.forbes.com/sites/tylerroush/2025/10/14/chatgpt-will-allow-erotica-after-easing-mental-health-restrictions-sam-altman-says/">article</a> - ChatGPT Will Allow ‘Erotica’ After Easing Mental Health Restrictions, Sam Altman Says</li><li>The Addiction Economy <a href="https://www.theaddictioneconomy.com/">website</a></li><li>PPC <a href="https://searchengineland.com/openai-staffing-chatgpt-ad-platform-462554">article</a> - OpenAI is staffing up to turn ChatGPT into an ad platform</li><li>Tom Nicholas <a href="https://www.youtube.com/watch?v=XuTQbOo3Y30">video</a> - Vape-o-nomics: Why Everything is Addictive Now</li></ul><p><strong>AI Bubble</strong></p><ul><li>Fast Company <a href="https://www.fastcompany.com/91435192/chatgpt-llm-openai-jobs-amazon">article</a> - AI isn’t replacing jobs. 
AI spending is</li><li>Pivot to AI <a href="https://pivot-to-ai.com/2025/09/28/the-finance-press-finally-starts-talking-about-the-ai-bubble/">article</a> - The finance press finally starts talking about the ‘AI bubble’</li><li>Fortune <a href="https://fortune.com/2025/10/07/data-centers-gdp-growth-zero-first-half-2025-jason-furman-harvard-economist/">article</a> - Without data centers, GDP growth was 0.1% in the first half of 2025, Harvard economist says</li><li>The Atlantic <a href="https://www.theatlantic.com/economy/archive/2025/09/ai-bubble-us-economy/684128/">article</a> - Just How Bad Would an AI Bubble Be?</li><li>The New York Times <a href="https://www.nytimes.com/2025/11/08/business/dealbook/debt-has-entered-the-ai-boom.html">article</a> - Debt Has Entered the A.I. Boom</li><li>Will Lockett's Newsletter <a href="https://www.planetearthandbeyond.co/p/ai-pullback-has-officially-started">article</a> - AI Pullback Has Officially Started</li><li>Reuters <a href="https://www.reuters.com/sustainability/sustainable-finance-reporting/michael-burry-big-short-fame-deregisters-scion-asset-management-2025-11-13/">article</a> - Michael Burry of 'Big Short' fame is closing his hedge fund</li><li>Business Insider <a href="https://www.businessinsider.com/jim-chanos-shorted-enron-warning-ai-boom-2025-8">article</a> - The guy who shorted Enron has a warning about the AI boom</li></ul><p><strong>Datacenters</strong></p><ul><li>Bloomberg <a href="https://www.bloomberg.com/graphics/2024-ai-power-home-appliances/">article</a> - AI Needs So Much Power, It’s Making Yours Worse</li><li>Data Center Watch <a href="https://www.datacenterwatch.org/report">report</a> - $64 billion of data center projects have been blocked or delayed amid local opposition</li><li>More Perfect Union <a href="https://m.youtube.com/watch?v=YN6BEUA4jNU&amp;pp=0gcJCR4Bo7VqN5tD">video</a> - We Found the Hidden Cost of Data Centers. It's in Your Electric Bill</li><li>DataCenter Knowledge <a href="https://www.datacenterknowledge.com/data-center-construction/why-communities-are-protesting-data-centers-and-how-the-industry-can-respond">article</a> - Why Communities Are Protesting Data Centers – And How the Industry Can Respond</li></ul><p><strong>Fighting Back</strong></p><ul><li>Knight First Amendment Institute <a href="https://knightcolumbia.org/content/ai-as-normal-technology">essay</a> - AI as Normal Technology</li><li>Pranksters vs. Autocrats <a href="https://www.jstor.org/stable/10.7591/j.ctv310vjt0.6?seq=1">chapter</a> - Laughtivism: The Secret Ingredient</li><li>SPSP <a href="https://spsp.org/news-center/character-context-blog/playing-power-humor-everyday-resistance">article</a> - Playing with Power: Humor as Everyday Resistance</li><li>Blood in the Machine <a href="https://www.bloodinthemachine.com/p/the-luddite-renaissance-is-in-full">article</a> - The Luddite Renaissance is in full swing</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI, tech bros, AI porn, AI waifus, public sentiment, artificial intelligence, AI bubble, bubble, AI hate, AI vibe shift, vibe shift</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://krawczuk.eu" img="https://img.transistorcdn.com/28Y0jOuxZfj9Mxy7Ap0s5-4FRY6iBNdW0gmiQyl5SAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82M2I1/YzVjMWFmNTlhYzA2/YTgyZGI0YTM3MWY5/Zjc5Mi5qcGVn.jpg">Igor Krawczuk</podcast:person>
      <podcast:person role="Host" href="https://jacob-haimes.github.io" img="https://img.transistorcdn.com/aB1ho3T9pmqLJiyQvdThtfo_OIB680463w__i8MoaRA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMDJl/MzFhZDZkYzc4Yjc0/NTQwNzA4OGRlMGU3/NzU3Ni5wbmc.jpg">Jacob Haimes</podcast:person>
      <podcast:person role="Editor" href="https://muckraikers.transistor.fm/people/chase-precopia">Chase Precopia</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/e54d3007/transcript.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/e54d3007/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/e54d3007/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>AI Safety for Who?</title>
      <itunes:episode>17</itunes:episode>
      <podcast:episode>17</podcast:episode>
      <itunes:title>AI Safety for Who?</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ab39cd39-f8ec-4cd2-b554-e1dd10bc12d7</guid>
      <link>https://kairos.fm/muckraikers/e017</link>
      <description>
        <![CDATA[<p>Jacob and Igor argue that AI safety is hurting users, not helping them. The techniques used to make chatbots "safe" and "aligned," such as instruction tuning and RLHF, anthropomorphize AI systems such that they take advantage of our instincts as social beings. At the same time, Big Tech companies push these systems for "wellness" while dodging healthcare liability, causing real harms <em>today</em>. We discuss what actual safety would look like, drawing on self-driving car regulations.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Introduction &amp; AI Investment Insanity</li>
<li>(01:43) - The Problem with AI Safety</li>
<li>(08:16) - Anthropomorphizing AI &amp; Its Dangers</li>
<li>(26:55) - Mental Health, Wellness, and AI</li>
<li>(39:15) - Censorship, Bias, and Dual Use</li>
<li>(44:42) - Solutions, Community Action &amp; Final Thoughts</li>
</ul><br><strong>Links</strong><p><strong>AI Ethics &amp; Philosophy</strong></p><ul><li>Foreign affairs <a href="https://archive.ph/lbek5">article</a> - The Cost of the AGI Delusion</li><li>Nature <a href="https://www.nature.com/articles/s42256-019-0114-4">article</a> - Principles alone cannot guarantee ethical AI</li><li>Xeiaso <a href="https://xeiaso.net/blog/2025/who-assistant-serve/">blog post</a> - Who Do Assistants Serve?</li><li>Argmin <a href="https://www.argmin.net/p/the-banal-evil-of-ai-safety">article</a> - The Banal Evil of AI Safety</li><li>AI Panic News <a href="https://www.aipanic.news/p/the-rationality-trap">article</a> - The Rationality Trap</li></ul><p><strong>AI Model Bias, Failures, and Impacts</strong></p><ul><li>BBC <a href="https://www.bbc.com/news/technology-33347866">news article</a> - AI Image Generation Issues</li><li>The New York Times <a href="https://www.nytimes.com/2024/02/22/technology/google-gemini-german-uniforms.html">article</a> - Google Gemini German Uniforms Controversy</li><li>The Verge <a href="https://www.theverge.com/2024/2/23/24081309/google-gemini-embarrassing-ai-pictures-diverse-nazi">article</a> - Google Gemini's Embarrassing AI Pictures</li><li>NPR <a href="https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content">article</a> - Grok, Elon Musk, and Antisemitic/Racist Content</li><li>AccelerAId <a href="https://acceleraid.ai/en/about/blog/how-ai-nudges-are-transforming-up-and-cross-selling-2/">blog post</a> - How AI Nudges are Transforming Up-and Cross-Selling</li><li>AI Took My Job <a href="https://www.aitookmyjob.io/">website</a></li></ul><p><strong>AI Mental Health &amp; Safety Concerns</strong></p><ul><li>Euronews <a href="https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-">article</a> - AI Chatbot Tragedy</li><li>Popular Mechanics <a href="https://www.popularmechanics.com/technology/robots/a65781776/openai-psychosis/">article</a> - OpenAI and Psychosis</li><li>Psychology Today <a href="https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis">article</a> - The Emerging Problem of AI Psychosis</li><li>Rolling Stone <a href="https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/">article</a> - AI Spiritual Delusions Destroying Human Relationships</li><li>The New York Times <a href="https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html">article</a> - AI Chatbots and Delusions</li></ul><p><strong>Guidelines, Governance, and Censorship</strong></p><ul><li><a href="https://arxiv.org/abs/2505.12625">Preprint</a> - R1dacted: Investigating Local Censorship in DeepSeek's R1 Language Model</li><li>Minds &amp; Machines <a href="https://link.springer.com/article/10.1007/S11023-020-09517-8">article</a> - The Ethics of AI Ethics: An Evaluation of Guidelines</li><li>SSRN <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5283275">paper</a> - Instrument Choice in AI Governance</li><li>Anthropic <a href="https://www.anthropic.com/news/claude-gov-models-for-u-s-national-security-customers">announcement</a> - Claude Gov Models for U.S. 
National Security Customers</li><li>Anthropic <a href="https://www.anthropic.com/news/claudes-constitution">documentation</a> - Claude's Constitution</li><li>Reuters <a href="https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/">investigation</a> - Meta AI Chatbot Guidelines</li><li>Swiss Federal Council <a href="https://www.fedlex.admin.ch/de/consultation-procedures?news_period=last_day&amp;news_pageNb=1&amp;news_order=desc&amp;news_itemsPerPage=10">consultation</a> - Swiss AI Consultation Procedures</li><li>Grok Prompts Github <a href="https://x.com/xai/status/1923183622422458851">Repo</a></li><li>Simon Willison <a href="https://simonwillison.net/2025/Jul/12/grok-4-heavy/">blog post</a> - Grok 4 Heavy</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Jacob and Igor argue that AI safety is hurting users, not helping them. The techniques used to make chatbots "safe" and "aligned," such as instruction tuning and RLHF, anthropomorphize AI systems such that they take advantage of our instincts as social beings. At the same time, Big Tech companies push these systems for "wellness" while dodging healthcare liability, causing real harms <em>today</em>. We discuss what actual safety would look like, drawing on self-driving car regulations.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Introduction &amp; AI Investment Insanity</li>
<li>(01:43) - The Problem with AI Safety</li>
<li>(08:16) - Anthropomorphizing AI &amp; Its Dangers</li>
<li>(26:55) - Mental Health, Wellness, and AI</li>
<li>(39:15) - Censorship, Bias, and Dual Use</li>
<li>(44:42) - Solutions, Community Action &amp; Final Thoughts</li>
</ul><br><strong>Links</strong><p><strong>AI Ethics &amp; Philosophy</strong></p><ul><li>Foreign affairs <a href="https://archive.ph/lbek5">article</a> - The Cost of the AGI Delusion</li><li>Nature <a href="https://www.nature.com/articles/s42256-019-0114-4">article</a> - Principles alone cannot guarantee ethical AI</li><li>Xeiaso <a href="https://xeiaso.net/blog/2025/who-assistant-serve/">blog post</a> - Who Do Assistants Serve?</li><li>Argmin <a href="https://www.argmin.net/p/the-banal-evil-of-ai-safety">article</a> - The Banal Evil of AI Safety</li><li>AI Panic News <a href="https://www.aipanic.news/p/the-rationality-trap">article</a> - The Rationality Trap</li></ul><p><strong>AI Model Bias, Failures, and Impacts</strong></p><ul><li>BBC <a href="https://www.bbc.com/news/technology-33347866">news article</a> - AI Image Generation Issues</li><li>The New York Times <a href="https://www.nytimes.com/2024/02/22/technology/google-gemini-german-uniforms.html">article</a> - Google Gemini German Uniforms Controversy</li><li>The Verge <a href="https://www.theverge.com/2024/2/23/24081309/google-gemini-embarrassing-ai-pictures-diverse-nazi">article</a> - Google Gemini's Embarrassing AI Pictures</li><li>NPR <a href="https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content">article</a> - Grok, Elon Musk, and Antisemitic/Racist Content</li><li>AccelerAId <a href="https://acceleraid.ai/en/about/blog/how-ai-nudges-are-transforming-up-and-cross-selling-2/">blog post</a> - How AI Nudges are Transforming Up-and Cross-Selling</li><li>AI Took My Job <a href="https://www.aitookmyjob.io/">website</a></li></ul><p><strong>AI Mental Health &amp; Safety Concerns</strong></p><ul><li>Euronews <a href="https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-">article</a> - AI Chatbot Tragedy</li><li>Popular Mechanics <a href="https://www.popularmechanics.com/technology/robots/a65781776/openai-psychosis/">article</a> - OpenAI and Psychosis</li><li>Psychology Today <a href="https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis">article</a> - The Emerging Problem of AI Psychosis</li><li>Rolling Stone <a href="https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/">article</a> - AI Spiritual Delusions Destroying Human Relationships</li><li>The New York Times <a href="https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html">article</a> - AI Chatbots and Delusions</li></ul><p><strong>Guidelines, Governance, and Censorship</strong></p><ul><li><a href="https://arxiv.org/abs/2505.12625">Preprint</a> - R1dacted: Investigating Local Censorship in DeepSeek's R1 Language Model</li><li>Minds &amp; Machines <a href="https://link.springer.com/article/10.1007/S11023-020-09517-8">article</a> - The Ethics of AI Ethics: An Evaluation of Guidelines</li><li>SSRN <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5283275">paper</a> - Instrument Choice in AI Governance</li><li>Anthropic <a href="https://www.anthropic.com/news/claude-gov-models-for-u-s-national-security-customers">announcement</a> - Claude Gov Models for U.S. 
National Security Customers</li><li>Anthropic <a href="https://www.anthropic.com/news/claudes-constitution">documentation</a> - Claude's Constitution</li><li>Reuters <a href="https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/">investigation</a> - Meta AI Chatbot Guidelines</li><li>Swiss Federal Council <a href="https://www.fedlex.admin.ch/de/consultation-procedures?news_period=last_day&amp;news_pageNb=1&amp;news_order=desc&amp;news_itemsPerPage=10">consultation</a> - Swiss AI Consultation Procedures</li><li>Grok Prompts Github <a href="https://x.com/xai/status/1923183622422458851">Repo</a></li><li>Simon Willison <a href="https://simonwillison.net/2025/Jul/12/grok-4-heavy/">blog post</a> - Grok 4 Heavy</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 13 Oct 2025 09:00:00 -0600</pubDate>
      <author>Jacob Haimes and Igor Krawczuk</author>
      <enclosure url="https://op3.dev/e/media.transistor.fm/222dc32a/b1cda056.mp3" length="47809922" type="audio/mpeg"/>
      <itunes:author>Jacob Haimes and Igor Krawczuk</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/0G9MPxyZYwQ7wAZxEIqy68ixWLmn67P4dAJjgxnoe1k/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNTVi/MTJmZDNmY2FiMGM0/MTE0MmY0MjU0ZDFk/OTAxOS5qcGc.jpg"/>
      <itunes:duration>2983</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Jacob and Igor argue that AI safety is hurting users, not helping them. The techniques used to make chatbots "safe" and "aligned," such as instruction tuning and RLHF, anthropomorphize AI systems such that they take advantage of our instincts as social beings. At the same time, Big Tech companies push these systems for "wellness" while dodging healthcare liability, causing real harms <em>today</em>. We discuss what actual safety would look like, drawing on self-driving car regulations.</p><p><strong>Chapters</strong><br></p><ul><li>(00:00) - Introduction &amp; AI Investment Insanity</li>
<li>(01:43) - The Problem with AI Safety</li>
<li>(08:16) - Anthropomorphizing AI &amp; Its Dangers</li>
<li>(26:55) - Mental Health, Wellness, and AI</li>
<li>(39:15) - Censorship, Bias, and Dual Use</li>
<li>(44:42) - Solutions, Community Action &amp; Final Thoughts</li>
</ul><br><strong>Links</strong><p><strong>AI Ethics &amp; Philosophy</strong></p><ul><li>Foreign affairs <a href="https://archive.ph/lbek5">article</a> - The Cost of the AGI Delusion</li><li>Nature <a href="https://www.nature.com/articles/s42256-019-0114-4">article</a> - Principles alone cannot guarantee ethical AI</li><li>Xeiaso <a href="https://xeiaso.net/blog/2025/who-assistant-serve/">blog post</a> - Who Do Assistants Serve?</li><li>Argmin <a href="https://www.argmin.net/p/the-banal-evil-of-ai-safety">article</a> - The Banal Evil of AI Safety</li><li>AI Panic News <a href="https://www.aipanic.news/p/the-rationality-trap">article</a> - The Rationality Trap</li></ul><p><strong>AI Model Bias, Failures, and Impacts</strong></p><ul><li>BBC <a href="https://www.bbc.com/news/technology-33347866">news article</a> - AI Image Generation Issues</li><li>The New York Times <a href="https://www.nytimes.com/2024/02/22/technology/google-gemini-german-uniforms.html">article</a> - Google Gemini German Uniforms Controversy</li><li>The Verge <a href="https://www.theverge.com/2024/2/23/24081309/google-gemini-embarrassing-ai-pictures-diverse-nazi">article</a> - Google Gemini's Embarrassing AI Pictures</li><li>NPR <a href="https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content">article</a> - Grok, Elon Musk, and Antisemitic/Racist Content</li><li>AccelerAId <a href="https://acceleraid.ai/en/about/blog/how-ai-nudges-are-transforming-up-and-cross-selling-2/">blog post</a> - How AI Nudges are Transforming Up-and Cross-Selling</li><li>AI Took My Job <a href="https://www.aitookmyjob.io/">website</a></li></ul><p><strong>AI Mental Health &amp; Safety Concerns</strong></p><ul><li>Euronews <a href="https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-">article</a> - AI Chatbot Tragedy</li><li>Popular Mechanics <a href="https://www.popularmechanics.com/technology/robots/a65781776/openai-psychosis/">article</a> - OpenAI and Psychosis</li><li>Psychology Today <a href="https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis">article</a> - The Emerging Problem of AI Psychosis</li><li>Rolling Stone <a href="https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/">article</a> - AI Spiritual Delusions Destroying Human Relationships</li><li>The New York Times <a href="https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html">article</a> - AI Chatbots and Delusions</li></ul><p><strong>Guidelines, Governance, and Censorship</strong></p><ul><li><a href="https://arxiv.org/abs/2505.12625">Preprint</a> - R1dacted: Investigating Local Censorship in DeepSeek's R1 Language Model</li><li>Minds &amp; Machines <a href="https://link.springer.com/article/10.1007/S11023-020-09517-8">article</a> - The Ethics of AI Ethics: An Evaluation of Guidelines</li><li>SSRN <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5283275">paper</a> - Instrument Choice in AI Governance</li><li>Anthropic <a href="https://www.anthropic.com/news/claude-gov-models-for-u-s-national-security-customers">announcement</a> - Claude Gov Models for U.S. 
National Security Customers</li><li>Anthropic <a href="https://www.anthropic.com/news/claudes-constitution">documentation</a> - Claude's Constitution</li><li>Reuters <a href="https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/">investigation</a> - Meta AI Chatbot Guidelines</li><li>Swiss Federal Council <a href="https://www.fedlex.admin.ch/de/consultation-procedures?news_period=last_day&amp;news_pageNb=1&amp;news_order=desc&amp;news_itemsPerPage=10">consultation</a> - Swiss AI Consultation Procedures</li><li>Grok Prompts Github <a href="https://x.com/xai/status/1923183622422458851">Repo</a></li><li>Simon Willison <a href="https://simonwillison.net/2025/Jul/12/grok-4-heavy/">blog post</a> - Grok 4 Heavy</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI safety, AI, artificial intelligence, AI ethics, AI anthropomorphization, AI hype, AI harms, AI alignment, AI psychosis, AI and mental health, AI and censorship, AI control, AGI</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:person role="Host" href="https://jacob-haimes.github.io" img="https://img.transistorcdn.com/aB1ho3T9pmqLJiyQvdThtfo_OIB680463w__i8MoaRA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMDJl/MzFhZDZkYzc4Yjc0/NTQwNzA4OGRlMGU3/NzU3Ni5wbmc.jpg">Jacob Haimes</podcast:person>
      <podcast:person role="Host" href="https://krawczuk.eu" img="https://img.transistorcdn.com/28Y0jOuxZfj9Mxy7Ap0s5-4FRY6iBNdW0gmiQyl5SAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82M2I1/YzVjMWFmNTlhYzA2/YTgyZGI0YTM3MWY5/Zjc5Mi5qcGVn.jpg">Igor Krawczuk</podcast:person>
      <podcast:person role="Editor" href="https://muckraikers.transistor.fm/people/chase-precopia">Chase Precopia</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/222dc32a/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/222dc32a/transcript.json" type="application/json"/>
      <podcast:transcript url="https://share.transistor.fm/s/222dc32a/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/222dc32a/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>The Co-opting of Safety</title>
      <itunes:episode>16</itunes:episode>
      <podcast:episode>16</podcast:episode>
      <itunes:title>The Co-opting of Safety</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d37c72ad-9cac-4c75-befa-8a500137ed97</guid>
      <link>https://kairos.fm/muckraikers/e016</link>
      <description>
        <![CDATA[<p>We dig into how the concept of AI "safety" has been co-opted and weaponized by tech companies. Starting with examples like Mecha-Hitler Grok, we explore how real safety engineering differs from AI "alignment," the myth of the alignment tax, and why this semantic confusion matters for actual safety.</p><p></p><ul><li>(00:00) - Intro</li>
<li>(00:21) - Mecha-Hitler Grok</li>
<li>(10:07) - "Safety"</li>
<li>(19:40) - Under-specification</li>
<li>(53:56) - This time isn't different</li>
<li>(01:01:46) - Alignment Tax myth</li>
<li>(01:17:37) - Actually making AI safer</li>
</ul><strong><br>Links</strong><ul><li>JMLR <a href="https://www.jmlr.org/papers/v23/20-1335.html">article</a> - Underspecification Presents Challenges for Credibility in Modern Machine Learning</li><li>Trail of Bits <a href="https://github.com/trailofbits/publications/blob/master/papers/toward_comprehensive_risk_assessments.pdf">paper</a> - Towards Comprehensive Risk Assessments and Assurance of AI-Based Systems</li><li>SSRN <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4924942">paper</a> - Uniqueness Bias: Why It Matters, How to Curb It</li></ul><p><strong>Additional Referenced Papers</strong></p><ul><li>NeurIPS <a href="https://www.safetywashing.ai/">paper</a> - Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?</li><li>ICML <a href="https://arxiv.org/abs/2312.06942">paper</a> - AI Control: Improving Safety Despite Intentional Subversion</li><li>ICML <a href="https://darkbench.ai/">paper</a> - DarkBench: Benchmarking Dark Patterns in Large Language Models</li><li>OSF <a href="https://osf.io/preprints/osf/ygx5q_v1">preprint</a> - Current Real-World Use of Large Language Models for Mental Health</li><li>Anthropic <a href="https://arxiv.org/pdf/2204.05862">preprint</a> - Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback</li></ul><p><strong>Inciting Examples</strong></p><ul><li>ars Technica <a href="https://arstechnica.com/tech-policy/2025/08/us-government-agency-drops-grok-after-mechahitler-backlash-report-says/">article</a> - US government agency drops Grok after MechaHitler backlash, report says</li><li>The Guardian <a href="https://www.theguardian.com/technology/2025/may/14/elon-musk-grok-white-genocide">article</a> - Musk’s AI Grok bot rants about ‘white genocide’ in South Africa in unrelated chats</li><li>BBC <a href="https://www.bbc.com/news/articles/cn4jnwdvg9qo">article</a> - Update that made ChatGPT 'dangerously' sycophantic pulled</li></ul><p><strong>Other Sources</strong></p><ul><li>London Daily <a href="https://londondaily.com/uk-ai-safety-institute-rebrands-as-ai-security-institute-to-focus-on-crime-and-national-security">article</a> - UK AI Safety Institute Rebrands as AI Security Institute to Focus on Crime and National Security</li><li>Vice <a href="https://www.vice.com/en/article/prominent-ai-philosopher-and-father-of-longtermism-sent-very-racist-email-to-a-90s-philosophy-listserv/">article</a> - Prominent AI Philosopher and ‘Father’ of Longtermism Sent Very Racist Email to a 90s Philosophy Listserv</li><li>LessWrong <a href="https://www.lesswrong.com/posts/jMzBhCRrr7otmqcvK/notkilleveryoneism-sounds-dumb">blogpost</a> - "notkilleveryoneism" sounds dumb (see comments)</li><li>EA Forum <a href="https://forum.effectivealtruism.org/posts/XdhwXppfqrpPL2YDX/an-overview-of-the-ai-safety-funding-situation">blogpost</a> - An Overview of the AI Safety Funding Situation</li><li><a href="https://link.springer.com/book/10.1007/978-3-319-24301-6">Book</a> by Dmitry Chernov and Didier Sornette - Man-made Catastrophes and Risk Information Concealment</li><li>Euronews <a href="https://www.euronews.com/next/2025/08/05/openai-adds-mental-health-safeguards-to-chatgpt-saying-chatbot-has-fed-into-users-delusion">article</a> - OpenAI adds mental health safeguards to ChatGPT, saying chatbot has fed into users’ ‘delusions’</li><li>Pleias <a href="https://pleias.fr/">website</a></li><li>Wikipedia <a href="https://en.wikipedia.org/wiki/Jaywalking">page</a> on Jaywalking</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>We dig into how the concept of AI "safety" has been co-opted and weaponized by tech companies. Starting with examples like Mecha-Hitler Grok, we explore how real safety engineering differs from AI "alignment," the myth of the alignment tax, and why this semantic confusion matters for actual safety.</p><p></p><ul><li>(00:00) - Intro</li>
<li>(00:21) - Mecha-Hitler Grok</li>
<li>(10:07) - "Safety"</li>
<li>(19:40) - Under-specification</li>
<li>(53:56) - This time isn't different</li>
<li>(01:01:46) - Alignment Tax myth</li>
<li>(01:17:37) - Actually making AI safer</li>
</ul><strong><br>Links</strong><ul><li>JMLR <a href="https://www.jmlr.org/papers/v23/20-1335.html">article</a> - Underspecification Presents Challenges for Credibility in Modern Machine Learning</li><li>Trail of Bits <a href="https://github.com/trailofbits/publications/blob/master/papers/toward_comprehensive_risk_assessments.pdf">paper</a> - Towards Comprehensive Risk Assessments and Assurance of AI-Based Systems</li><li>SSRN <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4924942">paper</a> - Uniqueness Bias: Why It Matters, How to Curb It</li></ul><p><strong>Additional Referenced Papers</strong></p><ul><li>NeurIPS <a href="https://www.safetywashing.ai/">paper</a> - Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?</li><li>ICML <a href="https://arxiv.org/abs/2312.06942">paper</a> - AI Control: Improving Safety Despite Intentional Subversion</li><li>ICML <a href="https://darkbench.ai/">paper</a> - DarkBench: Benchmarking Dark Patterns in Large Language Models</li><li>OSF <a href="https://osf.io/preprints/osf/ygx5q_v1">preprint</a> - Current Real-World Use of Large Language Models for Mental Health</li><li>Anthropic <a href="https://arxiv.org/pdf/2204.05862">preprint</a> - Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback</li></ul><p><strong>Inciting Examples</strong></p><ul><li>ars Technica <a href="https://arstechnica.com/tech-policy/2025/08/us-government-agency-drops-grok-after-mechahitler-backlash-report-says/">article</a> - US government agency drops Grok after MechaHitler backlash, report says</li><li>The Guardian <a href="https://www.theguardian.com/technology/2025/may/14/elon-musk-grok-white-genocide">article</a> - Musk’s AI Grok bot rants about ‘white genocide’ in South Africa in unrelated chats</li><li>BBC <a href="https://www.bbc.com/news/articles/cn4jnwdvg9qo">article</a> - Update that made ChatGPT 'dangerously' sycophantic pulled</li></ul><p><strong>Other Sources</strong></p><ul><li>London Daily <a href="https://londondaily.com/uk-ai-safety-institute-rebrands-as-ai-security-institute-to-focus-on-crime-and-national-security">article</a> - UK AI Safety Institute Rebrands as AI Security Institute to Focus on Crime and National Security</li><li>Vice <a href="https://www.vice.com/en/article/prominent-ai-philosopher-and-father-of-longtermism-sent-very-racist-email-to-a-90s-philosophy-listserv/">article</a> - Prominent AI Philosopher and ‘Father’ of Longtermism Sent Very Racist Email to a 90s Philosophy Listserv</li><li>LessWrong <a href="https://www.lesswrong.com/posts/jMzBhCRrr7otmqcvK/notkilleveryoneism-sounds-dumb">blogpost</a> - "notkilleveryoneism" sounds dumb (see comments)</li><li>EA Forum <a href="https://forum.effectivealtruism.org/posts/XdhwXppfqrpPL2YDX/an-overview-of-the-ai-safety-funding-situation">blogpost</a> - An Overview of the AI Safety Funding Situation</li><li><a href="https://link.springer.com/book/10.1007/978-3-319-24301-6">Book</a> by Dmitry Chernov and Didier Sornette - Man-made Catastrophes and Risk Information Concealment</li><li>Euronews <a href="https://www.euronews.com/next/2025/08/05/openai-adds-mental-health-safeguards-to-chatgpt-saying-chatbot-has-fed-into-users-delusion">article</a> - OpenAI adds mental health safeguards to ChatGPT, saying chatbot has fed into users’ ‘delusions’</li><li>Pleias <a href="https://pleias.fr/">website</a></li><li>Wikipedia <a href="https://en.wikipedia.org/wiki/Jaywalking">page</a> on Jaywalking</li></ul>]]>
      </content:encoded>
      <pubDate>Thu, 21 Aug 2025 09:00:00 -0600</pubDate>
      <author>Jacob Haimes and Igor Krawczuk</author>
      <enclosure url="https://op3.dev/e/media.transistor.fm/08948e9d/f91e5d6e.mp3" length="81180912" type="audio/mpeg"/>
      <itunes:author>Jacob Haimes and Igor Krawczuk</itunes:author>
      <itunes:duration>5069</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>We dig into how the concept of AI "safety" has been co-opted and weaponized by tech companies. Starting with examples like Mecha-Hitler Grok, we explore how real safety engineering differs from AI "alignment," the myth of the alignment tax, and why this semantic confusion matters for actual safety.</p><p></p><ul><li>(00:00) - Intro</li>
<li>(00:21) - Mecha-Hitler Grok</li>
<li>(10:07) - "Safety"</li>
<li>(19:40) - Under-specification</li>
<li>(53:56) - This time isn't different</li>
<li>(01:01:46) - Alignment Tax myth</li>
<li>(01:17:37) - Actually making AI safer</li>
</ul><strong><br>Links</strong><ul><li>JMLR <a href="https://www.jmlr.org/papers/v23/20-1335.html">article</a> - Underspecification Presents Challenges for Credibility in Modern Machine Learning</li><li>Trail of Bits <a href="https://github.com/trailofbits/publications/blob/master/papers/toward_comprehensive_risk_assessments.pdf">paper</a> - Towards Comprehensive Risk Assessments and Assurance of AI-Based Systems</li><li>SSRN <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4924942">paper</a> - Uniqueness Bias: Why It Matters, How to Curb It</li></ul><p><strong>Additional Referenced Papers</strong></p><ul><li>NeurIPS <a href="https://www.safetywashing.ai/">paper</a> - Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?</li><li>ICML <a href="https://arxiv.org/abs/2312.06942">paper</a> - AI Control: Improving Safety Despite Intentional Subversion</li><li>ICML <a href="https://darkbench.ai/">paper</a> - DarkBench: Benchmarking Dark Patterns in Large Language Models</li><li>OSF <a href="https://osf.io/preprints/osf/ygx5q_v1">preprint</a> - Current Real-World Use of Large Language Models for Mental Health</li><li>Anthropic <a href="https://arxiv.org/pdf/2204.05862">preprint</a> - Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback</li></ul><p><strong>Inciting Examples</strong></p><ul><li>ars Technica <a href="https://arstechnica.com/tech-policy/2025/08/us-government-agency-drops-grok-after-mechahitler-backlash-report-says/">article</a> - US government agency drops Grok after MechaHitler backlash, report says</li><li>The Guardian <a href="https://www.theguardian.com/technology/2025/may/14/elon-musk-grok-white-genocide">article</a> - Musk’s AI Grok bot rants about ‘white genocide’ in South Africa in unrelated chats</li><li>BBC <a href="https://www.bbc.com/news/articles/cn4jnwdvg9qo">article</a> - Update that made ChatGPT 'dangerously' sycophantic pulled</li></ul><p><strong>Other Sources</strong></p><ul><li>London Daily <a href="https://londondaily.com/uk-ai-safety-institute-rebrands-as-ai-security-institute-to-focus-on-crime-and-national-security">article</a> - UK AI Safety Institute Rebrands as AI Security Institute to Focus on Crime and National Security</li><li>Vice <a href="https://www.vice.com/en/article/prominent-ai-philosopher-and-father-of-longtermism-sent-very-racist-email-to-a-90s-philosophy-listserv/">article</a> - Prominent AI Philosopher and ‘Father’ of Longtermism Sent Very Racist Email to a 90s Philosophy Listserv</li><li>LessWrong <a href="https://www.lesswrong.com/posts/jMzBhCRrr7otmqcvK/notkilleveryoneism-sounds-dumb">blogpost</a> - "notkilleveryoneism" sounds dumb (see comments)</li><li>EA Forum <a href="https://forum.effectivealtruism.org/posts/XdhwXppfqrpPL2YDX/an-overview-of-the-ai-safety-funding-situation">blogpost</a> - An Overview of the AI Safety Funding Situation</li><li><a href="https://link.springer.com/book/10.1007/978-3-319-24301-6">Book</a> by Dmitry Chernov and Didier Sornette - Man-made Catastrophes and Risk Information Concealment</li><li>Euronews <a href="https://www.euronews.com/next/2025/08/05/openai-adds-mental-health-safeguards-to-chatgpt-saying-chatbot-has-fed-into-users-delusion">article</a> - OpenAI adds mental health safeguards to ChatGPT, saying chatbot has fed into users’ ‘delusions’</li><li>Pleias <a href="https://pleias.fr/">website</a></li><li>Wikipedia <a href="https://en.wikipedia.org/wiki/Jaywalking">page</a> on Jaywalking</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI, AI Safety, artificial intelligence, safety, control, AI alignment, alignment tax</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:person role="Host" href="https://krawczuk.eu" img="https://img.transistorcdn.com/28Y0jOuxZfj9Mxy7Ap0s5-4FRY6iBNdW0gmiQyl5SAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82M2I1/YzVjMWFmNTlhYzA2/YTgyZGI0YTM3MWY5/Zjc5Mi5qcGVn.jpg">Igor Krawczuk</podcast:person>
      <podcast:person role="Host" href="https://jacob-haimes.github.io" img="https://img.transistorcdn.com/aB1ho3T9pmqLJiyQvdThtfo_OIB680463w__i8MoaRA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMDJl/MzFhZDZkYzc4Yjc0/NTQwNzA4OGRlMGU3/NzU3Ni5wbmc.jpg">Jacob Haimes</podcast:person>
      <podcast:person role="Editor" href="https://muckraikers.transistor.fm/people/chase-precopia">Chase Precopia</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/08948e9d/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>AI, Reasoning or Rambling?</title>
      <itunes:episode>15</itunes:episode>
      <podcast:episode>15</podcast:episode>
      <itunes:title>AI, Reasoning or Rambling?</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">72eb949b-8b5b-4a47-acc4-67785c481155</guid>
      <link>https://kairos.fm/muckraikers/e015</link>
      <description>
        <![CDATA[<p>In this episode, we redefine AI's "reasoning" as mere <em>rambling</em>, exposing the "illusion of thinking" and "Potemkin understanding" in current models. We contrast the classical definition of reasoning (requiring logic and consistency) with Big Tech's new version, which is a generic statement about information processing. We explain how Large Rambling Models generate extensive, often irrelevant, <em>rambling traces</em> that appear to improve benchmarks, largely due to best-of-N sampling and benchmark gaming.</p><p>Words and definitions actually matter! Carelessness leads to misplaced investments and an overestimation of systems that are currently just surprisingly useful autocorrects.</p><p></p><ul><li>(00:00) - Intro</li>
<li>(00:40) - OBB update and Meta's talent acquisition</li>
<li>(03:09) - What are rambling models?</li>
<li>(04:25) - Definitions and polarization</li>
<li>(09:50) - Logic and consistency</li>
<li>(17:00) - Why does this matter?</li>
<li>(21:40) - More likely explanations</li>
<li>(35:05) - The "illusion of thinking" and task complexity</li>
<li>(39:07) - "Potemkin understanding" and surface-level recall</li>
<li>(50:00) - Benchmark gaming and best-of-n sampling</li>
<li>(55:40) - Costs and limitations</li>
<li>(58:24) - Claude's anecdote and the Vending Bench</li>
<li>(01:03:05) - Definitional switch and implications</li>
<li>(01:10:18) - Outro</li>
</ul><strong><br>Links</strong><ul><li>Apple <a href="http://arxiv.org/abs/2506.06941">paper</a> - The Illusion of Thinking</li><li>ICML 2025 <a href="https://arxiv.org/abs/2506.21521">paper</a> - Potemkin Understanding in Large Language Models</li><li><a href="https://arxiv.org/abs/2407.21787">Preprint</a> - Large Language Monkeys: Scaling Inference Compute with Repeated Sampling</li></ul><p><strong>Theoretical understanding</strong></p><ul><li>Max M. Schlereth <a href="https://philarchive.org/rec/SCHAIM-14">Manuscript</a> - The limits of AGI part II</li><li><a href="https://arxiv.org/html/2504.09762v1">Preprint</a> - (How) Do Reasoning Models Reason?</li><li><a href="http://arxiv.org/abs/2503.03961">Preprint</a> - A Little Depth Goes a Long Way: The Expressive Power of Log-Depth Transformers</li><li>NeurIPS 2024 <a href="https://proceedings.neurips.cc/paper_files/paper/2024/hash/3107e4bdb658c79053d7ef59cbc804dd-Abstract-Conference.html">paper</a> - How Far Can Transformers Reason? The Globality Barrier and Inductive Scratchpad</li></ul><p><strong>Empirical explanations</strong></p><ul><li><a href="https://arxiv.org/abs/2502.17578">Preprint</a> - How Do Large Language Monkeys Get Their Power (Laws)?</li><li>Andon Labs <a href="http://arxiv.org/abs/2502.15840">Preprint</a> - Vending-Bench: A Benchmark for Long-Term Coherence of Autonomous Agents</li><li>LeapLab, Tsinghua University and Shanghai Jiao Tong University <a href="http://arxiv.org/abs/2504.13837">paper</a> - Does Reinforcement Learning Really Incentivize Reasoning Capacity</li><li><a href="https://arxiv.org/abs/2505.13697">Preprint</a> - RL in Name Only? Analyzing the Structural Assumptions in RL post-training for LLMs</li><li><a href="https://arxiv.org/abs/2505.18623">Preprint</a> - Mind The Gap: Deep Learning Doesn't Learn Deeply</li><li><a href="https://arxiv.org/abs/2503.14499">Preprint</a> - Measuring AI Ability to Complete Long Tasks</li><li><a href="https://arxiv.org/abs/2410.05229">Preprint</a> - GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models</li></ul><p><strong>Other sources</strong></p><ul><li>Zuck's Haul <a href="https://zuckshaul.com/">webpage</a> - Meta's talent acquisition tracker<ul><li>Hacker News <a href="https://news.ycombinator.com/item?id=44477512">discussion</a> - Opinions from the AI community</li></ul></li><li>Interconnects <a href="https://www.interconnects.ai/p/the-rise-of-reasoning-machines">blogpost</a> - The rise of reasoning machines</li><li>Anthropic <a href="https://www.anthropic.com/research/project-vend-1">blog</a> - Project Vend: Can Claude run a small shop?</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, we redefine AI's "reasoning" as mere <em>rambling</em>, exposing the "illusion of thinking" and "Potemkin understanding" in current models. We contrast the classical definition of reasoning (requiring logic and consistency) with Big Tech's new version, which is a generic statement about information processing. We explain how Large Rambling Models generate extensive, often irrelevant, <em>rambling traces</em> that appear to improve benchmarks, largely due to best-of-N sampling and benchmark gaming.</p><p>Words and definitions actually matter! Carelessness leads to misplaced investments and an overestimation of systems that are currently just surprisingly useful autocorrects.</p><p></p><ul><li>(00:00) - Intro</li>
<li>(00:40) - OBB update and Meta's talent acquisition</li>
<li>(03:09) - What are rambling models?</li>
<li>(04:25) - Definitions and polarization</li>
<li>(09:50) - Logic and consistency</li>
<li>(17:00) - Why does this matter?</li>
<li>(21:40) - More likely explanations</li>
<li>(35:05) - The "illusion of thinking" and task complexity</li>
<li>(39:07) - "Potemkin understanding" and surface-level recall</li>
<li>(50:00) - Benchmark gaming and best-of-n sampling</li>
<li>(55:40) - Costs and limitations</li>
<li>(58:24) - Claude's anecdote and the Vending Bench</li>
<li>(01:03:05) - Definitional switch and implications</li>
<li>(01:10:18) - Outro</li>
</ul><strong><br>Links</strong><ul><li>Apple <a href="http://arxiv.org/abs/2506.06941">paper</a> - The Illusion of Thinking</li><li>ICML 2025 <a href="https://arxiv.org/abs/2506.21521">paper</a> - Potemkin Understanding in Large Language Models</li><li><a href="https://arxiv.org/abs/2407.21787">Preprint</a> - Large Language Monkeys: Scaling Inference Compute with Repeated Sampling</li></ul><p><strong>Theoretical understanding</strong></p><ul><li>Max M. Schlereth <a href="https://philarchive.org/rec/SCHAIM-14">Manuscript</a> - The limits of AGI part II</li><li><a href="https://arxiv.org/html/2504.09762v1">Preprint</a> - (How) Do Reasoning Models Reason?</li><li><a href="http://arxiv.org/abs/2503.03961">Preprint</a> - A Little Depth Goes a Long Way: The Expressive Power of Log-Depth Transformers</li><li>NeurIPS 2024 <a href="https://proceedings.neurips.cc/paper_files/paper/2024/hash/3107e4bdb658c79053d7ef59cbc804dd-Abstract-Conference.html">paper</a> - How Far Can Transformers Reason? The Globality Barrier and Inductive Scratchpad</li></ul><p><strong>Empirical explanations</strong></p><ul><li><a href="https://arxiv.org/abs/2502.17578">Preprint</a> - How Do Large Language Monkeys Get Their Power (Laws)?</li><li>Andon Labs <a href="http://arxiv.org/abs/2502.15840">Preprint</a> - Vending-Bench: A Benchmark for Long-Term Coherence of Autonomous Agents</li><li>LeapLab, Tsinghua University and Shanghai Jiao Tong University <a href="http://arxiv.org/abs/2504.13837">paper</a> - Does Reinforcement Learning Really Incentivize Reasoning Capacity</li><li><a href="https://arxiv.org/abs/2505.13697">Preprint</a> - RL in Name Only? Analyzing the Structural Assumptions in RL post-training for LLMs</li><li><a href="https://arxiv.org/abs/2505.18623">Preprint</a> - Mind The Gap: Deep Learning Doesn't Learn Deeply</li><li><a href="https://arxiv.org/abs/2503.14499">Preprint</a> - Measuring AI Ability to Complete Long Tasks</li><li><a href="https://arxiv.org/abs/2410.05229">Preprint</a> - GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models</li></ul><p><strong>Other sources</strong></p><ul><li>Zuck's Haul <a href="https://zuckshaul.com/">webpage</a> - Meta's talent acquisition tracker<ul><li>Hacker News <a href="https://news.ycombinator.com/item?id=44477512">discussion</a> - Opinions from the AI community</li></ul></li><li>Interconnects <a href="https://www.interconnects.ai/p/the-rise-of-reasoning-machines">blogpost</a> - The rise of reasoning machines</li><li>Anthropic <a href="https://www.anthropic.com/research/project-vend-1">blog</a> - Project Vend: Can Claude run a small shop?</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 14 Jul 2025 09:00:00 -0600</pubDate>
      <author>Jacob Haimes and Igor Krawczuk</author>
      <enclosure url="https://op3.dev/e/media.transistor.fm/3ac97807/0cedd748.mp3" length="49419521" type="audio/mpeg"/>
      <itunes:author>Jacob Haimes and Igor Krawczuk</itunes:author>
      <itunes:duration>4268</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, we redefine AI's "reasoning" as mere <em>rambling</em>, exposing the "illusion of thinking" and "Potemkin understanding" in current models. We contrast the classical definition of reasoning (requiring logic and consistency) with Big Tech's new version, which is a generic statement about information processing. We explain how Large Rambling Models generate extensive, often irrelevant, <em>rambling traces</em> that appear to improve benchmarks, largely due to best-of-N sampling and benchmark gaming.</p><p>Words and definitions actually matter! Carelessness leads to misplaced investments and an overestimation of systems that are currently just surprisingly useful autocorrects.</p><p></p><ul><li>(00:00) - Intro</li>
<li>(00:40) - OBB update and Meta's talent acquisition</li>
<li>(03:09) - What are rambling models?</li>
<li>(04:25) - Definitions and polarization</li>
<li>(09:50) - Logic and consistency</li>
<li>(17:00) - Why does this matter?</li>
<li>(21:40) - More likely explanations</li>
<li>(35:05) - The "illusion of thinking" and task complexity</li>
<li>(39:07) - "Potemkin understanding" and surface-level recall</li>
<li>(50:00) - Benchmark gaming and best-of-n sampling</li>
<li>(55:40) - Costs and limitations</li>
<li>(58:24) - Claude's anecdote and the Vending Bench</li>
<li>(01:03:05) - Definitional switch and implications</li>
<li>(01:10:18) - Outro</li>
</ul><strong><br>Links</strong><ul><li>Apple <a href="http://arxiv.org/abs/2506.06941">paper</a> - The Illusion of Thinking</li><li>ICML 2025 <a href="https://arxiv.org/abs/2506.21521">paper</a> - Potemkin Understanding in Large Language Models</li><li><a href="https://arxiv.org/abs/2407.21787">Preprint</a> - Large Language Monkeys: Scaling Inference Compute with Repeated Sampling</li></ul><p><strong>Theoretical understanding</strong></p><ul><li>Max M. Schlereth <a href="https://philarchive.org/rec/SCHAIM-14">Manuscript</a> - The limits of AGI part II</li><li><a href="https://arxiv.org/html/2504.09762v1">Preprint</a> - (How) Do Reasoning Models Reason?</li><li><a href="http://arxiv.org/abs/2503.03961">Preprint</a> - A Little Depth Goes a Long Way: The Expressive Power of Log-Depth Transformers</li><li>NeurIPS 2024 <a href="https://proceedings.neurips.cc/paper_files/paper/2024/hash/3107e4bdb658c79053d7ef59cbc804dd-Abstract-Conference.html">paper</a> - How Far Can Transformers Reason? The Globality Barrier and Inductive Scratchpad</li></ul><p><strong>Empirical explanations</strong></p><ul><li><a href="https://arxiv.org/abs/2502.17578">Preprint</a> - How Do Large Language Monkeys Get Their Power (Laws)?</li><li>Andon Labs <a href="http://arxiv.org/abs/2502.15840">Preprint</a> - Vending-Bench: A Benchmark for Long-Term Coherence of Autonomous Agents</li><li>LeapLab, Tsinghua University and Shanghai Jiao Tong University <a href="http://arxiv.org/abs/2504.13837">paper</a> - Does Reinforcement Learning Really Incentivize Reasoning Capacity</li><li><a href="https://arxiv.org/abs/2505.13697">Preprint</a> - RL in Name Only? Analyzing the Structural Assumptions in RL post-training for LLMs</li><li><a href="https://arxiv.org/abs/2505.18623">Preprint</a> - Mind The Gap: Deep Learning Doesn't Learn Deeply</li><li><a href="https://arxiv.org/abs/2503.14499">Preprint</a> - Measuring AI Ability to Complete Long Tasks</li><li><a href="https://arxiv.org/abs/2410.05229">Preprint</a> - GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models</li></ul><p><strong>Other sources</strong></p><ul><li>Zuck's Haul <a href="https://zuckshaul.com/">webpage</a> - Meta's talent acquisition tracker<ul><li>Hacker News <a href="https://news.ycombinator.com/item?id=44477512">discussion</a> - Opinions from the AI community</li></ul></li><li>Interconnects <a href="https://www.interconnects.ai/p/the-rise-of-reasoning-machines">blogpost</a> - The rise of reasoning machines</li><li>Anthropic <a href="https://www.anthropic.com/research/project-vend-1">blog</a> - Project Vend: Can Claude run a small shop?</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, AI reasoning, reasoning, reasoning models, AI understanding, potemkin understanding, illusion of thinking, Apple</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:person role="Host" href="https://krawczuk.eu" img="https://img.transistorcdn.com/28Y0jOuxZfj9Mxy7Ap0s5-4FRY6iBNdW0gmiQyl5SAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82M2I1/YzVjMWFmNTlhYzA2/YTgyZGI0YTM3MWY5/Zjc5Mi5qcGVn.jpg">Igor Krawczuk</podcast:person>
      <podcast:person role="Host" href="https://jacob-haimes.github.io" img="https://img.transistorcdn.com/aB1ho3T9pmqLJiyQvdThtfo_OIB680463w__i8MoaRA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMDJl/MzFhZDZkYzc4Yjc0/NTQwNzA4OGRlMGU3/NzU3Ni5wbmc.jpg">Jacob Haimes</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/3ac97807/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>One Big Bad Bill</title>
      <itunes:episode>14</itunes:episode>
      <podcast:episode>14</podcast:episode>
      <itunes:title>One Big Bad Bill</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">9fa7bce2-15a0-44e4-bd2d-0515db8bb4f7</guid>
      <link>https://kairos.fm/muckraikers/e014/</link>
      <description>
        <![CDATA[<p>In this episode, we break down Trump's "One Big Beautiful Bill" and its dystopian AI provisions: automated fraud detection systems, centralized citizen databases, military AI integration, and a 10-year moratorium blocking all state AI regulation. We explore the historical parallels with authoritarian data consolidation and why this represents a fundamental shift away from limited government principles once held by US conservatives.</p><p></p><ul><li>(00:00) - Intro</li>
<li>(01:13) - Bill, general overview</li>
<li>(05:14) - Bill, AI overview</li>
<li>(07:54) - Medicaid fraud detection systems</li>
<li>(11:20) - Bias in AI Systems and Ethical Concerns</li>
<li>(17:58) - Centralization of data</li>
<li>(30:04) - Military integration of AI</li>
<li>(37:05) - Tax incentives for development</li>
<li>(40:57) - Regulatory moratorium</li>
<li>(47:58) - One big bad authoritarian regime</li>
</ul><br><strong>Links</strong><ul><li>Congress <a href="https://www.congress.gov/bill/119th-congress/house-bill/1/text">page</a> on the One Big Beautiful Bill Act</li><li>NYMag <a href="https://nymag.com/intelligencer/article/republicans-admit-they-didnt-read-their-big-beautiful-bill.html">article</a> - Republicans Admit They Didn’t Even Read Their Big Beautiful Bill</li><li>Everything is Horrible <a href="https://www.everythingishorrible.net/p/they-did-vote-for-this-gop-house?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F508bba10-2efa-4a3e-ba6c-8c9998dd53fa_998x1032.png&amp;open=false">Blogpost</a> - They Did Vote For This (GOP House Edition)</li></ul><p><strong>Authoritarianism</strong></p><ul><li>Historical context<ul><li>Holocaust Encyclopedia <a href="https://encyclopedia.ushmm.org/content/en/article/gleichschaltung-coordinating-the-nazi-state">article</a> - Gleichschaltung: Coordinating the Nazi State</li><li>Wikipedia <a href="https://en.wikipedia.org/wiki/1943_Amsterdam_civil_registry_office_bombing">article</a> - 1943 Amsterdam civil registry office bombing</li><li>Wikipedia <a href="https://en.wikipedia.org/wiki/Four_Ds#Decentralisation">article</a> - Four Ds</li></ul></li><li>Conservative leaning, pro-privacy, anti-government<ul><li>Data Governance Hub <a href="https://datagovhub.elliott.gwu.edu/review-and-literature-guide-of-trumps-one-big-beautiful-dataset/">blogpost</a> - Review and Literature Guide of Trump’s “One Big Beautiful Dataset”</li><li>Cato Institute <a href="https://www.cato.org/commentary/you-value-privacy-resist-any-form-national-id-cards">blogpost</a> - If You Value Privacy, Resist Any Form of National ID Cards</li><li>American Enterprise Intitute <a href="https://www.aei.org/technology-and-innovation/the-dangerous-road-to-a-master-file-why-linking-government-databases-is-a-terrible-idea/">blogpost</a> - The Dangerous Road to a “Master File”—Why Linking Government Databases Is a Terrible Idea</li><li>EFF <a href="https://www.eff.org/deeplinks/2025/06/dangers-consolidating-all-government-information">blogpost</a> - The Dangers of Consolidating All Government Information</li></ul></li><li>ACLU against national ID cards<ul><li>ACLU <a href="https://www.aclu.org/issues/privacy-technology/national-id">main page</a> on national ID cards</li><li>ACLU <a href="https://www.aclu.org/documents/national-identification-cards-why-does-aclu-oppose-national-id-system">blogpost</a> - National Identification Cards: Why Does the ACLU Oppose a National I.D. 
System?</li><li>ACLU <a href="https://www.aclu.org/documents/5-problems-national-id-cards">blogpost</a> - 5 Problems with National ID Cards</li></ul></li><li>Inherent unfairness of ML<ul><li>Lighthouse Reports <a href="https://www.lighthousereports.com/investigation/the-limits-of-ethical-ai/">investigation</a> - The Limits of Ethical AI</li><li>Lighthouse Reports <a href="https://www.lighthousereports.com/investigation/suspicion-machines/">investigation</a> - Suspicion Machines</li><li>Amazon Science <a href="https://www.amazon.science/publications/bias-preservation-in-machine-learning-the-legality-of-fairness-metrics-under-eu-non-discrimination-law">publication</a> - Bias preservation in machine learning: The legality of fairness metrics under EU non-discrimination law</li><li>Michigan Technology Law Review <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4331652">article</a> - The Unfairness of Fair Machine Learning: Levelling down and strict egalitarianism by default</li><li>Wired <a href="https://www.wired.com/story/bias-statistics-artificial-intelligence-healthcare/">article</a> - Health Care Bias Is Dangerous. But So Are ‘Fairness’ Algorithms</li></ul></li></ul><p><strong>Military</strong></p><ul><li>WallStreet Journal <a href="https://www.wsj.com/tech/army-reserve-tech-executives-meta-palantir-796f5360">article</a> - The Army’s Newest Recruits: Tech Execs From Meta, OpenAI and More</li><li>Trump <a href="https://www.whitehouse.gov/presidential-actions/2025/06/unleashing-american-drone-dominance/">executive order</a> - Unleashing American Drone Dominance</li><li>Anthropic <a href="https://www.anthropic.com/news/claude-gov-models-for-u-s-national-security-customers">press release</a> - Claude Gov Models for U.S. National Security Customers</li></ul><p><strong>Moratorium on State AI Regulation</strong></p><ul><li>TechPolicy.Press <a href="https://www.techpolicy.press/the-state-ai-laws-likeliest-to-be-blocked-by-a-moratorium/">article</a> - The State AI Laws Likeliest To Be Blocked by a Moratorium</li><li>Forbes <a href="https://www.forbes.com/sites/alonzomartinez/2025/05/09/colorado-ai-law-update-fails/">article</a> - Colorado’s AI Law Still Stands After Update Effort Fails</li></ul><p><strong>Other Sources</strong></p><ul><li>KPMG <a href="https://kpmg.com/kpmg-us/content/dam/kpmg/taxnewsflash/pdf/2025/05/kpmg-report-credits-one-big-beautiful-bill-may-15-2025.pdf">report</a> - Incentives and credits tax provisions in “One Big Beautiful Bill Act”</li><li>The Register <a href="https://www.theregister.com/2025/06/10/trump_admin_leak_government_ai_plans/">article</a> - Trump team leaks AI plans in public GitHub repository</li><li>WallStreet Journal <a href="https://www.wsj.com/articles/to-feed-power-wolfing-ai-lawmakers-are-embracing-nuclear-a461ab7d">article</a> - To Feed Power-Wolfing AI, Lawmakers Are Embracing Nuclear</li><li>CBS Austin <a href="https://cbsaustin.com/news/nation-world/irs-direct-file-program-exceeded-its-expectations-but-faces-uncertain-future-internal-revenue-service-tax-returns-intuit-hr-block-refund-free-file-alliance">article</a> - IRS direct file program exceeded its expectations but faces uncertain future</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, we break down Trump's "One Big Beautiful Bill" and its dystopian AI provisions: automated fraud detection systems, centralized citizen databases, military AI integration, and a 10-year moratorium blocking all state AI regulation. We explore the historical parallels with authoritarian data consolidation and why this represents a fundamental shift away from limited government principles once held by US conservatives.</p><p></p><ul><li>(00:00) - Intro</li>
<li>(01:13) - Bill, general overview</li>
<li>(05:14) - Bill, AI overview</li>
<li>(07:54) - Medicaid fraud detection systems</li>
<li>(11:20) - Bias in AI Systems and Ethical Concerns</li>
<li>(17:58) - Centralization of data</li>
<li>(30:04) - Military integration of AI</li>
<li>(37:05) - Tax incentives for development</li>
<li>(40:57) - Regulatory moratorium</li>
<li>(47:58) - One big bad authoritarian regime</li>
</ul><br><strong>Links</strong><ul><li>Congress <a href="https://www.congress.gov/bill/119th-congress/house-bill/1/text">page</a> on the One Big Beautiful Bill Act</li><li>NYMag <a href="https://nymag.com/intelligencer/article/republicans-admit-they-didnt-read-their-big-beautiful-bill.html">article</a> - Republicans Admit They Didn’t Even Read Their Big Beautiful Bill</li><li>Everything is Horrible <a href="https://www.everythingishorrible.net/p/they-did-vote-for-this-gop-house?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F508bba10-2efa-4a3e-ba6c-8c9998dd53fa_998x1032.png&amp;open=false">Blogpost</a> - They Did Vote For This (GOP House Edition)</li></ul><p><strong>Authoritarianism</strong></p><ul><li>Historical context<ul><li>Holocaust Encyclopedia <a href="https://encyclopedia.ushmm.org/content/en/article/gleichschaltung-coordinating-the-nazi-state">article</a> - Gleichschaltung: Coordinating the Nazi State</li><li>Wikipedia <a href="https://en.wikipedia.org/wiki/1943_Amsterdam_civil_registry_office_bombing">article</a> - 1943 Amsterdam civil registry office bombing</li><li>Wikipedia <a href="https://en.wikipedia.org/wiki/Four_Ds#Decentralisation">article</a> - Four Ds</li></ul></li><li>Conservative leaning, pro-privacy, anti-government<ul><li>Data Governance Hub <a href="https://datagovhub.elliott.gwu.edu/review-and-literature-guide-of-trumps-one-big-beautiful-dataset/">blogpost</a> - Review and Literature Guide of Trump’s “One Big Beautiful Dataset”</li><li>Cato Institute <a href="https://www.cato.org/commentary/you-value-privacy-resist-any-form-national-id-cards">blogpost</a> - If You Value Privacy, Resist Any Form of National ID Cards</li><li>American Enterprise Intitute <a href="https://www.aei.org/technology-and-innovation/the-dangerous-road-to-a-master-file-why-linking-government-databases-is-a-terrible-idea/">blogpost</a> - The Dangerous Road to a “Master File”—Why Linking Government Databases Is a Terrible Idea</li><li>EFF <a href="https://www.eff.org/deeplinks/2025/06/dangers-consolidating-all-government-information">blogpost</a> - The Dangers of Consolidating All Government Information</li></ul></li><li>ACLU against national ID cards<ul><li>ACLU <a href="https://www.aclu.org/issues/privacy-technology/national-id">main page</a> on national ID cards</li><li>ACLU <a href="https://www.aclu.org/documents/national-identification-cards-why-does-aclu-oppose-national-id-system">blogpost</a> - National Identification Cards: Why Does the ACLU Oppose a National I.D. 
System?</li><li>ACLU <a href="https://www.aclu.org/documents/5-problems-national-id-cards">blogpost</a> - 5 Problems with National ID Cards</li></ul></li><li>Inherent unfairness of ML<ul><li>Lighthouse Reports <a href="https://www.lighthousereports.com/investigation/the-limits-of-ethical-ai/">investigation</a> - The Limits of Ethical AI</li><li>Lighthouse Reports <a href="https://www.lighthousereports.com/investigation/suspicion-machines/">investigation</a> - Suspicion Machines</li><li>Amazon Science <a href="https://www.amazon.science/publications/bias-preservation-in-machine-learning-the-legality-of-fairness-metrics-under-eu-non-discrimination-law">publication</a> - Bias preservation in machine learning: The legality of fairness metrics under EU non-discrimination law</li><li>Michigan Technology Law Review <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4331652">article</a> - The Unfairness of Fair Machine Learning: Levelling down and strict egalitarianism by default</li><li>Wired <a href="https://www.wired.com/story/bias-statistics-artificial-intelligence-healthcare/">article</a> - Health Care Bias Is Dangerous. But So Are ‘Fairness’ Algorithms</li></ul></li></ul><p><strong>Military</strong></p><ul><li>WallStreet Journal <a href="https://www.wsj.com/tech/army-reserve-tech-executives-meta-palantir-796f5360">article</a> - The Army’s Newest Recruits: Tech Execs From Meta, OpenAI and More</li><li>Trump <a href="https://www.whitehouse.gov/presidential-actions/2025/06/unleashing-american-drone-dominance/">executive order</a> - Unleashing American Drone Dominance</li><li>Anthropic <a href="https://www.anthropic.com/news/claude-gov-models-for-u-s-national-security-customers">press release</a> - Claude Gov Models for U.S. National Security Customers</li></ul><p><strong>Moratorium on State AI Regulation</strong></p><ul><li>TechPolicy.Press <a href="https://www.techpolicy.press/the-state-ai-laws-likeliest-to-be-blocked-by-a-moratorium/">article</a> - The State AI Laws Likeliest To Be Blocked by a Moratorium</li><li>Forbes <a href="https://www.forbes.com/sites/alonzomartinez/2025/05/09/colorado-ai-law-update-fails/">article</a> - Colorado’s AI Law Still Stands After Update Effort Fails</li></ul><p><strong>Other Sources</strong></p><ul><li>KPMG <a href="https://kpmg.com/kpmg-us/content/dam/kpmg/taxnewsflash/pdf/2025/05/kpmg-report-credits-one-big-beautiful-bill-may-15-2025.pdf">report</a> - Incentives and credits tax provisions in “One Big Beautiful Bill Act”</li><li>The Register <a href="https://www.theregister.com/2025/06/10/trump_admin_leak_government_ai_plans/">article</a> - Trump team leaks AI plans in public GitHub repository</li><li>WallStreet Journal <a href="https://www.wsj.com/articles/to-feed-power-wolfing-ai-lawmakers-are-embracing-nuclear-a461ab7d">article</a> - To Feed Power-Wolfing AI, Lawmakers Are Embracing Nuclear</li><li>CBS Austin <a href="https://cbsaustin.com/news/nation-world/irs-direct-file-program-exceeded-its-expectations-but-faces-uncertain-future-internal-revenue-service-tax-returns-intuit-hr-block-refund-free-file-alliance">article</a> - IRS direct file program exceeded its expectations but faces uncertain future</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 23 Jun 2025 10:00:00 -0600</pubDate>
      <author>Jacob Haimes and Igor Krawczuk</author>
      <enclosure url="https://op3.dev/e/media.transistor.fm/4feaace0/94c7e8e2.mp3" length="37873371" type="audio/mpeg"/>
      <itunes:author>Jacob Haimes and Igor Krawczuk</itunes:author>
      <itunes:duration>3180</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, we break down Trump's "One Big Beautiful Bill" and its dystopian AI provisions: automated fraud detection systems, centralized citizen databases, military AI integration, and a 10-year moratorium blocking all state AI regulation. We explore the historical parallels with authoritarian data consolidation and why this represents a fundamental shift away from limited government principles once held by US conservatives.</p><p></p><ul><li>(00:00) - Intro</li>
<li>(01:13) - Bill, general overview</li>
<li>(05:14) - Bill, AI overview</li>
<li>(07:54) - Medicaid fraud detection systems</li>
<li>(11:20) - Bias in AI Systems and Ethical Concerns</li>
<li>(17:58) - Centralization of data</li>
<li>(30:04) - Military integration of AI</li>
<li>(37:05) - Tax incentives for development</li>
<li>(40:57) - Regulatory moratorium</li>
<li>(47:58) - One big bad authoritarian regime</li>
</ul><br><strong>Links</strong><ul><li>Congress <a href="https://www.congress.gov/bill/119th-congress/house-bill/1/text">page</a> on the One Big Beautiful Bill Act</li><li>NYMag <a href="https://nymag.com/intelligencer/article/republicans-admit-they-didnt-read-their-big-beautiful-bill.html">article</a> - Republicans Admit They Didn’t Even Read Their Big Beautiful Bill</li><li>Everything is Horrible <a href="https://www.everythingishorrible.net/p/they-did-vote-for-this-gop-house?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F508bba10-2efa-4a3e-ba6c-8c9998dd53fa_998x1032.png&amp;open=false">Blogpost</a> - They Did Vote For This (GOP House Edition)</li></ul><p><strong>Authoritarianism</strong></p><ul><li>Historical context<ul><li>Holocaust Encyclopedia <a href="https://encyclopedia.ushmm.org/content/en/article/gleichschaltung-coordinating-the-nazi-state">article</a> - Gleichschaltung: Coordinating the Nazi State</li><li>Wikipedia <a href="https://en.wikipedia.org/wiki/1943_Amsterdam_civil_registry_office_bombing">article</a> - 1943 Amsterdam civil registry office bombing</li><li>Wikipedia <a href="https://en.wikipedia.org/wiki/Four_Ds#Decentralisation">article</a> - Four Ds</li></ul></li><li>Conservative leaning, pro-privacy, anti-government<ul><li>Data Governance Hub <a href="https://datagovhub.elliott.gwu.edu/review-and-literature-guide-of-trumps-one-big-beautiful-dataset/">blogpost</a> - Review and Literature Guide of Trump’s “One Big Beautiful Dataset”</li><li>Cato Institute <a href="https://www.cato.org/commentary/you-value-privacy-resist-any-form-national-id-cards">blogpost</a> - If You Value Privacy, Resist Any Form of National ID Cards</li><li>American Enterprise Intitute <a href="https://www.aei.org/technology-and-innovation/the-dangerous-road-to-a-master-file-why-linking-government-databases-is-a-terrible-idea/">blogpost</a> - The Dangerous Road to a “Master File”—Why Linking Government Databases Is a Terrible Idea</li><li>EFF <a href="https://www.eff.org/deeplinks/2025/06/dangers-consolidating-all-government-information">blogpost</a> - The Dangers of Consolidating All Government Information</li></ul></li><li>ACLU against national ID cards<ul><li>ACLU <a href="https://www.aclu.org/issues/privacy-technology/national-id">main page</a> on national ID cards</li><li>ACLU <a href="https://www.aclu.org/documents/national-identification-cards-why-does-aclu-oppose-national-id-system">blogpost</a> - National Identification Cards: Why Does the ACLU Oppose a National I.D. 
System?</li><li>ACLU <a href="https://www.aclu.org/documents/5-problems-national-id-cards">blogpost</a> - 5 Problems with National ID Cards</li></ul></li><li>Inherent unfairness of ML<ul><li>Lighthouse Reports <a href="https://www.lighthousereports.com/investigation/the-limits-of-ethical-ai/">investigation</a> - The Limits of Ethical AI</li><li>Lighthouse Reports <a href="https://www.lighthousereports.com/investigation/suspicion-machines/">investigation</a> - Suspicion Machines</li><li>Amazon Science <a href="https://www.amazon.science/publications/bias-preservation-in-machine-learning-the-legality-of-fairness-metrics-under-eu-non-discrimination-law">publication</a> - Bias preservation in machine learning: The legality of fairness metrics under EU non-discrimination law</li><li>Michigan Technology Law Review <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4331652">article</a> - The Unfairness of Fair Machine Learning: Levelling down and strict egalitarianism by default</li><li>Wired <a href="https://www.wired.com/story/bias-statistics-artificial-intelligence-healthcare/">article</a> - Health Care Bias Is Dangerous. But So Are ‘Fairness’ Algorithms</li></ul></li></ul><p><strong>Military</strong></p><ul><li>WallStreet Journal <a href="https://www.wsj.com/tech/army-reserve-tech-executives-meta-palantir-796f5360">article</a> - The Army’s Newest Recruits: Tech Execs From Meta, OpenAI and More</li><li>Trump <a href="https://www.whitehouse.gov/presidential-actions/2025/06/unleashing-american-drone-dominance/">executive order</a> - Unleashing American Drone Dominance</li><li>Anthropic <a href="https://www.anthropic.com/news/claude-gov-models-for-u-s-national-security-customers">press release</a> - Claude Gov Models for U.S. National Security Customers</li></ul><p><strong>Moratorium on State AI Regulation</strong></p><ul><li>TechPolicy.Press <a href="https://www.techpolicy.press/the-state-ai-laws-likeliest-to-be-blocked-by-a-moratorium/">article</a> - The State AI Laws Likeliest To Be Blocked by a Moratorium</li><li>Forbes <a href="https://www.forbes.com/sites/alonzomartinez/2025/05/09/colorado-ai-law-update-fails/">article</a> - Colorado’s AI Law Still Stands After Update Effort Fails</li></ul><p><strong>Other Sources</strong></p><ul><li>KPMG <a href="https://kpmg.com/kpmg-us/content/dam/kpmg/taxnewsflash/pdf/2025/05/kpmg-report-credits-one-big-beautiful-bill-may-15-2025.pdf">report</a> - Incentives and credits tax provisions in “One Big Beautiful Bill Act”</li><li>The Register <a href="https://www.theregister.com/2025/06/10/trump_admin_leak_government_ai_plans/">article</a> - Trump team leaks AI plans in public GitHub repository</li><li>WallStreet Journal <a href="https://www.wsj.com/articles/to-feed-power-wolfing-ai-lawmakers-are-embracing-nuclear-a461ab7d">article</a> - To Feed Power-Wolfing AI, Lawmakers Are Embracing Nuclear</li><li>CBS Austin <a href="https://cbsaustin.com/news/nation-world/irs-direct-file-program-exceeded-its-expectations-but-faces-uncertain-future-internal-revenue-service-tax-returns-intuit-hr-block-refund-free-file-alliance">article</a> - IRS direct file program exceeded its expectations but faces uncertain future</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, AI governance, One Big Beautiful Bill, Trump, One Big Beautiful Dataset, military AI, United States AI, technology</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:person role="Host" href="https://krawczuk.eu" img="https://img.transistorcdn.com/28Y0jOuxZfj9Mxy7Ap0s5-4FRY6iBNdW0gmiQyl5SAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82M2I1/YzVjMWFmNTlhYzA2/YTgyZGI0YTM3MWY5/Zjc5Mi5qcGVn.jpg">Igor Krawczuk</podcast:person>
      <podcast:person role="Host" href="https://jacob-haimes.github.io" img="https://img.transistorcdn.com/aB1ho3T9pmqLJiyQvdThtfo_OIB680463w__i8MoaRA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMDJl/MzFhZDZkYzc4Yjc0/NTQwNzA4OGRlMGU3/NzU3Ni5wbmc.jpg">Jacob Haimes</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/4feaace0/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/4feaace0/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Breaking Down the Economics of AI</title>
      <itunes:episode>13</itunes:episode>
      <podcast:episode>13</podcast:episode>
      <itunes:title>Breaking Down the Economics of AI</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f282e506-4ef8-45b7-80c2-172c77e12690</guid>
      <link>https://kairos.fm/muckraikers/e013/</link>
      <description>
        <![CDATA[<p>Jacob and Igor tackle the wild claims about AI's economic impact by examining three main clusters of arguments: automating expensive tasks like programming, removing "cost centers" like call centers and corporate art, and claims of explosive growth. They dig into the actual data, debunk the hype, and explain why most productivity claims don't hold up in practice. Plus: MIT denounces a paper with fabricated data, and Grok randomly promotes white genocide myths.</p><p><br></p><ul><li>(00:00) - Recording date + intro</li>
<li>(00:52) - MIT denounces paper</li>
<li>(04:09) - Grok's white genocide</li>
<li>(06:23) - Butthole convergence</li>
<li>(07:13) - AI and the economy</li>
<li>(14:50) - Automating profit centers</li>
<li>(29:46) - Removing the last cost centers</li>
<li>(47:16) - "This time is different" (explosive growth)</li>
<li>(57:55) - AlphaEvolve, optimization, and slippage</li>
</ul><br><strong><br>Links</strong><ul><li>University of Chicago <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5219933">working paper</a> - Large Language Models, Small Labor Market Effects</li><li>OECD <a href="https://www.oecd.org/en/publications/miracle-or-myth-assessing-the-macroeconomic-productivity-gains-from-artificial-intelligence_b524a072-en.html">working paper</a> - Miracle or Myth? Assessing the macroeconomic productivity gains from Artificial Intelligence</li><li>Epoch AI <a href="https://epoch.ai/blog/explosive-growth-from-ai-a-review-of-the-arguments">blogpost</a> - Explosive Growth from AI: A Review of the Arguments</li><li>Business Insider <a href="https://www.businessinsider.com/anthropic-ceo-ai-90-percent-code-3-to-6-months-2025-3">article</a> - Anthropic CEO: AI Will Be Writing 90% of Code in 3 to 6 Months</li><li><a href="https://arxiv.org/abs/2306.02519">Preprint</a> - Transformative AGI by 2043 is &lt;1% likely</li></ul><p><strong>Automating profit centers</strong></p><ul><li>Pivot to AI <a href="https://pivot-to-ai.com/2025/05/13/if-ai-is-so-good-at-coding-where-are-the-open-source-contributions/">blogpost</a> - If AI is so good at coding … where are the open source contributions?</li><li>Ben Evans' <a href="https://mastodon.social/@kittylyst/114397697851381604">Mastodon post</a> - "Show me the pull requests"</li><li>NY Times <a href="https://www.nytimes.com/2025/05/14/technology/ai-jobs-radiologists-mayo-clinic.html#:~:text=Nine%20years%20ago%2C%20one%20of,outperform%20humans%20in%20that%20field">article</a> - Your A.I. Radiologist Will Not Be With You Soon</li><li>FastCompany <a href="https://www.fastcompany.com/91325384/companies-adopting-ai-first-strategies-environmental-impact-duolingo-shopify">article</a> - More companies are adopting 'AI-first' strategies. 
Here's how it could impact the environment</li><li>Forbes <a href="https://www.forbes.com/sites/quickerbettertech/2025/04/13/business-tech-news-shopify-ceo-says-ai-first-before-employees/">article</a> - Business Tech News: Shopify CEO Says AI First Before Employees</li><li>Newsroom <a href="https://newsroom.ibm.com/2025-05-06-ibm-study-ceos-double-down-on-ai-while-navigating-enterprise-hurdles">article</a> - IBM Study: CEOs Double Down on AI While Navigating Enterprise Hurdles</li><li>PNAS <a href="https://www.pnas.org/doi/10.1073/pnas.2426766122">research article</a> - Evidence of a social evaluation penalty for using AI</li><li>Ars Technica <a href="https://arstechnica.com/ai/2025/05/ai-use-damages-professional-reputation-study-suggests/">article</a> - AI use damages professional reputation, study suggests</li></ul><p><strong>Removing cost centers</strong></p><ul><li>The Register <a href="https://www.theregister.com/2025/05/15/anthopics_law_firm_blames_claude_hallucinations/">article</a> - Anthopic's law firm blames Claude hallucinations for errors</li><li>Fortune <a href="https://fortune.com/2025/05/09/klarna-ai-humans-return-on-investment/">article</a> - Klarna plans to hire humans again, as new landmark survey reveals most AI projects fail to deliver</li><li>Wikipedia <a href="https://en.wikipedia.org/wiki/The_Market_for_Lemons">article</a> - The Market for Lemons</li></ul><p><strong>AlphaEvolve</strong></p><ul><li>Deepmind <a href="https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/">press release</a> - AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms</li><li>Deepmind <a href="https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/AlphaEvolve.pdf">white paper</a> - AlphaEvolve: A coding agent for scientific and algorithmic discovery</li></ul><p><strong>Off Topic</strong></p><ul><li>VelvetShark <a href="https://velvetshark.com/ai-company-logos-that-look-like-buttholes">blogpost</a> - Why do AI company logos look like buttholes?</li><li>MIT Economics <a href="https://economics.mit.edu/news/assuring-accurate-research-record">press release</a> - Assuring an accurate research record</li><li>Pivot to AI <a href="https://pivot-to-ai.com/2025/05/17/how-to-make-a-splash-in-ai-economics-fake-your-data/">blogpost</a> - How to make a splash in AI economics: fake your data</li><li>Pivot to AI <a href="https://pivot-to-ai.com/2025/05/15/even-elon-musk-cant-make-grok-claim-a-white-genocide-in-south-africa/">blogpost</a> - Even Elon Musk can’t make Grok claim a ‘white genocide’ in South Africa</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Jacob and Igor tackle the wild claims about AI's economic impact by examining three main clusters of arguments: automating expensive tasks like programming, removing "cost centers" like call centers and corporate art, and claims of explosive growth. They dig into the actual data, debunk the hype, and explain why most productivity claims don't hold up in practice. Plus: MIT denounces a paper with fabricated data, and Grok randomly promotes white genocide myths.</p><p><br></p><ul><li>(00:00) - Recording date + intro</li>
<li>(00:52) - MIT denounces paper</li>
<li>(04:09) - Grok's white genocide</li>
<li>(06:23) - Butthole convergence</li>
<li>(07:13) - AI and the economy</li>
<li>(14:50) - Automating profit centers</li>
<li>(29:46) - Removing the last cost centers</li>
<li>(47:16) - "This time is different" (explosive growth)</li>
<li>(57:55) - AlphaEvolve, optimization, and slippage</li>
</ul><br><strong><br>Links</strong><ul><li>University of Chicago <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5219933">working paper</a> - Large Language Models, Small Labor Market Effects</li><li>OECD <a href="https://www.oecd.org/en/publications/miracle-or-myth-assessing-the-macroeconomic-productivity-gains-from-artificial-intelligence_b524a072-en.html">working paper</a> - Miracle or Myth? Assessing the macroeconomic productivity gains from Artificial Intelligence</li><li>Epoch AI <a href="https://epoch.ai/blog/explosive-growth-from-ai-a-review-of-the-arguments">blogpost</a> - Explosive Growth from AI: A Review of the Arguments</li><li>Business Insider <a href="https://www.businessinsider.com/anthropic-ceo-ai-90-percent-code-3-to-6-months-2025-3">article</a> - Anthropic CEO: AI Will Be Writing 90% of Code in 3 to 6 Months</li><li><a href="https://arxiv.org/abs/2306.02519">Preprint</a> - Transformative AGI by 2043 is &lt;1% likely</li></ul><p><strong>Automating profit centers</strong></p><ul><li>Pivot to AI <a href="https://pivot-to-ai.com/2025/05/13/if-ai-is-so-good-at-coding-where-are-the-open-source-contributions/">blogpost</a> - If AI is so good at coding … where are the open source contributions?</li><li>Ben Evans' <a href="https://mastodon.social/@kittylyst/114397697851381604">Mastodon post</a> - "Show me the pull requests"</li><li>NY Times <a href="https://www.nytimes.com/2025/05/14/technology/ai-jobs-radiologists-mayo-clinic.html#:~:text=Nine%20years%20ago%2C%20one%20of,outperform%20humans%20in%20that%20field">article</a> - Your A.I. Radiologist Will Not Be With You Soon</li><li>FastCompany <a href="https://www.fastcompany.com/91325384/companies-adopting-ai-first-strategies-environmental-impact-duolingo-shopify">article</a> - More companies are adopting 'AI-first' strategies. 
Here's how it could impact the environment</li><li>Forbes <a href="https://www.forbes.com/sites/quickerbettertech/2025/04/13/business-tech-news-shopify-ceo-says-ai-first-before-employees/">article</a> - Business Tech News: Shopify CEO Says AI First Before Employees</li><li>Newsroom <a href="https://newsroom.ibm.com/2025-05-06-ibm-study-ceos-double-down-on-ai-while-navigating-enterprise-hurdles">article</a> - IBM Study: CEOs Double Down on AI While Navigating Enterprise Hurdles</li><li>PNAS <a href="https://www.pnas.org/doi/10.1073/pnas.2426766122">research article</a> - Evidence of a social evaluation penalty for using AI</li><li>Ars Technica <a href="https://arstechnica.com/ai/2025/05/ai-use-damages-professional-reputation-study-suggests/">article</a> - AI use damages professional reputation, study suggests</li></ul><p><strong>Removing cost centers</strong></p><ul><li>The Register <a href="https://www.theregister.com/2025/05/15/anthopics_law_firm_blames_claude_hallucinations/">article</a> - Anthopic's law firm blames Claude hallucinations for errors</li><li>Fortune <a href="https://fortune.com/2025/05/09/klarna-ai-humans-return-on-investment/">article</a> - Klarna plans to hire humans again, as new landmark survey reveals most AI projects fail to deliver</li><li>Wikipedia <a href="https://en.wikipedia.org/wiki/The_Market_for_Lemons">article</a> - The Market for Lemons</li></ul><p><strong>AlphaEvolve</strong></p><ul><li>Deepmind <a href="https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/">press release</a> - AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms</li><li>Deepmind <a href="https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/AlphaEvolve.pdf">white paper</a> - AlphaEvolve: A coding agent for scientific and algorithmic discovery</li></ul><p><strong>Off Topic</strong></p><ul><li>VelvetShark <a href="https://velvetshark.com/ai-company-logos-that-look-like-buttholes">blogpost</a> - Why do AI company logos look like buttholes?</li><li>MIT Economics <a href="https://economics.mit.edu/news/assuring-accurate-research-record">press release</a> - Assuring an accurate research record</li><li>Pivot to AI <a href="https://pivot-to-ai.com/2025/05/17/how-to-make-a-splash-in-ai-economics-fake-your-data/">blogpost</a> - How to make a splash in AI economics: fake your data</li><li>Pivot to AI <a href="https://pivot-to-ai.com/2025/05/15/even-elon-musk-cant-make-grok-claim-a-white-genocide-in-south-africa/">blogpost</a> - Even Elon Musk can’t make Grok claim a ‘white genocide’ in South Africa</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 26 May 2025 10:00:00 -0600</pubDate>
      <author>Jacob Haimes and Igor Krawczuk</author>
      <enclosure url="https://op3.dev/e/media.transistor.fm/baa6d229/185f1eb4.mp3" length="52105100" type="audio/mpeg"/>
      <itunes:author>Jacob Haimes and Igor Krawczuk</itunes:author>
      <itunes:duration>4007</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Jacob and Igor tackle the wild claims about AI's economic impact by examining three main clusters of arguments: automating expensive tasks like programming, removing "cost centers" like call centers and corporate art, and claims of explosive growth. They dig into the actual data, debunk the hype, and explain why most productivity claims don't hold up in practice. Plus: MIT denounces a paper with fabricated data, and Grok randomly promotes white genocide myths.</p><p><br></p><ul><li>(00:00) - Recording date + intro</li>
<li>(00:52) - MIT denounces paper</li>
<li>(04:09) - Grok's white genocide</li>
<li>(06:23) - Butthole convergence</li>
<li>(07:13) - AI and the economy</li>
<li>(14:50) - Automating profit centers</li>
<li>(29:46) - Removing the last cost centers</li>
<li>(47:16) - "This time is different" (explosive growth)</li>
<li>(57:55) - AlphaEvolve, optimization, and slippage</li>
</ul><br><strong><br>Links</strong><ul><li>University of Chicago <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5219933">working paper</a> - Large Language Models, Small Labor Market Effects</li><li>OECD <a href="https://www.oecd.org/en/publications/miracle-or-myth-assessing-the-macroeconomic-productivity-gains-from-artificial-intelligence_b524a072-en.html">working paper</a> - Miracle or Myth? Assessing the macroeconomic productivity gains from Artificial Intelligence</li><li>Epoch AI <a href="https://epoch.ai/blog/explosive-growth-from-ai-a-review-of-the-arguments">blogpost</a> - Explosive Growth from AI: A Review of the Arguments</li><li>Business Insider <a href="https://www.businessinsider.com/anthropic-ceo-ai-90-percent-code-3-to-6-months-2025-3">article</a> - Anthropic CEO: AI Will Be Writing 90% of Code in 3 to 6 Months</li><li><a href="https://arxiv.org/abs/2306.02519">Preprint</a> - Transformative AGI by 2043 is &lt;1% likely</li></ul><p><strong>Automating profit centers</strong></p><ul><li>Pivot to AI <a href="https://pivot-to-ai.com/2025/05/13/if-ai-is-so-good-at-coding-where-are-the-open-source-contributions/">blogpost</a> - If AI is so good at coding … where are the open source contributions?</li><li>Ben Evans' <a href="https://mastodon.social/@kittylyst/114397697851381604">Mastodon post</a> - "Show me the pull requests"</li><li>NY Times <a href="https://www.nytimes.com/2025/05/14/technology/ai-jobs-radiologists-mayo-clinic.html#:~:text=Nine%20years%20ago%2C%20one%20of,outperform%20humans%20in%20that%20field">article</a> - Your A.I. Radiologist Will Not Be With You Soon</li><li>FastCompany <a href="https://www.fastcompany.com/91325384/companies-adopting-ai-first-strategies-environmental-impact-duolingo-shopify">article</a> - More companies are adopting 'AI-first' strategies. 
Here's how it could impact the environment</li><li>Forbes <a href="https://www.forbes.com/sites/quickerbettertech/2025/04/13/business-tech-news-shopify-ceo-says-ai-first-before-employees/">article</a> - Business Tech News: Shopify CEO Says AI First Before Employees</li><li>Newsroom <a href="https://newsroom.ibm.com/2025-05-06-ibm-study-ceos-double-down-on-ai-while-navigating-enterprise-hurdles">article</a> - IBM Study: CEOs Double Down on AI While Navigating Enterprise Hurdles</li><li>PNAS <a href="https://www.pnas.org/doi/10.1073/pnas.2426766122">research article</a> - Evidence of a social evaluation penalty for using AI</li><li>Ars Technica <a href="https://arstechnica.com/ai/2025/05/ai-use-damages-professional-reputation-study-suggests/">article</a> - AI use damages professional reputation, study suggests</li></ul><p><strong>Removing cost centers</strong></p><ul><li>The Register <a href="https://www.theregister.com/2025/05/15/anthopics_law_firm_blames_claude_hallucinations/">article</a> - Anthopic's law firm blames Claude hallucinations for errors</li><li>Fortune <a href="https://fortune.com/2025/05/09/klarna-ai-humans-return-on-investment/">article</a> - Klarna plans to hire humans again, as new landmark survey reveals most AI projects fail to deliver</li><li>Wikipedia <a href="https://en.wikipedia.org/wiki/The_Market_for_Lemons">article</a> - The Market for Lemons</li></ul><p><strong>AlphaEvolve</strong></p><ul><li>Deepmind <a href="https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/">press release</a> - AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms</li><li>Deepmind <a href="https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/AlphaEvolve.pdf">white paper</a> - AlphaEvolve: A coding agent for scientific and algorithmic discovery</li></ul><p><strong>Off Topic</strong></p><ul><li>VelvetShark <a href="https://velvetshark.com/ai-company-logos-that-look-like-buttholes">blogpost</a> - Why do AI company logos look like buttholes?</li><li>MIT Economics <a href="https://economics.mit.edu/news/assuring-accurate-research-record">press release</a> - Assuring an accurate research record</li><li>Pivot to AI <a href="https://pivot-to-ai.com/2025/05/17/how-to-make-a-splash-in-ai-economics-fake-your-data/">blogpost</a> - How to make a splash in AI economics: fake your data</li><li>Pivot to AI <a href="https://pivot-to-ai.com/2025/05/15/even-elon-musk-cant-make-grok-claim-a-white-genocide-in-south-africa/">blogpost</a> - Even Elon Musk can’t make Grok claim a ‘white genocide’ in South Africa</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:person role="Host" href="https://krawczuk.eu" img="https://img.transistorcdn.com/28Y0jOuxZfj9Mxy7Ap0s5-4FRY6iBNdW0gmiQyl5SAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82M2I1/YzVjMWFmNTlhYzA2/YTgyZGI0YTM3MWY5/Zjc5Mi5qcGVn.jpg">Igor Krawczuk</podcast:person>
      <podcast:person role="Host" href="https://jacob-haimes.github.io" img="https://img.transistorcdn.com/aB1ho3T9pmqLJiyQvdThtfo_OIB680463w__i8MoaRA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMDJl/MzFhZDZkYzc4Yjc0/NTQwNzA4OGRlMGU3/NzU3Ni5wbmc.jpg">Jacob Haimes</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/baa6d229/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>DeepSeek: 2 Months Out</title>
      <itunes:episode>12</itunes:episode>
      <podcast:episode>12</podcast:episode>
      <itunes:title>DeepSeek: 2 Months Out</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">410597f1-320e-4b8e-af6f-86452af663e2</guid>
      <link>https://kairos.fm/muckraikers/e012/</link>
      <description>
        <![CDATA[<p>DeepSeek has been out for over 2 months now, and things have begun to settle down. We take this opportunity to contextualize the developments that have occurred in its wake, both within the AI industry and the world economy. As systems get more "agentic" and users are willing to spend increasing amounts of time waiting for their outputs, the value of supposed "reasoning" models continues to be peddled by AI system developers, but does the data really back these claims?</p><p>Check out our DeepSeek <a href="https://kairos.fm/muckraikers/b012/">minisode</a> for a snappier overview!</p><p>EPISODE RECORDED 2025.03.30</p><p><br></p><ul><li>(00:40) - DeepSeek R1 recap</li>
<li>(02:46) - What makes it new?</li>
<li>(08:53) - What is reasoning?</li>
<li>(14:51) - Limitations of reasoning models (why we hate reasoning)</li>
<li>(31:16) - Claims about R1 training on OpenAI</li>
<li>(37:30) - “Deep Research”</li>
<li>(49:13) - Developments and drama in the AI industry</li>
<li>(56:26) - Proposed economic value</li>
<li>(01:14:20) - US government involvement</li>
<li>(01:23:28) - OpenAI uses MCP</li>
<li>(01:28:15) - Outro</li>
</ul><br><strong><br>Links</strong><ul><li>DeepSeek <a href="https://www.deepseek.com/">website</a></li><li>DeepSeek <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf">paper</a></li><li>DeepSeek <a href="https://api-docs.deepseek.com/quick_start/pricing">docs</a> - Models and Pricing</li><li>DeepSeek <a href="https://github.com/deepseek-ai/3FS">repo</a> - 3FS</li></ul><p><strong>Understanding DeepSeek/DeepResearch</strong></p><ul><li>Explainers<ul><li>Language Models &amp; Co. <a href="https://newsletter.languagemodels.co/p/the-illustrated-deepseek-r1">article</a> - The Illustrated DeepSeek-R1</li><li>Towards Data Science <a href="https://towardsdatascience.com/deepseek-v3-explained-1-multi-head-latent-attention-ed6bee2a67c4/">article</a> - DeepSeek-V3 Explained 1: Multi-head Latent Attention</li><li>Jina.ai <a href="https://jina.ai/news/a-practical-guide-to-implementing-deepsearch-deepresearch/">article</a> - A Practical Guide to Implementing DeepSearch/DeepResearch</li><li>Han, Not Solo <a href="https://leehanchung.github.io/blogs/2025/02/26/deep-research/">blogpost</a> - The Differences between Deep Research, Deep Research, and Deep Research</li></ul></li><li>Analysis and Research<ul><li><a href="https://arxiv.org/abs/2503.20783">Preprint</a> - Understanding R1-Zero-Like Training: A Critical Perspective</li><li><a href="https://oatllm.notion.site/oat-zero">Blogpost</a> - There May Not be Aha Moment in R1-Zero-like Training — A Pilot Study</li><li><a href="https://arxiv.org/abs/2407.21787">Preprint</a> - Large Language Monkeys: Scaling Inference Compute with Repeated Sampling</li><li><a href="https://arxiv.org/abs/2503.08679">Preprint</a> - Chain-of-Thought Reasoning In The Wild Is Not Always Faithful</li></ul></li></ul><p><strong>Fallout coverage</strong></p><ul><li>TechCrunch <a href="https://techcrunch.com/2025/03/13/openai-calls-deepseek-state-controlled-calls-for-bans-on-prc-produced-models/">article</a> - OpenAI calls DeepSeek 'state-controlled,' calls for bans on 'PRC-produced' models</li><li>The Verge <a href="https://www.theverge.com/news/601195/openai-evidence-deepseek-distillation-ai-data">article</a> - OpenAI has evidence that its models helped train China’s DeepSeek</li><li>Interesting Engineer <a href="https://interestingengineering.com/culture/deepseeks-ai-training-cost-billion?group=test_a">article</a> - $6M myth: DeepSeek’s true AI cost is 216x higher at $1.3B, research reveals</li><li>Ars Technica <a href="https://arstechnica.com/ai/2025/01/microsoft-embraces-openai-competitor-deepseek-on-its-ai-hosting-service/">article</a> - Microsoft now hosts AI model accused of copying OpenAI data</li><li>The Signal <a href="https://signalscv.com/2025/01/nvidia-loses-nearly-600-billion-in-deepseek-crash/">article</a> - Nvidia loses nearly $600 billion in DeepSeek crash</li><li>Yahoo Finance <a href="https://finance.yahoo.com/news/the-magnificent-7-stocks-are-having-their-worst-quarter-in-more-than-2-years-190335944.html">article</a> - The 'Magnificent 7' stocks are having their worst quarter in more than 2 years</li><li>Reuters <a href="https://www.reuters.com/technology/microsoft-pulls-back-more-data-center-leases-us-europe-analysts-say-2025-03-26/">article</a> - Microsoft pulls back from more data center leases in US and Europe, analysts say</li></ul><p><strong>US governance</strong></p><ul><li>National Law Review <a href="https://natlawreview.com/article/three-states-ban-deepseek-use-state-devices-and-networks">article</a> - Three States Ban DeepSeek 
Use on State Devices and Networks</li><li>CNN <a href="https://edition.cnn.com/2025/02/06/tech/deepseek-ai-us-ban-bill/index.html">article</a> - US lawmakers want to ban DeepSeek from government devices</li><li>House <a href="https://www.congress.gov/bill/119th-congress/house-bill/1121">bill</a> - No DeepSeek on Government Devices Act</li><li>Senate <a href="https://www.congress.gov/bill/119th-congress/senate-bill/321">bill</a> - Decoupling America's Artificial Intelligence Capabilities from China Act of 2025</li></ul><p><strong>Leaderboards</strong></p><ul><li><a href="https://aider.chat/docs/leaderboards/">aider</a></li><li><a href="https://livebench.ai/#/">LiveBench</a></li><li><a href="https://huggingface.co/spaces/lmarena-ai/chatbot-arena-leaderboard">LM Arena</a></li><li><a href="https://www.kaggle.com/competitions/konwinski-prize/leaderboard">Konwinski Prize</a></li><li><a href="https://arxiv.org/abs/2410.06992">Preprint</a> - SWE-Bench+: Enhanced Coding Benchmark for LLMs</li><li>Cybernews <a href="https://cybernews.com/ai-news/openai-measures-model-engineering-benchmarks-real-upwork-tasks/">article</a> - OpenAI study proves LLMs still behind human engineers in over 1400 real-world tasks</li></ul><p><strong>Other References</strong></p><ul><li>Anthropic <a href="https://www.anthropic.com/news/the-anthropic-economic-index">report</a> - The Anthropic Economic Index</li><li>METR <a href="https://arxiv.org/abs/2503.14499">Report</a> - Measuring AI Ability to Complete Long Tasks</li><li>The Information <a href="https://www.theinformation.com/articles/openai-discusses-building-first-data-center-storage">article</a> - OpenAI Discusses Building Its First Data Center for Storage<ul><li>Deepmind <a href="https://arxiv.org/abs/2112.04426">report</a> backing up this idea</li></ul></li><li>TechCrunch <a href="https://techcrunch.com/2025/03/26/openai-adopts-rival-anthropics-standard-for-connecting-ai-models-to-data/">article</a> - OpenAI adopts rival Anthropic's standard for connecting AI models to data</li><li>Reuters <a href="https://www.reuters.com/technology/artificial-intelligence/openai-meta-talks-with-reliance-ai-partnerships-information-reports-2025-03-22/">article</a> - OpenAI, Meta in talks with Reliance for AI partnerships, The Information reports</li><li>2024 AI Index <a href="https://hai.stanford.edu/ai-index/2024-ai-index-report">report</a></li><li>NDTV <a href="https://www.ndtv.com/world-news/ghibli-style-images-to-memes-white-house-embraces-alt-right-online-culture-8066199">article</a> - Ghibli-Style Images To Memes: White House Embraces Alt-Right Online Culture</li><li>Elk <a href="https://elk.zone/carhenge.club/@skiles/114203147063483693">post</a> on DOGE and AI</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>DeepSeek has been out for over 2 months now, and things have begun to settle down. We take this opportunity to contextualize the developments that have occurred in its wake, both within the AI industry and the world economy. As systems get more "agentic" and users are willing to spend increasing amounts of time waiting for their outputs, the value of supposed "reasoning" models continues to be peddled by AI system developers, but does the data really back these claims?</p><p>Check out our DeepSeek <a href="https://kairos.fm/muckraikers/b012/">minisode</a> for a snappier overview!</p><p>EPISODE RECORDED 2025.03.30</p><p><br></p><ul><li>(00:40) - DeepSeek R1 recap</li>
<li>(02:46) - What makes it new?</li>
<li>(08:53) - What is reasoning?</li>
<li>(14:51) - Limitations of reasoning models (why we hate reasoning)</li>
<li>(31:16) - Claims about R1 training on OpenAI</li>
<li>(37:30) - “Deep Research”</li>
<li>(49:13) - Developments and drama in the AI industry</li>
<li>(56:26) - Proposed economic value</li>
<li>(01:14:20) - US government involvement</li>
<li>(01:23:28) - OpenAI uses MCP</li>
<li>(01:28:15) - Outro</li>
</ul><br><strong><br>Links</strong><ul><li>DeepSeek <a href="https://www.deepseek.com/">website</a></li><li>DeepSeek <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf">paper</a></li><li>DeepSeek <a href="https://api-docs.deepseek.com/quick_start/pricing">docs</a> - Models and Pricing</li><li>DeepSeek <a href="https://github.com/deepseek-ai/3FS">repo</a> - 3FS</li></ul><p><strong>Understanding DeepSeek/DeepResearch</strong></p><ul><li>Explainers<ul><li>Language Models &amp; Co. <a href="https://newsletter.languagemodels.co/p/the-illustrated-deepseek-r1">article</a> - The Illustrated DeepSeek-R1</li><li>Towards Data Science <a href="https://towardsdatascience.com/deepseek-v3-explained-1-multi-head-latent-attention-ed6bee2a67c4/">article</a> - DeepSeek-V3 Explained 1: Multi-head Latent Attention</li><li>Jina.ai <a href="https://jina.ai/news/a-practical-guide-to-implementing-deepsearch-deepresearch/">article</a> - A Practical Guide to Implementing DeepSearch/DeepResearch</li><li>Han, Not Solo <a href="https://leehanchung.github.io/blogs/2025/02/26/deep-research/">blogpost</a> - The Differences between Deep Research, Deep Research, and Deep Research</li></ul></li><li>Analysis and Research<ul><li><a href="https://arxiv.org/abs/2503.20783">Preprint</a> - Understanding R1-Zero-Like Training: A Critical Perspective</li><li><a href="https://oatllm.notion.site/oat-zero">Blogpost</a> - There May Not be Aha Moment in R1-Zero-like Training — A Pilot Study</li><li><a href="https://arxiv.org/abs/2407.21787">Preprint</a> - Large Language Monkeys: Scaling Inference Compute with Repeated Sampling</li><li><a href="https://arxiv.org/abs/2503.08679">Preprint</a> - Chain-of-Thought Reasoning In The Wild Is Not Always Faithful</li></ul></li></ul><p><strong>Fallout coverage</strong></p><ul><li>TechCrunch <a href="https://techcrunch.com/2025/03/13/openai-calls-deepseek-state-controlled-calls-for-bans-on-prc-produced-models/">article</a> - OpenAI calls DeepSeek 'state-controlled,' calls for bans on 'PRC-produced' models</li><li>The Verge <a href="https://www.theverge.com/news/601195/openai-evidence-deepseek-distillation-ai-data">article</a> - OpenAI has evidence that its models helped train China’s DeepSeek</li><li>Interesting Engineer <a href="https://interestingengineering.com/culture/deepseeks-ai-training-cost-billion?group=test_a">article</a> - $6M myth: DeepSeek’s true AI cost is 216x higher at $1.3B, research reveals</li><li>Ars Technica <a href="https://arstechnica.com/ai/2025/01/microsoft-embraces-openai-competitor-deepseek-on-its-ai-hosting-service/">article</a> - Microsoft now hosts AI model accused of copying OpenAI data</li><li>The Signal <a href="https://signalscv.com/2025/01/nvidia-loses-nearly-600-billion-in-deepseek-crash/">article</a> - Nvidia loses nearly $600 billion in DeepSeek crash</li><li>Yahoo Finance <a href="https://finance.yahoo.com/news/the-magnificent-7-stocks-are-having-their-worst-quarter-in-more-than-2-years-190335944.html">article</a> - The 'Magnificent 7' stocks are having their worst quarter in more than 2 years</li><li>Reuters <a href="https://www.reuters.com/technology/microsoft-pulls-back-more-data-center-leases-us-europe-analysts-say-2025-03-26/">article</a> - Microsoft pulls back from more data center leases in US and Europe, analysts say</li></ul><p><strong>US governance</strong></p><ul><li>National Law Review <a href="https://natlawreview.com/article/three-states-ban-deepseek-use-state-devices-and-networks">article</a> - Three States Ban DeepSeek 
Use on State Devices and Networks</li><li>CNN <a href="https://edition.cnn.com/2025/02/06/tech/deepseek-ai-us-ban-bill/index.html">article</a> - US lawmakers want to ban DeepSeek from government devices</li><li>House <a href="https://www.congress.gov/bill/119th-congress/house-bill/1121">bill</a> - No DeepSeek on Government Devices Act</li><li>Senate <a href="https://www.congress.gov/bill/119th-congress/senate-bill/321">bill</a> - Decoupling America's Artificial Intelligence Capabilities from China Act of 2025</li></ul><p><strong>Leaderboards</strong></p><ul><li><a href="https://aider.chat/docs/leaderboards/">aider</a></li><li><a href="https://livebench.ai/#/">LiveBench</a></li><li><a href="https://huggingface.co/spaces/lmarena-ai/chatbot-arena-leaderboard">LM Arena</a></li><li><a href="https://www.kaggle.com/competitions/konwinski-prize/leaderboard">Konwinski Prize</a></li><li><a href="https://arxiv.org/abs/2410.06992">Preprint</a> - SWE-Bench+: Enhanced Coding Benchmark for LLMs</li><li>Cybernews <a href="https://cybernews.com/ai-news/openai-measures-model-engineering-benchmarks-real-upwork-tasks/">article</a> - OpenAI study proves LLMs still behind human engineers in over 1400 real-world tasks</li></ul><p><strong>Other References</strong></p><ul><li>Anthropic <a href="https://www.anthropic.com/news/the-anthropic-economic-index">report</a> - The Anthropic Economic Index</li><li>METR <a href="https://arxiv.org/abs/2503.14499">Report</a> - Measuring AI Ability to Complete Long Tasks</li><li>The Information <a href="https://www.theinformation.com/articles/openai-discusses-building-first-data-center-storage">article</a> - OpenAI Discusses Building Its First Data Center for Storage<ul><li>Deepmind <a href="https://arxiv.org/abs/2112.04426">report</a> backing up this idea</li></ul></li><li>TechCrunch <a href="https://techcrunch.com/2025/03/26/openai-adopts-rival-anthropics-standard-for-connecting-ai-models-to-data/">article</a> - OpenAI adopts rival Anthropic's standard for connecting AI models to data</li><li>Reuters <a href="https://www.reuters.com/technology/artificial-intelligence/openai-meta-talks-with-reliance-ai-partnerships-information-reports-2025-03-22/">article</a> - OpenAI, Meta in talks with Reliance for AI partnerships, The Information reports</li><li>2024 AI Index <a href="https://hai.stanford.edu/ai-index/2024-ai-index-report">report</a></li><li>NDTV <a href="https://www.ndtv.com/world-news/ghibli-style-images-to-memes-white-house-embraces-alt-right-online-culture-8066199">article</a> - Ghibli-Style Images To Memes: White House Embraces Alt-Right Online Culture</li><li>Elk <a href="https://elk.zone/carhenge.club/@skiles/114203147063483693">post</a> on DOGE and AI</li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 09 Apr 2025 09:10:11 -0600</pubDate>
      <author>Jacob Haimes and Igor Krawczuk</author>
      <enclosure url="https://op3.dev/e/media.transistor.fm/b3bf778f/0f502957.mp3" length="87927812" type="audio/mpeg"/>
      <itunes:author>Jacob Haimes and Igor Krawczuk</itunes:author>
      <itunes:duration>5491</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>DeepSeek has been out for over 2 months now, and things have begun to settle down. We take this opportunity to contextualize the developments that have occurred in its wake, both within the AI industry and the world economy. As systems get more "agentic" and users are willing to spend increasing amounts of time waiting for their outputs, the value of supposed "reasoning" models continues to be peddled by AI system developers, but does the data really back these claims?</p><p>Check out our DeepSeek <a href="https://kairos.fm/muckraikers/b012/">minisode</a> for a snappier overview!</p><p>EPISODE RECORDED 2025.03.30</p><p><br></p><ul><li>(00:40) - DeepSeek R1 recap</li>
<li>(02:46) - What makes it new?</li>
<li>(08:53) - What is reasoning?</li>
<li>(14:51) - Limitations of reasoning models (why we hate reasoning)</li>
<li>(31:16) - Claims about R1 training on Open AI</li>
<li>(37:30) - “Deep Research”</li>
<li>(49:13) - Developments and drama in the AI industry</li>
<li>(56:26) - Proposed economic value</li>
<li>(01:14:20) - US government involvement</li>
<li>(01:23:28) - OpenAI uses MCP</li>
<li>(01:28:15) - Outro</li>
</ul><br><strong><br>Links</strong><ul><li>DeepSeek <a href="https://www.deepseek.com/">website</a></li><li>DeepSeek <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf">paper</a></li><li>DeepSeek <a href="https://api-docs.deepseek.com/quick_start/pricing">docs</a> - Models and Pricing</li><li>DeepSeek <a href="https://github.com/deepseek-ai/3FS">repo</a> - 3FS</li></ul><p><strong>Understanding DeepSeek/DeepResearch</strong></p><ul><li>Explainers<ul><li>Language Models &amp; Co. <a href="https://newsletter.languagemodels.co/p/the-illustrated-deepseek-r1">article</a> - The Illustrated DeepSeek-R1</li><li>Towards Data Science <a href="https://towardsdatascience.com/deepseek-v3-explained-1-multi-head-latent-attention-ed6bee2a67c4/">article</a> - DeepSeek-V3 Explained 1: Multi-head Latent Attention</li><li>Jina.ai <a href="https://jina.ai/news/a-practical-guide-to-implementing-deepsearch-deepresearch/">article</a> - A Practical Guide to Implementing DeepSearch/DeepResearch</li><li>Han, Not Solo <a href="https://leehanchung.github.io/blogs/2025/02/26/deep-research/">blogpost</a> - The Differences between Deep Research, Deep Research, and Deep Research</li></ul></li><li>Analysis and Research<ul><li><a href="https://arxiv.org/abs/2503.20783">Preprint</a> - Understanding R1-Zero-Like Training: A Critical Perspective</li><li><a href="https://oatllm.notion.site/oat-zero">Blogpost</a> - There May Not be Aha Moment in R1-Zero-like Training — A Pilot Study</li><li><a href="https://arxiv.org/abs/2407.21787">Preprint</a> - Large Language Monkeys: Scaling Inference Compute with Repeated Sampling</li><li><a href="https://arxiv.org/abs/2503.08679">Preprint</a> - Chain-of-Thought Reasoning In The Wild Is Not Always Faithful</li></ul></li></ul><p><strong>Fallout coverage</strong></p><ul><li>TechCrunch <a href="https://techcrunch.com/2025/03/13/openai-calls-deepseek-state-controlled-calls-for-bans-on-prc-produced-models/">article</a> - OpenAI calls DeepSeek 'state-controlled,' calls for bans on 'PRC-produced' models</li><li>The Verge <a href="https://www.theverge.com/news/601195/openai-evidence-deepseek-distillation-ai-data">article</a> - OpenAI has evidence that its models helped train China’s DeepSeek</li><li>Interesting Engineer <a href="https://interestingengineering.com/culture/deepseeks-ai-training-cost-billion?group=test_a">article</a> - $6M myth: DeepSeek’s true AI cost is 216x higher at $1.3B, research reveals</li><li>Ars Technica <a href="https://arstechnica.com/ai/2025/01/microsoft-embraces-openai-competitor-deepseek-on-its-ai-hosting-service/">article</a> - Microsoft now hosts AI model accused of copying OpenAI data</li><li>The Signal <a href="https://signalscv.com/2025/01/nvidia-loses-nearly-600-billion-in-deepseek-crash/">article</a> - Nvidia loses nearly $600 billion in DeepSeek crash</li><li>Yahoo Finance <a href="https://finance.yahoo.com/news/the-magnificent-7-stocks-are-having-their-worst-quarter-in-more-than-2-years-190335944.html">article</a> - The 'Magnificent 7' stocks are having their worst quarter in more than 2 years</li><li>Reuters <a href="https://www.reuters.com/technology/microsoft-pulls-back-more-data-center-leases-us-europe-analysts-say-2025-03-26/">article</a> - Microsoft pulls back from more data center leases in US and Europe, analysts say</li></ul><p><strong>US governance</strong></p><ul><li>National Law Review <a href="https://natlawreview.com/article/three-states-ban-deepseek-use-state-devices-and-networks">article</a> - Three States Ban DeepSeek 
Use on State Devices and Networks</li><li>CNN <a href="https://edition.cnn.com/2025/02/06/tech/deepseek-ai-us-ban-bill/index.html">article</a> - US lawmakers want to ban DeepSeek from government devices</li><li>House <a href="https://www.congress.gov/bill/119th-congress/house-bill/1121">bill</a> - No DeepSeek on Government Devices Act</li><li>Senate <a href="https://www.congress.gov/bill/119th-congress/senate-bill/321">bill</a> - Decoupling America's Artificial Intelligence Capabilities from China Act of 2025</li></ul><p><strong>Leaderboards</strong></p><ul><li><a href="https://aider.chat/docs/leaderboards/">aider</a></li><li><a href="https://livebench.ai/#/">LiveBench</a></li><li><a href="https://huggingface.co/spaces/lmarena-ai/chatbot-arena-leaderboard">LM Arena</a></li><li><a href="https://www.kaggle.com/competitions/konwinski-prize/leaderboard">Konwinski Prize</a></li><li><a href="https://arxiv.org/abs/2410.06992">Preprint</a> - SWE-Bench+: Enhanced Coding Benchmark for LLMs</li><li>Cybernews <a href="https://cybernews.com/ai-news/openai-measures-model-engineering-benchmarks-real-upwork-tasks/">article</a> - OpenAI study proves LLMs still behind human engineers in over 1400 real-world tasks</li></ul><p><strong>Other References</strong></p><ul><li>Anthropic <a href="https://www.anthropic.com/news/the-anthropic-economic-index">report</a> - The Anthropic Economic Index</li><li>METR <a href="https://arxiv.org/abs/2503.14499">Report</a> - Measuring AI Ability to Complete Long Tasks</li><li>The Information <a href="https://www.theinformation.com/articles/openai-discusses-building-first-data-center-storage">article</a> - OpenAI Discusses Building Its First Data Center for Storage<ul><li>Deepmind <a href="https://arxiv.org/abs/2112.04426">report</a> backing up this idea</li></ul></li><li>TechCrunch <a href="https://techcrunch.com/2025/03/26/openai-adopts-rival-anthropics-standard-for-connecting-ai-models-to-data/">article</a> - OpenAI adopts rival Anthropic's standard for connecting AI models to data</li><li>Reuters <a href="https://www.reuters.com/technology/artificial-intelligence/openai-meta-talks-with-reliance-ai-partnerships-information-reports-2025-03-22/">article</a> - OpenAI, Meta in talks with Reliance for AI partnerships, The Information reports</li><li>2024 AI Index <a href="https://hai.stanford.edu/ai-index/2024-ai-index-report">report</a></li><li>NDTV <a href="https://www.ndtv.com/world-news/ghibli-style-images-to-memes-white-house-embraces-alt-right-online-culture-8066199">article</a> - Ghibli-Style Images To Memes: White House Embraces Alt-Right Online Culture</li><li>Elk <a href="https://elk.zone/carhenge.club/@skiles/114203147063483693">post</a> on DOGE and AI</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, DeepSeek, reasoning, reasoning models, agentic ai, DeepResearch, R1</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:person role="Host" href="https://krawczuk.eu" img="https://img.transistorcdn.com/28Y0jOuxZfj9Mxy7Ap0s5-4FRY6iBNdW0gmiQyl5SAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82M2I1/YzVjMWFmNTlhYzA2/YTgyZGI0YTM3MWY5/Zjc5Mi5qcGVn.jpg">Igor Krawczuk</podcast:person>
      <podcast:person role="Host" href="https://jacob-haimes.github.io" img="https://img.transistorcdn.com/aB1ho3T9pmqLJiyQvdThtfo_OIB680463w__i8MoaRA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMDJl/MzFhZDZkYzc4Yjc0/NTQwNzA4OGRlMGU3/NzU3Ni5wbmc.jpg">Jacob Haimes</podcast:person>
      <podcast:person role="Editor" href="https://muckraikers.transistor.fm/people/chase-precopia">Chase Precopia</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/b3bf778f/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>DeepSeek Minisode</title>
      <itunes:title>DeepSeek Minisode</itunes:title>
      <itunes:episodeType>bonus</itunes:episodeType>
      <guid isPermaLink="false">34a56ea4-bbb5-401f-a69a-63f0cec07a6e</guid>
      <link>https://kairos.fm/muckraikers/b012</link>
      <description>
        <![CDATA[<p>DeepSeek R1 has taken the world by storm, causing a stock market crash and prompting further calls for export controls within the US. Since this story is still very much in development, with follow-up investigations and calls for governance being released almost daily, we thought it best to hold off for a little while longer to be able to tell the whole story. Nonetheless, it's a big story, so we provide a brief overview of all that's out there so far.</p><p></p><ul><li>(00:00) - Recording date</li>
<li>(00:04) - Intro</li>
<li>(00:37) - DeepSeek drop and reactions</li>
<li>(04:27) - Export controls</li>
<li>(08:05) - Skepticism and uncertainty</li>
<li>(14:12) - Outro</li>
</ul><br><strong><br>Links</strong><ul><li>DeepSeek <a href="https://www.deepseek.com/">website</a></li><li>DeepSeek <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf">paper</a></li><li>Reuters <a href="https://www.reuters.com/technology/artificial-intelligence/what-is-deepseek-why-is-it-disrupting-ai-sector-2025-01-27/">article</a> - What is DeepSeek and why is it disrupting the AI sector?</li></ul><p><strong>Fallout coverage</strong></p><ul><li>The Verge <a href="https://www.theverge.com/news/601195/openai-evidence-deepseek-distillation-ai-data">article</a> - OpenAI has evidence that its models helped train China’s DeepSeek</li><li>The Signal <a href="https://signalscv.com/2025/01/nvidia-loses-nearly-600-billion-in-deepseek-crash/">article</a> - Nvidia loses nearly $600 billion in DeepSeek crash</li><li>CNN <a href="https://edition.cnn.com/2025/02/06/tech/deepseek-ai-us-ban-bill/index.html">article</a> - US lawmakers want to ban DeepSeek from government devices</li><li>Fortune <a href="https://fortune.com/2025/01/27/mark-zuckerberg-meta-llama-assembling-war-rooms-engineers-deepseek-ai-china/">article</a> - Meta is reportedly scrambling ‘war rooms’ of engineers to figure out how DeepSeek’s AI is beating everyone else at a fraction of the price</li><li>Dario Amodei's <a href="https://darioamodei.com/on-deepseek-and-export-controls">blogpost</a> - On DeepSeek and Export Controls</li><li>SemiAnalysis <a href="https://semianalysis.com/2025/01/31/deepseek-debates/">article</a> - DeepSeek Debates</li><li>Ars Technica <a href="https://arstechnica.com/ai/2025/01/microsoft-embraces-openai-competitor-deepseek-on-its-ai-hosting-service/">article</a> - Microsoft now hosts AI model accused of copying OpenAI data</li><li>Wiz <a href="https://www.wiz.io/blog/wiz-research-uncovers-exposed-deepseek-database-leak">Blogpost</a> - Wiz Research Uncovers Exposed DeepSeek Database Leaking Sensitive Information, Including Chat History</li></ul><p><strong>Investigations into "reasoning"</strong></p><ul><li><a href="https://oatllm.notion.site/oat-zero">Blogpost</a> - There May Not be Aha Moment in R1-Zero-like Training — A Pilot Study</li><li><a href="https://arxiv.org/abs/2501.19393">Preprint</a> - s1: Simple test-time scaling</li><li><a href="https://arxiv.org/abs/2502.03387">Preprint</a> - LIMO: Less is More for Reasoning</li><li><a href="https://diffuse.one/p/d1-007">Blogpost</a> - Reasoning Reflections</li><li><a href="https://arxiv.org/abs/2501.18576">Preprint</a> - Token-Hungry, Yet Precise: DeepSeek R1 Highlights the Need for Multi-Step Reasoning Over Speed in MATH</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>DeepSeek R1 has taken the world by storm, causing a stock market crash and prompting further calls for export controls within the US. Since this story is still very much in development, with follow-up investigations and calls for governance being released almost daily, we thought it best to hold off for a little while longer to be able to tell the whole story. Nonetheless, it's a big story, so we provide a brief overview of all that's out there so far.</p><p></p><ul><li>(00:00) - Recording date</li>
<li>(00:04) - Intro</li>
<li>(00:37) - DeepSeek drop and reactions</li>
<li>(04:27) - Export controls</li>
<li>(08:05) - Skepticism and uncertainty</li>
<li>(14:12) - Outro</li>
</ul><br><strong><br>Links</strong><ul><li>DeepSeek <a href="https://www.deepseek.com/">website</a></li><li>DeepSeek <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf">paper</a></li><li>Reuters <a href="https://www.reuters.com/technology/artificial-intelligence/what-is-deepseek-why-is-it-disrupting-ai-sector-2025-01-27/">article</a> - What is DeepSeek and why is it disrupting the AI sector?</li></ul><p><strong>Fallout coverage</strong></p><ul><li>The Verge <a href="https://www.theverge.com/news/601195/openai-evidence-deepseek-distillation-ai-data">article</a> - OpenAI has evidence that its models helped train China’s DeepSeek</li><li>The Signal <a href="https://signalscv.com/2025/01/nvidia-loses-nearly-600-billion-in-deepseek-crash/">article</a> - Nvidia loses nearly $600 billion in DeepSeek crash</li><li>CNN <a href="https://edition.cnn.com/2025/02/06/tech/deepseek-ai-us-ban-bill/index.html">article</a> - US lawmakers want to ban DeepSeek from government devices</li><li>Fortune <a href="https://fortune.com/2025/01/27/mark-zuckerberg-meta-llama-assembling-war-rooms-engineers-deepseek-ai-china/">article</a> - Meta is reportedly scrambling ‘war rooms’ of engineers to figure out how DeepSeek’s AI is beating everyone else at a fraction of the price</li><li>Dario Amodei's <a href="https://darioamodei.com/on-deepseek-and-export-controls">blogpost</a> - On DeepSeek and Export Controls</li><li>SemiAnalysis <a href="https://semianalysis.com/2025/01/31/deepseek-debates/">article</a> - DeepSeek Debates</li><li>Ars Technica <a href="https://arstechnica.com/ai/2025/01/microsoft-embraces-openai-competitor-deepseek-on-its-ai-hosting-service/">article</a> - Microsoft now hosts AI model accused of copying OpenAI data</li><li>Wiz <a href="https://www.wiz.io/blog/wiz-research-uncovers-exposed-deepseek-database-leak">Blogpost</a> - Wiz Research Uncovers Exposed DeepSeek Database Leaking Sensitive Information, Including Chat History</li></ul><p><strong>Investigations into "reasoning"</strong></p><ul><li><a href="https://oatllm.notion.site/oat-zero">Blogpost</a> - There May Not be Aha Moment in R1-Zero-like Training — A Pilot Study</li><li><a href="https://arxiv.org/abs/2501.19393">Preprint</a> - s1: Simple test-time scaling</li><li><a href="https://arxiv.org/abs/2502.03387">Preprint</a> - LIMO: Less is More for Reasoning</li><li><a href="https://diffuse.one/p/d1-007">Blogpost</a> - Reasoning Reflections</li><li><a href="https://arxiv.org/abs/2501.18576">Preprint</a> - Token-Hungry, Yet Precise: DeepSeek R1 Highlights the Need for Multi-Step Reasoning Over Speed in MATH</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 10 Feb 2025 10:00:00 -0700</pubDate>
      <author>Jacob Haimes and Igor Krawczuk</author>
      <enclosure url="https://op3.dev/e/media.transistor.fm/5306c253/c3aab7b0.mp3" length="13697849" type="audio/mpeg"/>
      <itunes:author>Jacob Haimes and Igor Krawczuk</itunes:author>
      <itunes:duration>910</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>DeepSeek R1 has taken the world by storm, causing a stock market crash and prompting further calls for export controls within the US. Since this story is still very much in development, with follow-up investigations and calls for governance being released almost daily, we thought it best to hold off for a little while longer to be able to tell the whole story. Nonetheless, it's a big story, so we provide a brief overview of all that's out there so far.</p><p></p><ul><li>(00:00) - Recording date</li>
<li>(00:04) - Intro</li>
<li>(00:37) - DeepSeek drop and reactions</li>
<li>(04:27) - Export controls</li>
<li>(08:05) - Skepticism and uncertainty</li>
<li>(14:12) - Outro</li>
</ul><br><strong><br>Links</strong><ul><li>DeepSeek <a href="https://www.deepseek.com/">website</a></li><li>DeepSeek <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf">paper</a></li><li>Reuters <a href="https://www.reuters.com/technology/artificial-intelligence/what-is-deepseek-why-is-it-disrupting-ai-sector-2025-01-27/">article</a> - What is DeepSeek and why is it disrupting the AI sector?</li></ul><p><strong>Fallout coverage</strong></p><ul><li>The Verge <a href="https://www.theverge.com/news/601195/openai-evidence-deepseek-distillation-ai-data">article</a> - OpenAI has evidence that its models helped train China’s DeepSeek</li><li>The Signal <a href="https://signalscv.com/2025/01/nvidia-loses-nearly-600-billion-in-deepseek-crash/">article</a> - Nvidia loses nearly $600 billion in DeepSeek crash</li><li>CNN <a href="https://edition.cnn.com/2025/02/06/tech/deepseek-ai-us-ban-bill/index.html">article</a> - US lawmakers want to ban DeepSeek from government devices</li><li>Fortune <a href="https://fortune.com/2025/01/27/mark-zuckerberg-meta-llama-assembling-war-rooms-engineers-deepseek-ai-china/">article</a> - Meta is reportedly scrambling ‘war rooms’ of engineers to figure out how DeepSeek’s AI is beating everyone else at a fraction of the price</li><li>Dario Amodei's <a href="https://darioamodei.com/on-deepseek-and-export-controls">blogpost</a> - On DeepSeek and Export Controls</li><li>SemiAnalysis <a href="https://semianalysis.com/2025/01/31/deepseek-debates/">article</a> - DeepSeek Debates</li><li>Ars Technica <a href="https://arstechnica.com/ai/2025/01/microsoft-embraces-openai-competitor-deepseek-on-its-ai-hosting-service/">article</a> - Microsoft now hosts AI model accused of copying OpenAI data</li><li>Wiz <a href="https://www.wiz.io/blog/wiz-research-uncovers-exposed-deepseek-database-leak">Blogpost</a> - Wiz Research Uncovers Exposed DeepSeek Database Leaking Sensitive Information, Including Chat History</li></ul><p><strong>Investigations into "reasoning"</strong></p><ul><li><a href="https://oatllm.notion.site/oat-zero">Blogpost</a> - There May Not be Aha Moment in R1-Zero-like Training — A Pilot Study</li><li><a href="https://arxiv.org/abs/2501.19393">Preprint</a> - s1: Simple test-time scaling</li><li><a href="https://arxiv.org/abs/2502.03387">Preprint</a> - LIMO: Less is More for Reasoning</li><li><a href="https://diffuse.one/p/d1-007">Blogpost</a> - Reasoning Reflections</li><li><a href="https://arxiv.org/abs/2501.18576">Preprint</a> - Token-Hungry, Yet Precise: DeepSeek R1 Highlights the Need for Multi-Step Reasoning Over Speed in MATH</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, DeepSeek, DeepSeek R1</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:person role="Host" href="https://krawczuk.eu" img="https://img.transistorcdn.com/28Y0jOuxZfj9Mxy7Ap0s5-4FRY6iBNdW0gmiQyl5SAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82M2I1/YzVjMWFmNTlhYzA2/YTgyZGI0YTM3MWY5/Zjc5Mi5qcGVn.jpg">Igor Krawczuk</podcast:person>
      <podcast:person role="Host" href="https://jacob-haimes.github.io" img="https://img.transistorcdn.com/aB1ho3T9pmqLJiyQvdThtfo_OIB680463w__i8MoaRA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMDJl/MzFhZDZkYzc4Yjc0/NTQwNzA4OGRlMGU3/NzU3Ni5wbmc.jpg">Jacob Haimes</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/5306c253/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Understanding AI World Models w/ Chris Canal</title>
      <itunes:episode>11</itunes:episode>
      <podcast:episode>11</podcast:episode>
      <itunes:title>Understanding AI World Models w/ Chris Canal</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">61751f48-eab9-4caa-b07f-edd3f92c8b99</guid>
      <link>https://kairos.fm/muckraikers/e011/</link>
      <description>
        <![CDATA[<p>Chris Canal, co-founder of EquiStamp, joins muckrAIkers as our first ever podcast guest! In this ~3.5 hour interview, we discuss intelligence vs. competencies, the importance of test-time compute, moving goalposts, the orthogonality thesis, and much more.</p><p>A seasoned software developer, Chris started EquiStamp as a way to improve our current understanding of model failure modes and capabilities in late 2023. Now a key contractor for METR, EquiStamp evaluates the next generation of LLMs from frontier model developers like OpenAI and Anthropic.</p><p><a href="https://www.equistamp.com/">EquiStamp</a> is hiring, so if you're a software developer interested in a fully remote opportunity with flexible working hours, join the EquiStamp <a href="https://discord.com/invite/tjKApmzndk">Discord server</a> and message Chris directly; oh, and let him know muckrAIkers sent you!</p><p><br></p><ul><li>(00:00) - Recording date</li>
<li>(00:05) - Intro</li>
<li>(00:29) - Hot off the press</li>
<li>(02:17) - Introducing Chris Canal</li>
<li>(19:12) - World/risk models</li>
<li>(35:21) - Competencies + decision making power</li>
<li>(42:09) - Breaking models down</li>
<li>(01:05:06) - Timelines, test time compute</li>
<li>(01:19:17) - Moving goalposts</li>
<li>(01:26:34) - Risk management pre-AGI</li>
<li>(01:46:32) - Happy endings</li>
<li>(01:55:50) - Causal chains</li>
<li>(02:04:49) - Appetite for democracy</li>
<li>(02:20:06) - Tech-frame based fallacies</li>
<li>(02:39:56) - Bringing back real capitalism</li>
<li>(02:45:23) - Orthogonality Thesis  </li>
<li>(03:04:31) - Why we do this</li>
<li>(03:15:36) - Equistamp!</li>
</ul><p><strong><br>Links</strong></p><ul><li><a href="https://www.equistamp.com/">EquiStamp</a></li><li>Chris's <a href="https://x.com/chriscanal4">Twitter</a></li><li>METR <a href="https://metr.org/AI_R_D_Evaluation_Report.pdf">Paper</a> - RE-Bench: Evaluating frontier AI R&amp;D capabilities of language model agents against human experts</li><li>All Trades <a href="https://alltrades.substack.com/p/learning-from-history-preventing">article</a> - Learning from History: Preventing AGI Existential Risks through Policy by Chris Canal</li><li>Better Systems <a href="https://chriscanal.substack.com/p/the-omega-protocol-another-manhattan?r=2ldxa&amp;utm_campaign=post&amp;utm_medium=web&amp;triedRedirect=true">article</a> - The Omega Protocol: Another Manhattan Project</li></ul><p><strong>Superintelligence &amp; Commentary</strong></p><ul><li>Wikipedia <a href="https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies">article</a> - Superintelligence: Paths, Dangers, Strategies by Nick Bostrom</li><li>Reflective Altruism <a href="https://reflectivealtruism.com/2024/05/30/against-the-singularity-hypothesis-part-5-bostrom-on-the-singularity/">article</a> - Against the singularity hypothesis (Part 5: Bostrom on the singularity)</li><li>Into AI Safety <a href="https://kairos.fm/intoaisafety/e019/">Interview</a> - Scaling Democracy w/ Dr. Igor Krawczuk</li></ul><p><strong>Referenced Sources</strong></p><ul><li><a href="https://link.springer.com/book/10.1007/978-3-319-24301-6">Book</a> - Man-made Catastrophes and Risk Information Concealment: Case Studies of Major Disasters and Human Fallibility</li><li>Artificial Intelligence <a href="https://www.sciencedirect.com/science/article/pii/S0004370221000862">Paper</a> - Reward is Enough</li><li>Wikipedia <a href="https://en.wikipedia.org/wiki/Capital_and_Ideology">article</a> - Capital and Ideology by Thomas Piketty</li><li>Wikipedia <a href="https://en.wikipedia.org/wiki/Pantheon_(TV_series)">article</a> - Pantheon</li></ul><p><strong>LeCun on AGI</strong></p><ul><li>"Won't Happen" - Time <a href="https://old.reddit.com/r/singularity/comments/1hp7t2i/yann_lecun_doubles_down_that_agi_wont_happen_in/">article</a> - Meta’s AI Chief Yann LeCun on AGI, Open-Source, and AI Risk</li><li>"But if it does, it'll be my research agenda latent state models, which I happen to research" - Meta Platforms <a href="https://ai.meta.com/blog/yann-lecun-ai-model-i-jepa/">Blogpost</a> - I-JEPA: The first AI model based on Yann LeCun’s vision for more human-like AI</li></ul><p><strong>Other Sources</strong></p><ul><li>Stanford CS <a href="https://cs191w.stanford.edu/projects/Gu,%20Chenchen_CS191W.pdf">Senior Project</a> - Timing Attacks on Prompt Caching in Language Model APIs</li><li>TechCrunch <a href="https://techcrunch.com/2025/01/15/ai-researcher-francois-chollet-founds-a-new-ai-lab-focused-on-agi/">article</a> - AI researcher François Chollet founds a new AI lab focused on AGI</li><li>White House <a href="https://www.whitehouse.gov/briefing-room/statements-releases/2025/01/13/fact-sheet-ensuring-u-s-security-and-economic-strength-in-the-age-of-artificial-intelligence/">Fact Sheet</a> - Ensuring U.S. 
Security and Economic Strength in the Age of Artificial Intelligence</li><li>New York Post <a href="https://nypost.com/2025/01/15/business/lawyer-drops-meta-over-ceo-mark-zuckerbergs-neo-nazi-madness/">article</a> - Bay Area lawyer drops Meta as client over CEO Mark Zuckerberg’s ‘toxic masculinity and Neo-Nazi madness’</li><li>OpenEdition <a href="https://journals.openedition.org/oeconomia/10580">Academic Review</a> of Thomas Piketty</li><li>Neural Processing Letters <a href="https://link.springer.com/article/10.1007/s11063-021-10562-2">Paper</a> - A Survey of Encoding Techniques for Signal Processing in Spiking Neural Networks</li><li>BFI <a href="https://bfi.uchicago.edu/working-paper/do-financial-concerns-make-workers-less-productive">Working Paper</a> - Do Financial Concerns Make Workers Less Productive?</li><li>No Mercy/No Malice <a href="https://www.profgalloway.com/how-to-survive-the-next-four-years/">article</a> - How to Survive the Next Four Years by Scott Galloway</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Chris Canal, co-founder of EquiStamp, joins muckrAIkers as our first ever podcast guest! In this ~3.5 hour interview, we discuss intelligence vs. competencies, the importance of test-time compute, moving goalposts, the orthogonality thesis, and much more.</p><p>A seasoned software developer, Chris started EquiStamp as a way to improve our current understanding of model failure modes and capabilities in late 2023. Now a key contractor for METR, EquiStamp evaluates the next generation of LLMs from frontier model developers like OpenAI and Anthropic.</p><p><a href="https://www.equistamp.com/">EquiStamp</a> is hiring, so if you're a software developer interested in a fully remote opportunity with flexible working hours, join the EquiStamp <a href="https://discord.com/invite/tjKApmzndk">Discord server</a> and message Chris directly; oh, and let him know muckrAIkers sent you!</p><p><br></p><ul><li>(00:00) - Recording date</li>
<li>(00:05) - Intro</li>
<li>(00:29) - Hot off the press</li>
<li>(02:17) - Introducing Chris Canal</li>
<li>(19:12) - World/risk models</li>
<li>(35:21) - Competencies + decision making power</li>
<li>(42:09) - Breaking models down</li>
<li>(01:05:06) - Timelines, test time compute</li>
<li>(01:19:17) - Moving goalposts</li>
<li>(01:26:34) - Risk management pre-AGI</li>
<li>(01:46:32) - Happy endings</li>
<li>(01:55:50) - Causal chains</li>
<li>(02:04:49) - Appetite for democracy</li>
<li>(02:20:06) - Tech-frame based fallacies</li>
<li>(02:39:56) - Bringing back real capitalism</li>
<li>(02:45:23) - Orthogonality Thesis  </li>
<li>(03:04:31) - Why we do this</li>
<li>(03:15:36) - Equistamp!</li>
</ul><p><strong><br>Links</strong></p><ul><li><a href="https://www.equistamp.com/">EquiStamp</a></li><li>Chris's <a href="https://x.com/chriscanal4">Twitter</a></li><li>METR <a href="https://metr.org/AI_R_D_Evaluation_Report.pdf">Paper</a> - RE-Bench: Evaluating frontier AI R&amp;D capabilities of language model agents against human experts</li><li>All Trades <a href="https://alltrades.substack.com/p/learning-from-history-preventing">article</a> - Learning from History: Preventing AGI Existential Risks through Policy by Chris Canal</li><li>Better Systems <a href="https://chriscanal.substack.com/p/the-omega-protocol-another-manhattan?r=2ldxa&amp;utm_campaign=post&amp;utm_medium=web&amp;triedRedirect=true">article</a> - The Omega Protocol: Another Manhattan Project</li></ul><p><strong>Superintelligence &amp; Commentary</strong></p><ul><li>Wikipedia <a href="https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies">article</a> - Superintelligence: Paths, Dangers, Strategies by Nick Bostrom</li><li>Reflective Altruism <a href="https://reflectivealtruism.com/2024/05/30/against-the-singularity-hypothesis-part-5-bostrom-on-the-singularity/">article</a> - Against the singularity hypothesis (Part 5: Bostrom on the singularity)</li><li>Into AI Safety <a href="https://kairos.fm/intoaisafety/e019/">Interview</a> - Scaling Democracy w/ Dr. Igor Krawczuk</li></ul><p><strong>Referenced Sources</strong></p><ul><li><a href="https://link.springer.com/book/10.1007/978-3-319-24301-6">Book</a> - Man-made Catastrophes and Risk Information Concealment: Case Studies of Major Disasters and Human Fallibility</li><li>Artificial Intelligence <a href="https://www.sciencedirect.com/science/article/pii/S0004370221000862">Paper</a> - Reward is Enough</li><li>Wikipedia <a href="https://en.wikipedia.org/wiki/Capital_and_Ideology">article</a> - Capital and Ideology by Thomas Piketty</li><li>Wikipedia <a href="https://en.wikipedia.org/wiki/Pantheon_(TV_series)">article</a> - Pantheon</li></ul><p><strong>LeCun on AGI</strong></p><ul><li>"Won't Happen" - Time <a href="https://old.reddit.com/r/singularity/comments/1hp7t2i/yann_lecun_doubles_down_that_agi_wont_happen_in/">article</a> - Meta’s AI Chief Yann LeCun on AGI, Open-Source, and AI Risk</li><li>"But if it does, it'll be my research agenda latent state models, which I happen to research" - Meta Platforms <a href="https://ai.meta.com/blog/yann-lecun-ai-model-i-jepa/">Blogpost</a> - I-JEPA: The first AI model based on Yann LeCun’s vision for more human-like AI</li></ul><p><strong>Other Sources</strong></p><ul><li>Stanford CS <a href="https://cs191w.stanford.edu/projects/Gu,%20Chenchen_CS191W.pdf">Senior Project</a> - Timing Attacks on Prompt Caching in Language Model APIs</li><li>TechCrunch <a href="https://techcrunch.com/2025/01/15/ai-researcher-francois-chollet-founds-a-new-ai-lab-focused-on-agi/">article</a> - AI researcher François Chollet founds a new AI lab focused on AGI</li><li>White House <a href="https://www.whitehouse.gov/briefing-room/statements-releases/2025/01/13/fact-sheet-ensuring-u-s-security-and-economic-strength-in-the-age-of-artificial-intelligence/">Fact Sheet</a> - Ensuring U.S. 
Security and Economic Strength in the Age of Artificial Intelligence</li><li>New York Post <a href="https://nypost.com/2025/01/15/business/lawyer-drops-meta-over-ceo-mark-zuckerbergs-neo-nazi-madness/">article</a> - Bay Area lawyer drops Meta as client over CEO Mark Zuckerberg’s ‘toxic masculinity and Neo-Nazi madness’</li><li>OpenEdition <a href="https://journals.openedition.org/oeconomia/10580">Academic Review</a> of Thomas Piketty</li><li>Neural Processing Letters <a href="https://link.springer.com/article/10.1007/s11063-021-10562-2">Paper</a> - A Survey of Encoding Techniques for Signal Processing in Spiking Neural Networks</li><li>BFI <a href="https://bfi.uchicago.edu/working-paper/do-financial-concerns-make-workers-less-productive">Working Paper</a> - Do Financial Concerns Make Workers Less Productive?</li><li>No Mercy/No Malice <a href="https://www.profgalloway.com/how-to-survive-the-next-four-years/">article</a> - How to Survive the Next Four Years by Scott Galloway</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 27 Jan 2025 08:00:00 -0700</pubDate>
      <author>Jacob Haimes, Igor Krawczuk, and Chris Canal</author>
      <enclosure url="https://op3.dev/e/media.transistor.fm/ca6e9de7/411a72bc.mp3" length="149352141" type="audio/mpeg"/>
      <itunes:author>Jacob Haimes, Igor Krawczuk, and Chris Canal</itunes:author>
      <itunes:duration>11985</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Chris Canal, co-founder of EquiStamp, joins muckrAIkers as our first ever podcast guest! In this ~3.5 hour interview, we discuss intelligence vs. competencies, the importance of test-time compute, moving goalposts, the orthogonality thesis, and much more.</p><p>A seasoned software developer, Chris started EquiStamp as a way to improve our current understanding of model failure modes and capabilities in late 2023. Now a key contractor for METR, EquiStamp evaluates the next generation of LLMs from frontier model developers like OpenAI and Anthropic.</p><p><a href="https://www.equistamp.com/">EquiStamp</a> is hiring, so if you're a software developer interested in a fully remote opportunity with flexible working hours, join the EquiStamp <a href="https://discord.com/invite/tjKApmzndk">Discord server</a> and message Chris directly; oh, and let him know muckrAIkers sent you!</p><p><br></p><ul><li>(00:00) - Recording date</li>
<li>(00:05) - Intro</li>
<li>(00:29) - Hot off the press</li>
<li>(02:17) - Introducing Chris Canal</li>
<li>(19:12) - World/risk models</li>
<li>(35:21) - Competencies + decision making power</li>
<li>(42:09) - Breaking models down</li>
<li>(01:05:06) - Timelines, test time compute</li>
<li>(01:19:17) - Moving goalposts</li>
<li>(01:26:34) - Risk management pre-AGI</li>
<li>(01:46:32) - Happy endings</li>
<li>(01:55:50) - Causal chains</li>
<li>(02:04:49) - Appetite for democracy</li>
<li>(02:20:06) - Tech-frame based fallacies</li>
<li>(02:39:56) - Bringing back real capitalism</li>
<li>(02:45:23) - Orthogonality Thesis  </li>
<li>(03:04:31) - Why we do this</li>
<li>(03:15:36) - Equistamp!</li>
</ul><p><strong><br>Links</strong></p><ul><li><a href="https://www.equistamp.com/">EquiStamp</a></li><li>Chris's <a href="https://x.com/chriscanal4">Twitter</a></li><li>METR <a href="https://metr.org/AI_R_D_Evaluation_Report.pdf">Paper</a> - RE-Bench: Evaluating frontier AI R&amp;D capabilities of language model agents against human experts</li><li>All Trades <a href="https://alltrades.substack.com/p/learning-from-history-preventing">article</a> - Learning from History: Preventing AGI Existential Risks through Policy by Chris Canal</li><li>Better Systems <a href="https://chriscanal.substack.com/p/the-omega-protocol-another-manhattan?r=2ldxa&amp;utm_campaign=post&amp;utm_medium=web&amp;triedRedirect=true">article</a> - The Omega Protocol: Another Manhattan Project</li></ul><p><strong>Superintelligence &amp; Commentary</strong></p><ul><li>Wikipedia <a href="https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies">article</a> - Superintelligence: Paths, Dangers, Strategies by Nick Bostrom</li><li>Reflective Altruism <a href="https://reflectivealtruism.com/2024/05/30/against-the-singularity-hypothesis-part-5-bostrom-on-the-singularity/">article</a> - Against the singularity hypothesis (Part 5: Bostrom on the singularity)</li><li>Into AI Safety <a href="https://kairos.fm/intoaisafety/e019/">Interview</a> - Scaling Democracy w/ Dr. Igor Krawczuk</li></ul><p><strong>Referenced Sources</strong></p><ul><li><a href="https://link.springer.com/book/10.1007/978-3-319-24301-6">Book</a> - Man-made Catastrophes and Risk Information Concealment: Case Studies of Major Disasters and Human Fallibility</li><li>Artificial Intelligence <a href="https://www.sciencedirect.com/science/article/pii/S0004370221000862">Paper</a> - Reward is Enough</li><li>Wikipedia <a href="https://en.wikipedia.org/wiki/Capital_and_Ideology">article</a> - Capital and Ideology by Thomas Piketty</li><li>Wikipedia <a href="https://en.wikipedia.org/wiki/Pantheon_(TV_series)">article</a> - Pantheon</li></ul><p><strong>LeCun on AGI</strong></p><ul><li>"Won't Happen" - Time <a href="https://old.reddit.com/r/singularity/comments/1hp7t2i/yann_lecun_doubles_down_that_agi_wont_happen_in/">article</a> - Meta’s AI Chief Yann LeCun on AGI, Open-Source, and AI Risk</li><li>"But if it does, it'll be my research agenda latent state models, which I happen to research" - Meta Platforms <a href="https://ai.meta.com/blog/yann-lecun-ai-model-i-jepa/">Blogpost</a> - I-JEPA: The first AI model based on Yann LeCun’s vision for more human-like AI</li></ul><p><strong>Other Sources</strong></p><ul><li>Stanford CS <a href="https://cs191w.stanford.edu/projects/Gu,%20Chenchen_CS191W.pdf">Senior Project</a> - Timing Attacks on Prompt Caching in Language Model APIs</li><li>TechCrunch <a href="https://techcrunch.com/2025/01/15/ai-researcher-francois-chollet-founds-a-new-ai-lab-focused-on-agi/">article</a> - AI researcher François Chollet founds a new AI lab focused on AGI</li><li>White House <a href="https://www.whitehouse.gov/briefing-room/statements-releases/2025/01/13/fact-sheet-ensuring-u-s-security-and-economic-strength-in-the-age-of-artificial-intelligence/">Fact Sheet</a> - Ensuring U.S. 
Security and Economic Strength in the Age of Artificial Intelligence</li><li>New York Post <a href="https://nypost.com/2025/01/15/business/lawyer-drops-meta-over-ceo-mark-zuckerbergs-neo-nazi-madness/">article</a> - Bay Area lawyer drops Meta as client over CEO Mark Zuckerberg’s ‘toxic masculinity and Neo-Nazi madness’</li><li>OpenEdition <a href="https://journals.openedition.org/oeconomia/10580">Academic Review</a> of Thomas Piketty</li><li>Neural Processing Letters <a href="https://link.springer.com/article/10.1007/s11063-021-10562-2">Paper</a> - A Survey of Encoding Techniques for Signal Processing in Spiking Neural Networks</li><li>BFI <a href="https://bfi.uchicago.edu/working-paper/do-financial-concerns-make-workers-less-productive">Working Paper</a> - Do Financial Concerns Make Workers Less Productive?</li><li>No Mercy/No Malice <a href="https://www.profgalloway.com/how-to-survive-the-next-four-years/">article</a> - How to Survive the Next Four Years by Scott Galloway</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, METR, evals, EquiStamp, Chris Canal, World Models, Risk Models, Superintelligence, AI Risks, Orthogonality Thesis</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:person role="Host" href="https://krawczuk.eu" img="https://img.transistorcdn.com/28Y0jOuxZfj9Mxy7Ap0s5-4FRY6iBNdW0gmiQyl5SAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82M2I1/YzVjMWFmNTlhYzA2/YTgyZGI0YTM3MWY5/Zjc5Mi5qcGVn.jpg">Igor Krawczuk</podcast:person>
      <podcast:person role="Host" href="https://jacob-haimes.github.io" img="https://img.transistorcdn.com/aB1ho3T9pmqLJiyQvdThtfo_OIB680463w__i8MoaRA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMDJl/MzFhZDZkYzc4Yjc0/NTQwNzA4OGRlMGU3/NzU3Ni5wbmc.jpg">Jacob Haimes</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/ca6e9de7/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>NeurIPS 2024 Wrapped 🌯</title>
      <itunes:episode>10</itunes:episode>
      <podcast:episode>10</podcast:episode>
      <itunes:title>NeurIPS 2024 Wrapped 🌯</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">323e4af8-4bf3-47f1-9cf8-ed8a719d39f3</guid>
      <link>https://kairos.fm/muckraikers/e010/</link>
      <description>
        <![CDATA[<p>What happens when you bring over 15,000 machine learning nerds to one city? If your guess didn't include racism, sabotage and scandal, belated epiphanies, a spicy SoLaR panel, and many fantastic research papers, you wouldn't have captured my experience. In this episode we discuss the drama and takeaways from NeurIPS 2024.</p><p><em>Posters available at time of episode preparation can be found on the episode </em><a href="https://kairos.fm/muckraikers/e010/"><em>webpage</em></a><em>.</em></p><p>EPISODE RECORDED 2024.12.22</p><p><br></p><ul><li>(00:00) - Recording date</li>
<li>(00:05) - Intro</li>
<li>(00:44) - Obligatory mentions</li>
<li>(01:54) - SoLaR panel</li>
<li>(18:43) - Test of Time</li>
<li>(24:17) - And now: science!</li>
<li>(28:53) - Downsides of benchmarks</li>
<li>(41:39) - Improving the science of ML</li>
<li>(53:07) - Performativity</li>
<li>(57:33) - NopenAI and Nanthropic</li>
<li>(01:09:35) - Fun/interesting papers</li>
<li>(01:13:12) - Initial takes on o3</li>
<li>(01:18:12) - WorkArena</li>
<li>(01:25:00) - Outro</li>
</ul><br><strong><br>Links</strong><p><em>Note: many workshop papers had not yet been published to arXiv as of preparing this episode, the OpenReview submission page is provided in these cases. </em></p><ul><li>NeurIPS <a href="https://neurips.cc/Conferences/2024/StatementOnInclusivity">statement</a> on inclusivity</li><li>CTOL Digital Solutions <a href="https://www.ctol.digital/news/neurips-2024-controversy-mit-professor-remarks-chinese-researchers-triumphs/">article</a> - NeurIPS 2024 Sparks Controversy: MIT Professor's Remarks Ignite "Racism" Backlash Amid Chinese Researchers’ Triumphs</li><li>(1/2) NeurIPS Best <a href="https://arxiv.org/abs/2404.02905">Paper</a> - Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction</li><li>Visual Autoregressive Model <a href="https://var-integrity-report.github.io/">report</a> <em>this link now provides a 404 error</em><ul><li>Don't worry, here it is on <a href="https://archive.is/5GklT">archive.is</a></li></ul></li><li>Reuters <a href="https://www.reuters.com/technology/artificial-intelligence/bytedance-seeks-11-mln-damages-intern-ai-breach-case-report-says-2024-11-28/">article</a> - ByteDance seeks $1.1 mln damages from intern in AI breach case, report says</li><li>CTOL Digital Solutions <a href="https://www.ctol.digital/news/ai-genius-neurips-win-bytedance-legal-battle/">article</a> - NeurIPS Award Winner Entangled in ByteDance's AI Sabotage Accusations: The Two Tales of an AI Genius</li><li>Reddit post on Ilya's <a href="https://www.reddit.com/r/singularity/comments/1hdrjvq/ilyas_full_talk_at_neurips_2024_pretraining_as_we/">talk</a></li><li>SoLaR workshop <a href="https://solar-neurips.github.io/">page</a></li></ul><p><strong>Referenced Sources</strong></p><ul><li>Harvard Data Science Review <a href="https://hdsr.mitpress.mit.edu/pub/g9mau4m0/release/2">article</a> - Data Science at the Singularity</li><li><a href="https://arxiv.org/abs/2204.10817">Paper</a> - Reward Reports for Reinforcement Learning</li><li><a href="https://arxiv.org/abs/2002.09398">Paper</a> - It's Not What Machines Can Learn, It's What We Cannot Teach</li><li><a href="https://arxiv.org/abs/2003.12206">Paper</a> - NeurIPS Reproducibility Program</li><li><a href="https://arxiv.org/abs/2003.08505">Paper</a> - A Metric Learning Reality Check</li></ul><p><strong>Improving Datasets, Benchmarks, and Measurements</strong></p><ul><li>Tutorial <a href="https://neurips.cc/virtual/2024/tutorial/99528">video</a> + <a href="https://neurips.cc/media/neurips-2024/Slides/99528_aXgzqdX.pdf">slides</a> - Experimental Design and Analysis for AI Researchers <em>(I think you need to have attended NeurIPS to access the recording, but I couldn't find a different version)</em></li><li><a href="https://betterbench.stanford.edu/">Paper</a> - BetterBench: Assessing AI Benchmarks, Uncovering Issues, and Establishing Best Practices</li><li><a href="https://www.safetywashing.ai/">Paper</a> - Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?</li><li><a href="https://arxiv.org/abs/2411.00266">Paper</a> - A Systematic Review of NeurIPS Dataset Management Practices</li><li><a href="https://arxiv.org/abs/2410.22473">Paper</a> - The State of Data Curation at NeurIPS: An Assessment of Dataset Development Practices in the Datasets and Benchmarks Track</li><li><a href="https://arxiv.org/abs/2410.24100">Paper</a> - Benchmark Repositories for Better Benchmarking</li><li><a 
href="https://research.google/blog/croissant-a-metadata-format-for-ml-ready-datasets/">Paper</a> - Croissant: A Metadata Format for ML-Ready Datasets</li><li><a href="https://arxiv.org/abs/2406.09867">Paper</a> - Rethinking the Evaluation of Out-of-Distribution Detection: A Sorites Paradox</li><li><a href="https://arxiv.org/abs/2411.10939">Paper</a> - Evaluating Generative AI Systems is a Social Science Measurement Challenge</li><li><a href="https://arxiv.org/abs/2409.00844">Paper</a> - Report Cards: Qualitative Evaluation of LLMs</li></ul><p><strong>Governance Related</strong></p><ul><li><a href="https://arxiv.org/abs/2412.03824">Paper</a> - Towards Data Governance of Frontier AI Models</li><li><a href="https://openreview.net/forum?id=St6azqVuqs">Paper</a> - Ways Forward for Global AI Benefit Sharing</li><li><a href="https://arxiv.org/abs/2410.02230">Paper</a> - How do we warn downstream model providers of upstream risks?<ul><li>Unified Model Records <a href="https://modelrecord.com/">tool</a></li></ul></li><li><a href="https://openreview.net/forum?id=OeT2vCFqYY">Paper</a> - Policy Dreamer: Diverse Public Policy Creation via Elicitation and Simulation of Human Preferences</li><li><a href="https://arxiv.org/abs/2409.14055">Paper</a> - Monitoring Human Dependence on AI Systems with Reliance Drills</li><li><a href="https://arxiv.org/abs/2411.19211">Paper</a> - On the Ethical Considerations of Generative Agents</li><li><a href="https://openreview.net/forum?id=OvGYbqOEki">Paper</a> - GPAI Evaluation Standards Taskforce: Towards Effective AI Governance</li><li><a href="https://openreview.net/forum?id=EH6SmoChx9">Paper</a> - Levels of Autonomy: Liability in the age of AI Agents</li></ul><p><strong>Certified Bangers + Useful Tools</strong></p><ul><li><a href="https://arxiv.org/abs/2402.07712">Paper</a> - Model Collapse Demystified: The Case of Regression</li><li><a href="https://arxiv.org/abs/2405.19534">Paper</a> - Preference Learning Algorithms Do Not Learn Preference Rankings</li><li>LLM Dataset Inference <a href="https://arxiv.org/abs/2406.06443">paper</a> + <a href="https://github.com/pratyushmaini/llm_dataset_inference/">repo</a></li><li>dattri <a href="https://arxiv.org/abs/2410.04555">paper</a> + <a href="https://github.com/TRAIS-Lab/dattri">repo</a></li><li>DeTikZify <a href="https://arxiv.org/abs/2405.15306">paper</a> + <a href="https://github.com/potamides/DeTikZify">repo</a></li></ul><p><strong>Fun Benchmarks/Datasets</strong></p><ul><li>Paloma <a href="https://arxiv.org/abs/2312.10523">paper</a> + <a href="https://huggingface.co/datasets/allenai/paloma">dataset</a></li><li>RedPajama <a href="https://arxiv.org/abs/2411.12372">paper</a> + <a href="https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2">dataset</a></li><li>Assemblage <a href="https://assemblage-dataset.net/">webpage</a></li><li>WikiDBs <a href="https://wikidbs.github.io/">webpage</a></li><li>WhodunitBench <a href="https://github.com/jun0wanan/WhodunitBench-Murder_Mystery_Games">repo</a></li><li>ApeBench <a href="https://arxiv.org/abs/2411.00180">paper</a> + <a href="https://github.com/tum-pbs/apebench">repo</a></li><li>WorkArena++ <a href="https://arxiv.org/abs/2407.05291">paper</a></li></ul><p><strong>Other Sources</strong></p><ul><li><a href="https://arxiv.org/abs/2412.07066">Paper</a> - The Mirage of Artificial Intelligence Terms of Use Restrictions</li><li><a href="https://d..."></a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>What happens when you bring over 15,000 machine learning nerds to one city? If your guess didn't include racism, sabotage and scandal, belated epiphanies, a spicy SoLaR panel, and many fantastic research papers, you wouldn't have captured my experience. In this episode we discuss the drama and takeaways from NeurIPS 2024.</p><p><em>Posters available at time of episode preparation can be found on the episode </em><a href="https://kairos.fm/muckraikers/e010/"><em>webpage</em></a><em>.</em></p><p>EPISODE RECORDED 2024.12.22</p><p><br></p><ul><li>(00:00) - Recording date</li>
<li>(00:05) - Intro</li>
<li>(00:44) - Obligatory mentions</li>
<li>(01:54) - SoLaR panel</li>
<li>(18:43) - Test of Time</li>
<li>(24:17) - And now: science!</li>
<li>(28:53) - Downsides of benchmarks</li>
<li>(41:39) - Improving the science of ML</li>
<li>(53:07) - Performativity</li>
<li>(57:33) - NopenAI and Nanthropic</li>
<li>(01:09:35) - Fun/interesting papers</li>
<li>(01:13:12) - Initial takes on o3</li>
<li>(01:18:12) - WorkArena</li>
<li>(01:25:00) - Outro</li>
</ul><br><strong><br>Links</strong><p><em>Note: many workshop papers had not yet been published to arXiv as of preparing this episode, the OpenReview submission page is provided in these cases. </em></p><ul><li>NeurIPS <a href="https://neurips.cc/Conferences/2024/StatementOnInclusivity">statement</a> on inclusivity</li><li>CTOL Digital Solutions <a href="https://www.ctol.digital/news/neurips-2024-controversy-mit-professor-remarks-chinese-researchers-triumphs/">article</a> - NeurIPS 2024 Sparks Controversy: MIT Professor's Remarks Ignite "Racism" Backlash Amid Chinese Researchers’ Triumphs</li><li>(1/2) NeurIPS Best <a href="https://arxiv.org/abs/2404.02905">Paper</a> - Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction</li><li>Visual Autoregressive Model <a href="https://var-integrity-report.github.io/">report</a> <em>this link now provides a 404 error</em><ul><li>Don't worry, here it is on <a href="https://archive.is/5GklT">archive.is</a></li></ul></li><li>Reuters <a href="https://www.reuters.com/technology/artificial-intelligence/bytedance-seeks-11-mln-damages-intern-ai-breach-case-report-says-2024-11-28/">article</a> - ByteDance seeks $1.1 mln damages from intern in AI breach case, report says</li><li>CTOL Digital Solutions <a href="https://www.ctol.digital/news/ai-genius-neurips-win-bytedance-legal-battle/">article</a> - NeurIPS Award Winner Entangled in ByteDance's AI Sabotage Accusations: The Two Tales of an AI Genius</li><li>Reddit post on Ilya's <a href="https://www.reddit.com/r/singularity/comments/1hdrjvq/ilyas_full_talk_at_neurips_2024_pretraining_as_we/">talk</a></li><li>SoLaR workshop <a href="https://solar-neurips.github.io/">page</a></li></ul><p><strong>Referenced Sources</strong></p><ul><li>Harvard Data Science Review <a href="https://hdsr.mitpress.mit.edu/pub/g9mau4m0/release/2">article</a> - Data Science at the Singularity</li><li><a href="https://arxiv.org/abs/2204.10817">Paper</a> - Reward Reports for Reinforcement Learning</li><li><a href="https://arxiv.org/abs/2002.09398">Paper</a> - It's Not What Machines Can Learn, It's What We Cannot Teach</li><li><a href="https://arxiv.org/abs/2003.12206">Paper</a> - NeurIPS Reproducibility Program</li><li><a href="https://arxiv.org/abs/2003.08505">Paper</a> - A Metric Learning Reality Check</li></ul><p><strong>Improving Datasets, Benchmarks, and Measurements</strong></p><ul><li>Tutorial <a href="https://neurips.cc/virtual/2024/tutorial/99528">video</a> + <a href="https://neurips.cc/media/neurips-2024/Slides/99528_aXgzqdX.pdf">slides</a> - Experimental Design and Analysis for AI Researchers <em>(I think you need to have attended NeurIPS to access the recording, but I couldn't find a different version)</em></li><li><a href="https://betterbench.stanford.edu/">Paper</a> - BetterBench: Assessing AI Benchmarks, Uncovering Issues, and Establishing Best Practices</li><li><a href="https://www.safetywashing.ai/">Paper</a> - Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?</li><li><a href="https://arxiv.org/abs/2411.00266">Paper</a> - A Systematic Review of NeurIPS Dataset Management Practices</li><li><a href="https://arxiv.org/abs/2410.22473">Paper</a> - The State of Data Curation at NeurIPS: An Assessment of Dataset Development Practices in the Datasets and Benchmarks Track</li><li><a href="https://arxiv.org/abs/2410.24100">Paper</a> - Benchmark Repositories for Better Benchmarking</li><li><a 
href="https://research.google/blog/croissant-a-metadata-format-for-ml-ready-datasets/">Paper</a> - Croissant: A Metadata Format for ML-Ready Datasets</li><li><a href="https://arxiv.org/abs/2406.09867">Paper</a> - Rethinking the Evaluation of Out-of-Distribution Detection: A Sorites Paradox</li><li><a href="https://arxiv.org/abs/2411.10939">Paper</a> - Evaluating Generative AI Systems is a Social Science Measurement Challenge</li><li><a href="https://arxiv.org/abs/2409.00844">Paper</a> - Report Cards: Qualitative Evaluation of LLMs</li></ul><p><strong>Governance Related</strong></p><ul><li><a href="https://arxiv.org/abs/2412.03824">Paper</a> - Towards Data Governance of Frontier AI Models</li><li><a href="https://openreview.net/forum?id=St6azqVuqs">Paper</a> - Ways Forward for Global AI Benefit Sharing</li><li><a href="https://arxiv.org/abs/2410.02230">Paper</a> - How do we warn downstream model providers of upstream risks?<ul><li>Unified Model Records <a href="https://modelrecord.com/">tool</a></li></ul></li><li><a href="https://openreview.net/forum?id=OeT2vCFqYY">Paper</a> - Policy Dreamer: Diverse Public Policy Creation via Elicitation and Simulation of Human Preferences</li><li><a href="https://arxiv.org/abs/2409.14055">Paper</a> - Monitoring Human Dependence on AI Systems with Reliance Drills</li><li><a href="https://arxiv.org/abs/2411.19211">Paper</a> - On the Ethical Considerations of Generative Agents</li><li><a href="https://openreview.net/forum?id=OvGYbqOEki">Paper</a> - GPAI Evaluation Standards Taskforce: Towards Effective AI Governance</li><li><a href="https://openreview.net/forum?id=EH6SmoChx9">Paper</a> - Levels of Autonomy: Liability in the age of AI Agents</li></ul><p><strong>Certified Bangers + Useful Tools</strong></p><ul><li><a href="https://arxiv.org/abs/2402.07712">Paper</a> - Model Collapse Demystified: The Case of Regression</li><li><a href="https://arxiv.org/abs/2405.19534">Paper</a> - Preference Learning Algorithms Do Not Learn Preference Rankings</li><li>LLM Dataset Inference <a href="https://arxiv.org/abs/2406.06443">paper</a> + <a href="https://github.com/pratyushmaini/llm_dataset_inference/">repo</a></li><li>dattri <a href="https://arxiv.org/abs/2410.04555">paper</a> + <a href="https://github.com/TRAIS-Lab/dattri">repo</a></li><li>DeTikZify <a href="https://arxiv.org/abs/2405.15306">paper</a> + <a href="https://github.com/potamides/DeTikZify">repo</a></li></ul><p><strong>Fun Benchmarks/Datasets</strong></p><ul><li>Paloma <a href="https://arxiv.org/abs/2312.10523">paper</a> + <a href="https://huggingface.co/datasets/allenai/paloma">dataset</a></li><li>RedPajama <a href="https://arxiv.org/abs/2411.12372">paper</a> + <a href="https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2">dataset</a></li><li>Assemblage <a href="https://assemblage-dataset.net/">webpage</a></li><li>WikiDBs <a href="https://wikidbs.github.io/">webpage</a></li><li>WhodunitBench <a href="https://github.com/jun0wanan/WhodunitBench-Murder_Mystery_Games">repo</a></li><li>ApeBench <a href="https://arxiv.org/abs/2411.00180">paper</a> + <a href="https://github.com/tum-pbs/apebench">repo</a></li><li>WorkArena++ <a href="https://arxiv.org/abs/2407.05291">paper</a></li></ul><p><strong>Other Sources</strong></p><ul><li><a href="https://arxiv.org/abs/2412.07066">Paper</a> - The Mirage of Artificial Intelligence Terms of Use Restrictions</li><li><a href="https://d..."></a></li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 30 Dec 2024 10:00:00 -0700</pubDate>
      <author>Jacob Haimes and Igor Krawczuk</author>
      <enclosure url="https://op3.dev/e/media.transistor.fm/5075e6ee/f7a77a5b.mp3" length="65680936" type="audio/mpeg"/>
      <itunes:author>Jacob Haimes and Igor Krawczuk</itunes:author>
      <itunes:duration>5217</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>What happens when you bring over 15,000 machine learning nerds to one city? If your guess didn't include racism, sabotage and scandal, belated epiphanies, a spicy SoLaR panel, and many fantastic research papers, you wouldn't have captured my experience. In this episode we discuss the drama and takeaways from NeurIPS 2024.</p><p><em>Posters available at time of episode preparation can be found on the episode </em><a href="https://kairos.fm/muckraikers/e010/"><em>webpage</em></a><em>.</em></p><p>EPISODE RECORDED 2024.12.22</p><p><br></p><ul><li>(00:00) - Recording date</li>
<li>(00:05) - Intro</li>
<li>(00:44) - Obligatory mentions</li>
<li>(01:54) - SoLaR panel</li>
<li>(18:43) - Test of Time</li>
<li>(24:17) - And now: science!</li>
<li>(28:53) - Downsides of benchmarks</li>
<li>(41:39) - Improving the science of ML</li>
<li>(53:07) - Performativity</li>
<li>(57:33) - NopenAI and Nanthropic</li>
<li>(01:09:35) - Fun/interesting papers</li>
<li>(01:13:12) - Initial takes on o3</li>
<li>(01:18:12) - WorkArena</li>
<li>(01:25:00) - Outro</li>
</ul><br><strong><br>Links</strong><p><em>Note: many workshop papers had not yet been published to arXiv as of preparing this episode; the OpenReview submission page is provided in these cases. </em></p><ul><li>NeurIPS <a href="https://neurips.cc/Conferences/2024/StatementOnInclusivity">statement</a> on inclusivity</li><li>CTOL Digital Solutions <a href="https://www.ctol.digital/news/neurips-2024-controversy-mit-professor-remarks-chinese-researchers-triumphs/">article</a> - NeurIPS 2024 Sparks Controversy: MIT Professor's Remarks Ignite "Racism" Backlash Amid Chinese Researchers’ Triumphs</li><li>(1/2) NeurIPS Best <a href="https://arxiv.org/abs/2404.02905">Paper</a> - Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction</li><li>Visual Autoregressive Model <a href="https://var-integrity-report.github.io/">report</a> <em>(this link now provides a 404 error)</em><ul><li>Don't worry, here it is on <a href="https://archive.is/5GklT">archive.is</a></li></ul></li><li>Reuters <a href="https://www.reuters.com/technology/artificial-intelligence/bytedance-seeks-11-mln-damages-intern-ai-breach-case-report-says-2024-11-28/">article</a> - ByteDance seeks $1.1 mln damages from intern in AI breach case, report says</li><li>CTOL Digital Solutions <a href="https://www.ctol.digital/news/ai-genius-neurips-win-bytedance-legal-battle/">article</a> - NeurIPS Award Winner Entangled in ByteDance's AI Sabotage Accusations: The Two Tales of an AI Genius</li><li>Reddit post on Ilya's <a href="https://www.reddit.com/r/singularity/comments/1hdrjvq/ilyas_full_talk_at_neurips_2024_pretraining_as_we/">talk</a></li><li>SoLaR workshop <a href="https://solar-neurips.github.io/">page</a></li></ul><p><strong>Referenced Sources</strong></p><ul><li>Harvard Data Science Review <a href="https://hdsr.mitpress.mit.edu/pub/g9mau4m0/release/2">article</a> - Data Science at the Singularity</li><li><a href="https://arxiv.org/abs/2204.10817">Paper</a> - Reward Reports for Reinforcement Learning</li><li><a href="https://arxiv.org/abs/2002.09398">Paper</a> - It's Not What Machines Can Learn, It's What We Cannot Teach</li><li><a href="https://arxiv.org/abs/2003.12206">Paper</a> - NeurIPS Reproducibility Program</li><li><a href="https://arxiv.org/abs/2003.08505">Paper</a> - A Metric Learning Reality Check</li></ul><p><strong>Improving Datasets, Benchmarks, and Measurements</strong></p><ul><li>Tutorial <a href="https://neurips.cc/virtual/2024/tutorial/99528">video</a> + <a href="https://neurips.cc/media/neurips-2024/Slides/99528_aXgzqdX.pdf">slides</a> - Experimental Design and Analysis for AI Researchers <em>(I think you need to have attended NeurIPS to access the recording, but I couldn't find a different version)</em></li><li><a href="https://betterbench.stanford.edu/">Paper</a> - BetterBench: Assessing AI Benchmarks, Uncovering Issues, and Establishing Best Practices</li><li><a href="https://www.safetywashing.ai/">Paper</a> - Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?</li><li><a href="https://arxiv.org/abs/2411.00266">Paper</a> - A Systematic Review of NeurIPS Dataset Management Practices</li><li><a href="https://arxiv.org/abs/2410.22473">Paper</a> - The State of Data Curation at NeurIPS: An Assessment of Dataset Development Practices in the Datasets and Benchmarks Track</li><li><a href="https://arxiv.org/abs/2410.24100">Paper</a> - Benchmark Repositories for Better Benchmarking</li><li><a 
href="https://research.google/blog/croissant-a-metadata-format-for-ml-ready-datasets/">Paper</a> - Croissant: A Metadata Format for ML-Ready Datasets</li><li><a href="https://arxiv.org/abs/2406.09867">Paper</a> - Rethinking the Evaluation of Out-of-Distribution Detection: A Sorites Paradox</li><li><a href="https://arxiv.org/abs/2411.10939">Paper</a> - Evaluating Generative AI Systems is a Social Science Measurement Challenge</li><li><a href="https://arxiv.org/abs/2409.00844">Paper</a> - Report Cards: Qualitative Evaluation of LLMs</li></ul><p><strong>Governance Related</strong></p><ul><li><a href="https://arxiv.org/abs/2412.03824">Paper</a> - Towards Data Governance of Frontier AI Models</li><li><a href="https://openreview.net/forum?id=St6azqVuqs">Paper</a> - Ways Forward for Global AI Benefit Sharing</li><li><a href="https://arxiv.org/abs/2410.02230">Paper</a> - How do we warn downstream model providers of upstream risks?<ul><li>Unified Model Records <a href="https://modelrecord.com/">tool</a></li></ul></li><li><a href="https://openreview.net/forum?id=OeT2vCFqYY">Paper</a> - Policy Dreamer: Diverse Public Policy Creation via Elicitation and Simulation of Human Preferences</li><li><a href="https://arxiv.org/abs/2409.14055">Paper</a> - Monitoring Human Dependence on AI Systems with Reliance Drills</li><li><a href="https://arxiv.org/abs/2411.19211">Paper</a> - On the Ethical Considerations of Generative Agents</li><li><a href="https://openreview.net/forum?id=OvGYbqOEki">Paper</a> - GPAI Evaluation Standards Taskforce: Towards Effective AI Governance</li><li><a href="https://openreview.net/forum?id=EH6SmoChx9">Paper</a> - Levels of Autonomy: Liability in the age of AI Agents</li></ul><p><strong>Certified Bangers + Useful Tools</strong></p><ul><li><a href="https://arxiv.org/abs/2402.07712">Paper</a> - Model Collapse Demystified: The Case of Regression</li><li><a href="https://arxiv.org/abs/2405.19534">Paper</a> - Preference Learning Algorithms Do Not Learn Preference Rankings</li><li>LLM Dataset Inference <a href="https://arxiv.org/abs/2406.06443">paper</a> + <a href="https://github.com/pratyushmaini/llm_dataset_inference/">repo</a></li><li>dattri <a href="https://arxiv.org/abs/2410.04555">paper</a> + <a href="https://github.com/TRAIS-Lab/dattri">repo</a></li><li>DeTikZify <a href="https://arxiv.org/abs/2405.15306">paper</a> + <a href="https://github.com/potamides/DeTikZify">repo</a></li></ul><p><strong>Fun Benchmarks/Datasets</strong></p><ul><li>Paloma <a href="https://arxiv.org/abs/2312.10523">paper</a> + <a href="https://huggingface.co/datasets/allenai/paloma">dataset</a></li><li>RedPajama <a href="https://arxiv.org/abs/2411.12372">paper</a> + <a href="https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2">dataset</a></li><li>Assemblage <a href="https://assemblage-dataset.net/">webpage</a></li><li>WikiDBs <a href="https://wikidbs.github.io/">webpage</a></li><li>WhodunitBench <a href="https://github.com/jun0wanan/WhodunitBench-Murder_Mystery_Games">repo</a></li><li>ApeBench <a href="https://arxiv.org/abs/2411.00180">paper</a> + <a href="https://github.com/tum-pbs/apebench">repo</a></li><li>WorkArena++ <a href="https://arxiv.org/abs/2407.05291">paper</a></li></ul><p><strong>Other Sources</strong></p><ul><li><a href="https://arxiv.org/abs/2412.07066">Paper</a> - The Mirage of Artificial Intelligence Terms of Use Restrictions</li><li><a href="https://d..."></a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, NeurIPS, NeurIPS 2024, Conference, Machine Learning, Benchmarks</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:person role="Host" href="https://krawczuk.eu" img="https://img.transistorcdn.com/28Y0jOuxZfj9Mxy7Ap0s5-4FRY6iBNdW0gmiQyl5SAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82M2I1/YzVjMWFmNTlhYzA2/YTgyZGI0YTM3MWY5/Zjc5Mi5qcGVn.jpg">Igor Krawczuk</podcast:person>
      <podcast:person role="Host" href="https://jacob-haimes.github.io" img="https://img.transistorcdn.com/aB1ho3T9pmqLJiyQvdThtfo_OIB680463w__i8MoaRA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMDJl/MzFhZDZkYzc4Yjc0/NTQwNzA4OGRlMGU3/NzU3Ni5wbmc.jpg">Jacob Haimes</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/5075e6ee/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>OpenAI's o1 System Card, Literally Migraine Inducing</title>
      <itunes:episode>9</itunes:episode>
      <podcast:episode>9</podcast:episode>
      <itunes:title>OpenAI's o1 System Card, Literally Migraine Inducing</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">e7be9d69-92e2-469a-a346-c09a7f5a121e</guid>
      <link>https://kairos.fm/muckraikers/e009/</link>
      <description>
        <![CDATA[<p>The idea of <em>model cards</em>, which was introduced as a measure to increase transparency and understanding of LLMs, has been perverted into the marketing gimmick characterized by OpenAI's o1 system card. To demonstrate the adversarial stance we believe is necessary to draw meaning from these press-releases-in-disguise, we conduct a close read of the system card. Be warned, there's a lot of muck in this one.</p><p><em>Note: All figures/tables discussed in the podcast can be found on the podcast website at https://kairos.fm/muckraikers/e009/</em></p><p><br></p><ul><li>(00:00) - Recorded 2024.12.08</li>
<li>(00:54) - Actual intro</li>
<li>(03:00) - System cards vs. academic papers</li>
<li>(05:36) - Starting off sus</li>
<li>(08:28) - o1.continued</li>
<li>(12:23) - Rant #1: figure 1</li>
<li>(18:27) - A diamond in the rough</li>
<li>(19:41) - Hiding copyright violations</li>
<li>(21:29) - Rant #2: Jacob on "hallucinations"</li>
<li>(25:55) - More ranting and "hallucination" rate comparison</li>
<li>(31:54) - Fairness, bias, and bad science comms</li>
<li>(35:41) - System, dev, and user prompt jailbreaking</li>
<li>(39:28) - Chain-of-thought and Rao-Blackwellization</li>
<li>(44:43) - "Red-teaming"</li>
<li>(49:00) - Apollo's bit</li>
<li>(51:28) - METR's bit</li>
<li>(59:51) - Pass@???</li>
<li>(01:04:45) - SWE Verified</li>
<li>(01:05:44) - Appendix bias metrics</li>
<li>(01:10:17) - The muck and the meaning</li>
</ul><br><strong><br>Links</strong><ul><li>o1 <a href="https://cdn.openai.com/o1-system-card-20241205.pdf">system card</a></li><li>OpenAI press release <a href="https://openai.com/12-days/">collection</a> - 12 Days of OpenAI</li></ul><p><strong><br>Additional o1 Coverage</strong></p><ul><li>NIST + AISI [report] - US AISI and UK AISI Joint Pre-Deployment Test</li><li>Apollo Research's <a href="https://static1.squarespace.com/static/6593e7097565990e65c886fd/t/67620d38fa0ceb12041ba585/1734479163821/in_context_scheming_paper_v2.pdf">paper</a> - Frontier Models are Capable of In-context Scheming</li><li>VentureBeat <a href="https://venturebeat.com/ai/openai-launches-full-o1-model-with-34-reduced-error-rate-debuts-chatgpt-pro/">article</a> - OpenAI launches full o1 model with image uploads and analysis, debuts ChatGPT Pro</li><li>The Atlantic <a href="https://www.theatlantic.com/technology/archive/2024/12/openai-o1-reasoning-models/680906/?gift=iWa_iB9lkw4UuiWbIbrWGdT4_sPi9gCLOZGiikclbz8&amp;utm_source=copy-link&amp;utm_medium=social&amp;utm_campaign=share">article</a> - The GPT Era Is Already Ending</li></ul><p><strong><br>On Data Labelers</strong></p><ul><li>60 Minutes <a href="https://www.cbsnews.com/news/labelers-training-ai-say-theyre-overworked-underpaid-and-exploited-60-minutes-transcript/">article + video</a> - Labelers training AI say they're overworked, underpaid and exploited by big American tech companies</li><li>Reflections <a href="https://4sonline.org/news_manager.php?page=36940">article</a> - The hidden health dangers of data labeling in AI development</li><li>Privacy International <a href="https://privacyinternational.org/explainer/5357/humans-ai-loop-data-labelers-behind-some-most-powerful-llms-training-datasets">article</a> - Humans in the AI loop: the data labelers behind some of the most powerful LLMs' training datasets</li></ul><p><strong><br>Chain-of-Thought Papers Cited</strong></p><ul><li><a href="https://arxiv.org/abs/2307.13702">Paper</a> - Measuring Faithfulness in Chain-of-Thought Reasoning</li><li><a href="https://arxiv.org/abs/2305.04388">Paper</a> - Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting</li><li><a href="https://arxiv.org/abs/2406.10625">Paper</a> - On the Hardness of Faithful Chain-of-Thought Reasoning in Large Language Models</li><li><a href="https://arxiv.org/abs/2402.04614">Paper</a> - Faithfulness vs. 
Plausibility: On the (Un)Reliability of Explanations from Large Language Models</li></ul><p><strong><br>Other Mentioned/Relevant Sources</strong></p><ul><li>Andy Jones <a href="https://andrewcharlesjones.github.io/journal/rao-blackwellization.html">blogpost</a> - Rao-Blackwellization</li><li><a href="https://arxiv.org/abs/2407.07890">Paper</a> - Training on the Test Task Confounds Evaluation and Emergence</li><li><a href="https://arxiv.org/abs/2412.03556">Paper</a> - Best-of-N Jailbreaking</li><li>Research <a href="https://www.swebench.com/">landing page</a> - SWE Bench</li><li><a href="https://www.kaggle.com/competitions/konwinski-prize">Code Competition</a> - Konwinski Prize</li><li>Lakera <a href="https://gandalf.lakera.ai/do-not-tell">game</a> - Gandalf</li><li>Kate Crawford's <a href="https://yalebooks.yale.edu/book/9780300264630/atlas-of-ai/">Atlas of AI</a></li><li>BlueDot Impact's <a href="https://course.aisafetyfundamentals.com/home/intro-to-tai">course</a> - Intro to Transformative AI</li></ul><p><strong><br>Unrelated Developments</strong></p><ul><li>Cruz's <a href="https://www.commerce.senate.gov/services/files/55267EFF-11A8-4BD6-BE1E-61452A3C48E3">letter</a> to Merrick Garland</li><li>AWS News Blog <a href="https://aws.amazon.com/blogs/aws/introducing-amazon-nova-frontier-intelligence-and-industry-leading-price-performance/">article</a> - Introducing Amazon Nova foundation models: Frontier intelligence and industry leading price performance</li><li>BleepingComputer <a href="https://www.bleepingcomputer.com/news/security/ultralytics-ai-model-hijacked-to-infect-thousands-with-cryptominer/">article</a> - Ultralytics AI model hijacked to infect thousands with cryptominer</li><li>The Register <a href="https://www.theregister.com/2024/12/07/microsoft_copilot_vision/">article</a> - Microsoft teases Copilot Vision, the AI sidekick that judges your tabs</li><li>Fox Business <a href="https://www.foxbusiness.com/technology/openai-ceo-sam-altman-looking-forward-working-trump-admin-says-us-must-build-best-ai-infrastructure">article</a> - OpenAI CEO Sam Altman looking forward to working with Trump admin, says US must build best AI infrastructure</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>The idea of <em>model cards</em>, which was introduced as a measure to increase transparency and understanding of LLMs, has been perverted into the marketing gimmick characterized by OpenAI's o1 system card. To demonstrate the adversarial stance we believe is necessary to draw meaning from these press-releases-in-disguise, we conduct a close read of the system card. Be warned, there's a lot of muck in this one.</p><p><em>Note: All figures/tables discussed in the podcast can be found on the podcast website at https://kairos.fm/muckraikers/e009/</em></p><p><br></p><ul><li>(00:00) - Recorded 2024.12.08</li>
<li>(00:54) - Actual intro</li>
<li>(03:00) - System cards vs. academic papers</li>
<li>(05:36) - Starting off sus</li>
<li>(08:28) - o1.continued</li>
<li>(12:23) - Rant #1: figure 1</li>
<li>(18:27) - A diamond in the rough</li>
<li>(19:41) - Hiding copyright violations</li>
<li>(21:29) - Rant #2: Jacob on "hallucinations"</li>
<li>(25:55) - More ranting and "hallucination" rate comparison</li>
<li>(31:54) - Fairness, bias, and bad science comms</li>
<li>(35:41) - System, dev, and user prompt jailbreaking</li>
<li>(39:28) - Chain-of-thought and Rao-Blackwellization</li>
<li>(44:43) - "Red-teaming"</li>
<li>(49:00) - Apollo's bit</li>
<li>(51:28) - METR's bit</li>
<li>(59:51) - Pass@???</li>
<li>(01:04:45) - SWE Verified</li>
<li>(01:05:44) - Appendix bias metrics</li>
<li>(01:10:17) - The muck and the meaning</li>
</ul><br><strong><br>Links</strong><ul><li>o1 <a href="https://cdn.openai.com/o1-system-card-20241205.pdf">system card</a></li><li>OpenAI press release <a href="https://openai.com/12-days/">collection</a> - 12 Days of OpenAI</li></ul><p><strong><br>Additional o1 Coverage</strong></p><ul><li>NIST + AISI [report] - US AISI and UK AISI Joint Pre-Deployment Test</li><li>Apollo Research's <a href="https://static1.squarespace.com/static/6593e7097565990e65c886fd/t/67620d38fa0ceb12041ba585/1734479163821/in_context_scheming_paper_v2.pdf">paper</a> - Frontier Models are Capable of In-context Scheming</li><li>VentureBeat <a href="https://venturebeat.com/ai/openai-launches-full-o1-model-with-34-reduced-error-rate-debuts-chatgpt-pro/">article</a> - OpenAI launches full o1 model with image uploads and analysis, debuts ChatGPT Pro</li><li>The Atlantic <a href="https://www.theatlantic.com/technology/archive/2024/12/openai-o1-reasoning-models/680906/?gift=iWa_iB9lkw4UuiWbIbrWGdT4_sPi9gCLOZGiikclbz8&amp;utm_source=copy-link&amp;utm_medium=social&amp;utm_campaign=share">article</a> - The GPT Era Is Already Ending</li></ul><p><strong><br>On Data Labelers</strong></p><ul><li>60 Minutes <a href="https://www.cbsnews.com/news/labelers-training-ai-say-theyre-overworked-underpaid-and-exploited-60-minutes-transcript/">article + video</a> - Labelers training AI say they're overworked, underpaid and exploited by big American tech companies</li><li>Reflections <a href="https://4sonline.org/news_manager.php?page=36940">article</a> - The hidden health dangers of data labeling in AI development</li><li>Privacy International <a href="https://privacyinternational.org/explainer/5357/humans-ai-loop-data-labelers-behind-some-most-powerful-llms-training-datasets">article</a> - Humans in the AI loop: the data labelers behind some of the most powerful LLMs' training datasets</li></ul><p><strong><br>Chain-of-Thought Papers Cited</strong></p><ul><li><a href="https://arxiv.org/abs/2307.13702">Paper</a> - Measuring Faithfulness in Chain-of-Thought Reasoning</li><li><a href="https://arxiv.org/abs/2305.04388">Paper</a> - Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting</li><li><a href="https://arxiv.org/abs/2406.10625">Paper</a> - On the Hardness of Faithful Chain-of-Thought Reasoning in Large Language Models</li><li><a href="https://arxiv.org/abs/2402.04614">Paper</a> - Faithfulness vs. 
Plausibility: On the (Un)Reliability of Explanations from Large Language Models</li></ul><p><strong><br>Other Mentioned/Relevant Sources</strong></p><ul><li>Andy Jones <a href="https://andrewcharlesjones.github.io/journal/rao-blackwellization.html">blogpost</a> - Rao-Blackwellization</li><li><a href="https://arxiv.org/abs/2407.07890">Paper</a> - Training on the Test Task Confounds Evaluation and Emergence</li><li><a href="https://arxiv.org/abs/2412.03556">Paper</a> - Best-of-N Jailbreaking</li><li>Research <a href="https://www.swebench.com/">landing page</a> - SWE Bench</li><li><a href="https://www.kaggle.com/competitions/konwinski-prize">Code Competition</a> - Konwinski Prize</li><li>Lakera <a href="https://gandalf.lakera.ai/do-not-tell">game</a> - Gandalf</li><li>Kate Crawford's <a href="https://yalebooks.yale.edu/book/9780300264630/atlas-of-ai/">Atlas of AI</a></li><li>BlueDot Impact's <a href="https://course.aisafetyfundamentals.com/home/intro-to-tai">course</a> - Intro to Transformative AI</li></ul><p><strong><br>Unrelated Developments</strong></p><ul><li>Cruz's <a href="https://www.commerce.senate.gov/services/files/55267EFF-11A8-4BD6-BE1E-61452A3C48E3">letter</a> to Merrick Garland</li><li>AWS News Blog <a href="https://aws.amazon.com/blogs/aws/introducing-amazon-nova-frontier-intelligence-and-industry-leading-price-performance/">article</a> - Introducing Amazon Nova foundation models: Frontier intelligence and industry leading price performance</li><li>BleepingComputer <a href="https://www.bleepingcomputer.com/news/security/ultralytics-ai-model-hijacked-to-infect-thousands-with-cryptominer/">article</a> - Ultralytics AI model hijacked to infect thousands with cryptominer</li><li>The Register <a href="https://www.theregister.com/2024/12/07/microsoft_copilot_vision/">article</a> - Microsoft teases Copilot Vision, the AI sidekick that judges your tabs</li><li>Fox Business <a href="https://www.foxbusiness.com/technology/openai-ceo-sam-altman-looking-forward-working-trump-admin-says-us-must-build-best-ai-infrastructure">article</a> - OpenAI CEO Sam Altman looking forward to working with Trump admin, says US must build best AI infrastructure</li></ul>]]>
      </content:encoded>
      <pubDate>Sun, 22 Dec 2024 19:00:00 -0700</pubDate>
      <author>Jacob Haimes and Igor Krawczuk</author>
      <enclosure url="https://op3.dev/e/media.transistor.fm/c2ec3a0e/32301ce9.mp3" length="54055370" type="audio/mpeg"/>
      <itunes:author>Jacob Haimes and Igor Krawczuk</itunes:author>
      <itunes:duration>4597</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>The idea of <em>model cards</em>, which was introduced as a measure to increase transparency and understanding of LLMs, has been perverted into the marketing gimmick characterized by OpenAI's o1 system card. To demonstrate the adversarial stance we believe is necessary to draw meaning from these press-releases-in-disguise, we conduct a close read of the system card. Be warned, there's a lot of muck in this one.</p><p><em>Note: All figures/tables discussed in the podcast can be found on the podcast website at https://kairos.fm/muckraikers/e009/</em></p><p><br></p><ul><li>(00:00) - Recorded 2024.12.08</li>
<li>(00:54) - Actual intro</li>
<li>(03:00) - System cards vs. academic papers</li>
<li>(05:36) - Starting off sus</li>
<li>(08:28) - o1.continued</li>
<li>(12:23) - Rant #1: figure 1</li>
<li>(18:27) - A diamond in the rough</li>
<li>(19:41) - Hiding copyright violations</li>
<li>(21:29) - Rant #2: Jacob on "hallucinations"</li>
<li>(25:55) - More ranting and "hallucination" rate comparison</li>
<li>(31:54) - Fairness, bias, and bad science comms</li>
<li>(35:41) - System, dev, and user prompt jailbreaking</li>
<li>(39:28) - Chain-of-thought and Rao-Blackwellization</li>
<li>(44:43) - "Red-teaming"</li>
<li>(49:00) - Apollo's bit</li>
<li>(51:28) - METR's bit</li>
<li>(59:51) - Pass@???</li>
<li>(01:04:45) - SWE Verified</li>
<li>(01:05:44) - Appendix bias metrics</li>
<li>(01:10:17) - The muck and the meaning</li>
</ul><br><strong><br>Links</strong><ul><li>o1 <a href="https://cdn.openai.com/o1-system-card-20241205.pdf">system card</a></li><li>OpenAI press release <a href="https://openai.com/12-days/">collection</a> - 12 Days of OpenAI</li></ul><p><strong><br>Additional o1 Coverage</strong></p><ul><li>NIST + AISI [report] - US AISI and UK AISI Joint Pre-Deployment Test</li><li>Apollo Research's <a href="https://static1.squarespace.com/static/6593e7097565990e65c886fd/t/67620d38fa0ceb12041ba585/1734479163821/in_context_scheming_paper_v2.pdf">paper</a> - Frontier Models are Capable of In-context Scheming</li><li>VentureBeat <a href="https://venturebeat.com/ai/openai-launches-full-o1-model-with-34-reduced-error-rate-debuts-chatgpt-pro/">article</a> - OpenAI launches full o1 model with image uploads and analysis, debuts ChatGPT Pro</li><li>The Atlantic <a href="https://www.theatlantic.com/technology/archive/2024/12/openai-o1-reasoning-models/680906/?gift=iWa_iB9lkw4UuiWbIbrWGdT4_sPi9gCLOZGiikclbz8&amp;utm_source=copy-link&amp;utm_medium=social&amp;utm_campaign=share">article</a> - The GPT Era Is Already Ending</li></ul><p><strong><br>On Data Labelers</strong></p><ul><li>60 Minutes <a href="https://www.cbsnews.com/news/labelers-training-ai-say-theyre-overworked-underpaid-and-exploited-60-minutes-transcript/">article + video</a> - Labelers training AI say they're overworked, underpaid and exploited by big American tech companies</li><li>Reflections <a href="https://4sonline.org/news_manager.php?page=36940">article</a> - The hidden health dangers of data labeling in AI development</li><li>Privacy International <a href="https://privacyinternational.org/explainer/5357/humans-ai-loop-data-labelers-behind-some-most-powerful-llms-training-datasets">article</a> - Humans in the AI loop: the data labelers behind some of the most powerful LLMs' training datasets</li></ul><p><strong><br>Chain-of-Thought Papers Cited</strong></p><ul><li><a href="https://arxiv.org/abs/2307.13702">Paper</a> - Measuring Faithfulness in Chain-of-Thought Reasoning</li><li><a href="https://arxiv.org/abs/2305.04388">Paper</a> - Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting</li><li><a href="https://arxiv.org/abs/2406.10625">Paper</a> - On the Hardness of Faithful Chain-of-Thought Reasoning in Large Language Models</li><li><a href="https://arxiv.org/abs/2402.04614">Paper</a> - Faithfulness vs. 
Plausibility: On the (Un)Reliability of Explanations from Large Language Models</li></ul><p><strong><br>Other Mentioned/Relevant Sources</strong></p><ul><li>Andy Jones <a href="https://andrewcharlesjones.github.io/journal/rao-blackwellization.html">blogpost</a> - Rao-Blackwellization</li><li><a href="https://arxiv.org/abs/2407.07890">Paper</a> - Training on the Test Task Confounds Evaluation and Emergence</li><li><a href="https://arxiv.org/abs/2412.03556">Paper</a> - Best-of-N Jailbreaking</li><li>Research <a href="https://www.swebench.com/">landing page</a> - SWE Bench</li><li><a href="https://www.kaggle.com/competitions/konwinski-prize">Code Competition</a> - Konwinski Prize</li><li>Lakera <a href="https://gandalf.lakera.ai/do-not-tell">game</a> - Gandalf</li><li>Kate Crawford's <a href="https://yalebooks.yale.edu/book/9780300264630/atlas-of-ai/">Atlas of AI</a></li><li>BlueDot Impact's <a href="https://course.aisafetyfundamentals.com/home/intro-to-tai">course</a> - Intro to Transformative AI</li></ul><p><strong><br>Unrelated Developments</strong></p><ul><li>Cruz's <a href="https://www.commerce.senate.gov/services/files/55267EFF-11A8-4BD6-BE1E-61452A3C48E3">letter</a> to Merrick Garland</li><li>AWS News Blog <a href="https://aws.amazon.com/blogs/aws/introducing-amazon-nova-frontier-intelligence-and-industry-leading-price-performance/">article</a> - Introducing Amazon Nova foundation models: Frontier intelligence and industry leading price performance</li><li>BleepingComputer <a href="https://www.bleepingcomputer.com/news/security/ultralytics-ai-model-hijacked-to-infect-thousands-with-cryptominer/">article</a> - Ultralytics AI model hijacked to infect thousands with cryptominer</li><li>The Register <a href="https://www.theregister.com/2024/12/07/microsoft_copilot_vision/">article</a> - Microsoft teases Copilot Vision, the AI sidekick that judges your tabs</li><li>Fox Business <a href="https://www.foxbusiness.com/technology/openai-ceo-sam-altman-looking-forward-working-trump-admin-says-us-must-build-best-ai-infrastructure">article</a> - OpenAI CEO Sam Altman looking forward to working with Trump admin, says US must build best AI infrastructure</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, OpenAI, o1</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:person role="Host" href="https://krawczuk.eu" img="https://img.transistorcdn.com/28Y0jOuxZfj9Mxy7Ap0s5-4FRY6iBNdW0gmiQyl5SAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82M2I1/YzVjMWFmNTlhYzA2/YTgyZGI0YTM3MWY5/Zjc5Mi5qcGVn.jpg">Igor Krawczuk</podcast:person>
      <podcast:person role="Host" href="https://jacob-haimes.github.io" img="https://img.transistorcdn.com/aB1ho3T9pmqLJiyQvdThtfo_OIB680463w__i8MoaRA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMDJl/MzFhZDZkYzc4Yjc0/NTQwNzA4OGRlMGU3/NzU3Ni5wbmc.jpg">Jacob Haimes</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/c2ec3a0e/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>How to Safely Handle Your AGI</title>
      <itunes:episode>8</itunes:episode>
      <podcast:episode>8</podcast:episode>
      <itunes:title>How to Safely Handle Your AGI</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7a0d9621-5ef8-49ce-ad16-8dff93ddb4d0</guid>
      <link>https://kairos.fm/muckraikers/e008/</link>
      <description>
        <![CDATA[<p>While on the campaign trail, Trump made claims about repealing Biden's Executive Order on AI, but what will actually change when he gets into office? We take this opportunity to examine policies being discussed or implemented by leading governments around the world.</p><p><br></p><ul><li>(00:00) - Intro</li>
<li>(00:29) - Hot off the press</li>
<li>(02:59) - Repealing the AI executive order?</li>
<li>(11:16) - "Manhattan" for AI</li>
<li>(24:33) - EU</li>
<li>(30:47) - UK</li>
<li>(39:27) - Bengio</li>
<li>(44:39) - Comparing EU/UK to USA</li>
<li>(45:23) - China</li>
<li>(51:12) - Taxes</li>
<li>(55:29) - The muck</li>
</ul><br><strong><br>Links</strong><ul><li>SFChronicle <a href="https://www.sfchronicle.com/business/article/us-gathers-allies-to-talk-ai-safety-trump-s-vow-19932278.php">article</a> - US gathers allies to talk AI safety as Trump's vow to undo Biden's AI policy overshadows their work</li><li>Trump's <a href="https://trumpwhitehouse.archives.gov/presidential-actions/executive-order-promoting-use-trustworthy-artificial-intelligence-federal-government/">Executive Order</a> on AI (the AI governance executive order at home)</li><li>Biden's <a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/">Executive Order</a> on AI</li><li>Congressional <a href="https://www.uscc.gov/sites/default/files/2024-11/2024_Executive_Summary.pdf">report brief</a> which advises a "Manhattan Project for AI"</li></ul><p><strong>Non-USA</strong></p><ul><li>CAIRNE <a href="https://cairne.eu/cern-for-ai/">resource collection</a> on CERN for AI</li><li>UK Frontier AI Taskforce <a href="https://www.gov.uk/government/publications/frontier-ai-taskforce-first-progress-report/frontier-ai-taskforce-first-progress-report">report</a> (2023)</li><li>International <a href="https://www.gov.uk/government/publications/international-scientific-report-on-the-safety-of-advanced-ai/international-scientific-report-on-the-safety-of-advanced-ai-interim-report">interim report</a> (2024)</li><li>Bengio's <a href="https://www.journalofdemocracy.org/articles/ai-and-catastrophic-risk/">paper</a> - AI and Catastrophic Risk</li><li>Davidad's Safeguarded AI <a href="https://www.aria.org.uk/programme-safeguarded-ai/">program</a> at ARIA</li><li>MIT Technology Review <a href="https://www.technologyreview.com/2024/01/17/1086704/china-ai-regulation-changes-2024/">article</a> - Four things to know about China’s new AI rules in 2024</li><li>GovInsider <a href="https://govinsider.asia/intl-en/article/australias-national-policy-for-ethical-use-of-ai-starts-to-take-shape">article</a> - Australia’s national policy for ethical use of AI starts to take shape</li><li>Future of Privacy forum <a href="https://fpf.org/blog/global/the-african-unions-continental-ai-strategy-data-protection-and-governance-laws-set-to-play-a-key-role-in-ai-regulation/">article</a> - The African Union’s Continental AI Strategy: Data Protection and Governance Laws Set to Play a Key Role in AI Regulation</li></ul><p><strong>Taxes</strong></p><ul><li>Macroeconomic Dynamics <a href="https://www.cambridge.org/core/journals/macroeconomic-dynamics/article/abs/automation-stagnation-and-the-implications-of-a-robot-tax/3D796A6890203B0C268EE4D6DF18A39B">paper</a> - Automation, Stagnation, and the Implications of a Robot Tax</li><li>CESifo <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4811796">paper</a> - AI, Automation, and Taxation</li><li>GavTax <a href="https://gavtax.com/taxation-of-artificial-intelligence-and-automation/">article</a> - Taxation of Artificial Intelligence and Automation</li></ul><p><strong>Perplexity Pages</strong></p><ul><li>CERN for AI <a href="https://www.perplexity.ai/page/europe-s-eur100b-cern-for-ai-p-xpSRjjyrRMiZJ.GdP1.PEA">page</a></li><li>China's AI policy <a href="https://www.perplexity.ai/search/what-is-chinese-ai-policy-regu-gf11zQ_vTNGoj80n5evyow">page</a></li><li>Singapore's AI policy <a href="https://www.perplexity.ai/search/what-is-singapores-ai-governan-ljJgnM38STeDEZwrKzY0Kg">page</a></li><li>AI policy in Africa, India, 
Australia <a href="https://perplexity.ai/search/what-are-the-ai-governancy-pol-XCD7tNKKSWmnswxW6iV0Zg">page</a></li></ul><p><strong>Other Sources</strong></p><ul><li>Artificial Intelligence Made Simple <a href="https://artificialintelligencemadesimple.substack.com/p/nyts-ai-outperforms-doctors-story">article</a> - NYT's "AI Outperforms Doctors" Story Is Wrong</li><li>Intel <a href="https://download.intel.com/newsroom/2024/client-computing/ai-pc-productivity-112024-report.pdf">report</a> - Reclaim Your Day: The Impact of AI PCs on Productivity</li><li>Heise Online <a href="https://www.heise.de/news/Anwender-an-KI-PCs-langsamer-Intel-sieht-Problem-in-unaufgeklaerten-Nutzern-10108194.html">article</a> - Users on AI PCs slower, Intel sees problem in unenlightened users</li><li>The Hacker News <a href="https://thehackernews.com/2024/11/north-korean-hackers-steal-10m-with-ai.html">article</a> - North Korean Hackers Steal $10M with AI-Driven Scams and Malware on LinkedIn</li><li>Futurism <a href="https://futurism.com/character-ai-pedophile-chatbots">article</a> - Character.AI Is Hosting Pedophile Chatbots That Groom Users Who Say They're Underage</li><li>Vice <a href="https://www.vice.com/en/article/ai-jesus-is-now-taking-confessions-at-a-church-in-switzerland/">article</a> - 'AI Jesus' Is Now Taking Confessions at a Church in Switzerland</li><li>Politico <a href="https://www.politico.com/news/2023/06/15/ai-ted-cruz-congress-00102116">article</a> - Ted Cruz: Congress 'doesn't know what the hell it's doing' with AI regulation</li><li>US Senate Committee on Commerce, Science, and Transportation <a href="https://www.commerce.senate.gov/2024/9/sen-cruz-sounds-alarm-over-industry-role-in-ai-czar-harris-s-censorship-agenda">press release</a> - Sen. Cruz Sounds Alarm Over Industry Role in AI Czar Harris’s Censorship Agenda</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>While on the campaign trail, Trump made claims about repealing Biden's Executive Order on AI, but what will actually change when he gets into office? We take this opportunity to examine policies being discussed or implemented by leading governments around the world.</p><p><br></p><ul><li>(00:00) - Intro</li>
<li>(00:29) - Hot off the press</li>
<li>(02:59) - Repealing the AI executive order?</li>
<li>(11:16) - "Manhattan" for AI</li>
<li>(24:33) - EU</li>
<li>(30:47) - UK</li>
<li>(39:27) - Bengio</li>
<li>(44:39) - Comparing EU/UK to USA</li>
<li>(45:23) - China</li>
<li>(51:12) - Taxes</li>
<li>(55:29) - The muck</li>
</ul><br><strong><br>Links</strong><ul><li>SFChronicle <a href="https://www.sfchronicle.com/business/article/us-gathers-allies-to-talk-ai-safety-trump-s-vow-19932278.php">article</a> - US gathers allies to talk AI safety as Trump's vow to undo Biden's AI policy overshadows their work</li><li>Trump's <a href="https://trumpwhitehouse.archives.gov/presidential-actions/executive-order-promoting-use-trustworthy-artificial-intelligence-federal-government/">Executive Order</a> on AI (the AI governance executive order at home)</li><li>Biden's <a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/">Executive Order</a> on AI</li><li>Congressional <a href="https://www.uscc.gov/sites/default/files/2024-11/2024_Executive_Summary.pdf">report brief</a> which advises a "Manhattan Project for AI"</li></ul><p><strong>Non-USA</strong></p><ul><li>CAIRNE <a href="https://cairne.eu/cern-for-ai/">resource collection</a> on CERN for AI</li><li>UK Frontier AI Taskforce <a href="https://www.gov.uk/government/publications/frontier-ai-taskforce-first-progress-report/frontier-ai-taskforce-first-progress-report">report</a> (2023)</li><li>International <a href="https://www.gov.uk/government/publications/international-scientific-report-on-the-safety-of-advanced-ai/international-scientific-report-on-the-safety-of-advanced-ai-interim-report">interim report</a> (2024)</li><li>Bengio's <a href="https://www.journalofdemocracy.org/articles/ai-and-catastrophic-risk/">paper</a> - AI and Catastrophic Risk</li><li>Davidad's Safeguarded AI <a href="https://www.aria.org.uk/programme-safeguarded-ai/">program</a> at ARIA</li><li>MIT Technology Review <a href="https://www.technologyreview.com/2024/01/17/1086704/china-ai-regulation-changes-2024/">article</a> - Four things to know about China’s new AI rules in 2024</li><li>GovInsider <a href="https://govinsider.asia/intl-en/article/australias-national-policy-for-ethical-use-of-ai-starts-to-take-shape">article</a> - Australia’s national policy for ethical use of AI starts to take shape</li><li>Future of Privacy forum <a href="https://fpf.org/blog/global/the-african-unions-continental-ai-strategy-data-protection-and-governance-laws-set-to-play-a-key-role-in-ai-regulation/">article</a> - The African Union’s Continental AI Strategy: Data Protection and Governance Laws Set to Play a Key Role in AI Regulation</li></ul><p><strong>Taxes</strong></p><ul><li>Macroeconomic Dynamics <a href="https://www.cambridge.org/core/journals/macroeconomic-dynamics/article/abs/automation-stagnation-and-the-implications-of-a-robot-tax/3D796A6890203B0C268EE4D6DF18A39B">paper</a> - Automation, Stagnation, and the Implications of a Robot Tax</li><li>CESifo <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4811796">paper</a> - AI, Automation, and Taxation</li><li>GavTax <a href="https://gavtax.com/taxation-of-artificial-intelligence-and-automation/">article</a> - Taxation of Artificial Intelligence and Automation</li></ul><p><strong>Perplexity Pages</strong></p><ul><li>CERN for AI <a href="https://www.perplexity.ai/page/europe-s-eur100b-cern-for-ai-p-xpSRjjyrRMiZJ.GdP1.PEA">page</a></li><li>China's AI policy <a href="https://www.perplexity.ai/search/what-is-chinese-ai-policy-regu-gf11zQ_vTNGoj80n5evyow">page</a></li><li>Singapore's AI policy <a href="https://www.perplexity.ai/search/what-is-singapores-ai-governan-ljJgnM38STeDEZwrKzY0Kg">page</a></li><li>AI policy in Africa, India, 
Australia <a href="https://perplexity.ai/search/what-are-the-ai-governancy-pol-XCD7tNKKSWmnswxW6iV0Zg">page</a></li></ul><p><strong>Other Sources</strong></p><ul><li>Artificial Intelligence Made Simple <a href="https://artificialintelligencemadesimple.substack.com/p/nyts-ai-outperforms-doctors-story">article</a> - NYT's "AI Outperforms Doctors" Story Is Wrong</li><li>Intel <a href="https://download.intel.com/newsroom/2024/client-computing/ai-pc-productivity-112024-report.pdf">report</a> - Reclaim Your Day: The Impact of AI PCs on Productivity</li><li>Heise Online <a href="https://www.heise.de/news/Anwender-an-KI-PCs-langsamer-Intel-sieht-Problem-in-unaufgeklaerten-Nutzern-10108194.html">article</a> - Users on AI PCs slower, Intel sees problem in unenlightened users</li><li>The Hacker News <a href="https://thehackernews.com/2024/11/north-korean-hackers-steal-10m-with-ai.html">article</a> - North Korean Hackers Steal $10M with AI-Driven Scams and Malware on LinkedIn</li><li>Futurism <a href="https://futurism.com/character-ai-pedophile-chatbots">article</a> - Character.AI Is Hosting Pedophile Chatbots That Groom Users Who Say They're Underage</li><li>Vice <a href="https://www.vice.com/en/article/ai-jesus-is-now-taking-confessions-at-a-church-in-switzerland/">article</a> - 'AI Jesus' Is Now Taking Confessions at a Church in Switzerland</li><li>Politico <a href="https://www.politico.com/news/2023/06/15/ai-ted-cruz-congress-00102116">article</a> - Ted Cruz: Congress 'doesn't know what the hell it's doing' with AI regulation</li><li>US Senate Committee on Commerce, Science, and Transportation <a href="https://www.commerce.senate.gov/2024/9/sen-cruz-sounds-alarm-over-industry-role-in-ai-czar-harris-s-censorship-agenda">press release</a> - Sen. Cruz Sounds Alarm Over Industry Role in AI Czar Harris’s Censorship Agenda</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 02 Dec 2024 09:00:00 -0700</pubDate>
      <author>Jacob Haimes and Igor Krawczuk</author>
      <enclosure url="https://op3.dev/e/media.transistor.fm/3c15afa1/5697d740.mp3" length="42761213" type="audio/mpeg"/>
      <itunes:author>Jacob Haimes and Igor Krawczuk</itunes:author>
      <itunes:duration>3482</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>While on the campaign trail, Trump made claims about repealing Biden's Executive Order on AI, but what will actually change when he gets into office? We take this opportunity to examine policies being discussed or implemented by leading governments around the world.</p><p><br></p><ul><li>(00:00) - Intro</li>
<li>(00:29) - Hot off the press</li>
<li>(02:59) - Repealing the AI executive order?</li>
<li>(11:16) - "Manhattan" for AI</li>
<li>(24:33) - EU</li>
<li>(30:47) - UK</li>
<li>(39:27) - Bengio</li>
<li>(44:39) - Comparing EU/UK to USA</li>
<li>(45:23) - China</li>
<li>(51:12) - Taxes</li>
<li>(55:29) - The muck</li>
</ul><br><strong><br>Links</strong><ul><li>SFChronicle <a href="https://www.sfchronicle.com/business/article/us-gathers-allies-to-talk-ai-safety-trump-s-vow-19932278.php">article</a> - US gathers allies to talk AI safety as Trump's vow to undo Biden's AI policy overshadows their work</li><li>Trump's <a href="https://trumpwhitehouse.archives.gov/presidential-actions/executive-order-promoting-use-trustworthy-artificial-intelligence-federal-government/">Executive Order</a> on AI (the AI governance executive order at home)</li><li>Biden's <a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/">Executive Order</a> on AI</li><li>Congressional <a href="https://www.uscc.gov/sites/default/files/2024-11/2024_Executive_Summary.pdf">report brief</a> which advises a "Manhattan Project for AI"</li></ul><p><strong>Non-USA</strong></p><ul><li>CAIRNE <a href="https://cairne.eu/cern-for-ai/">resource collection</a> on CERN for AI</li><li>UK Frontier AI Taskforce <a href="https://www.gov.uk/government/publications/frontier-ai-taskforce-first-progress-report/frontier-ai-taskforce-first-progress-report">report</a> (2023)</li><li>International <a href="https://www.gov.uk/government/publications/international-scientific-report-on-the-safety-of-advanced-ai/international-scientific-report-on-the-safety-of-advanced-ai-interim-report">interim report</a> (2024)</li><li>Bengio's <a href="https://www.journalofdemocracy.org/articles/ai-and-catastrophic-risk/">paper</a> - AI and Catastrophic Risk</li><li>Davidad's Safeguarded AI <a href="https://www.aria.org.uk/programme-safeguarded-ai/">program</a> at ARIA</li><li>MIT Technology Review <a href="https://www.technologyreview.com/2024/01/17/1086704/china-ai-regulation-changes-2024/">article</a> - Four things to know about China’s new AI rules in 2024</li><li>GovInsider <a href="https://govinsider.asia/intl-en/article/australias-national-policy-for-ethical-use-of-ai-starts-to-take-shape">article</a> - Australia’s national policy for ethical use of AI starts to take shape</li><li>Future of Privacy forum <a href="https://fpf.org/blog/global/the-african-unions-continental-ai-strategy-data-protection-and-governance-laws-set-to-play-a-key-role-in-ai-regulation/">article</a> - The African Union’s Continental AI Strategy: Data Protection and Governance Laws Set to Play a Key Role in AI Regulation</li></ul><p><strong>Taxes</strong></p><ul><li>Macroeconomic Dynamics <a href="https://www.cambridge.org/core/journals/macroeconomic-dynamics/article/abs/automation-stagnation-and-the-implications-of-a-robot-tax/3D796A6890203B0C268EE4D6DF18A39B">paper</a> - Automation, Stagnation, and the Implications of a Robot Tax</li><li>CESifo <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4811796">paper</a> - AI, Automation, and Taxation</li><li>GavTax <a href="https://gavtax.com/taxation-of-artificial-intelligence-and-automation/">article</a> - Taxation of Artificial Intelligence and Automation</li></ul><p><strong>Perplexity Pages</strong></p><ul><li>CERN for AI <a href="https://www.perplexity.ai/page/europe-s-eur100b-cern-for-ai-p-xpSRjjyrRMiZJ.GdP1.PEA">page</a></li><li>China's AI policy <a href="https://www.perplexity.ai/search/what-is-chinese-ai-policy-regu-gf11zQ_vTNGoj80n5evyow">page</a></li><li>Singapore's AI policy <a href="https://www.perplexity.ai/search/what-is-singapores-ai-governan-ljJgnM38STeDEZwrKzY0Kg">page</a></li><li>AI policy in Africa, India, 
Australia <a href="https://perplexity.ai/search/what-are-the-ai-governancy-pol-XCD7tNKKSWmnswxW6iV0Zg">page</a></li></ul><p><strong>Other Sources</strong></p><ul><li>Artificial Intelligence Made Simple <a href="https://artificialintelligencemadesimple.substack.com/p/nyts-ai-outperforms-doctors-story">article</a> - NYT's "AI Outperforms Doctors" Story Is Wrong</li><li>Intel <a href="https://download.intel.com/newsroom/2024/client-computing/ai-pc-productivity-112024-report.pdf">report</a> - Reclaim Your Day: The Impact of AI PCs on Productivity</li><li>Heise Online <a href="https://www.heise.de/news/Anwender-an-KI-PCs-langsamer-Intel-sieht-Problem-in-unaufgeklaerten-Nutzern-10108194.html">article</a> - Users on AI PCs slower, Intel sees problem in unenlightened users</li><li>The Hacker News <a href="https://thehackernews.com/2024/11/north-korean-hackers-steal-10m-with-ai.html">article</a> - North Korean Hackers Steal $10M with AI-Driven Scams and Malware on LinkedIn</li><li>Futurism <a href="https://futurism.com/character-ai-pedophile-chatbots">article</a> - Character.AI Is Hosting Pedophile Chatbots That Groom Users Who Say They're Underage</li><li>Vice <a href="https://www.vice.com/en/article/ai-jesus-is-now-taking-confessions-at-a-church-in-switzerland/">article</a> - 'AI Jesus' Is Now Taking Confessions at a Church in Switzerland</li><li>Politico <a href="https://www.politico.com/news/2023/06/15/ai-ted-cruz-congress-00102116">article</a> - Ted Cruz: Congress 'doesn't know what the hell it's doing' with AI regulation</li><li>US Senate Committee on Commerce, Science, and Transportation <a href="https://www.commerce.senate.gov/2024/9/sen-cruz-sounds-alarm-over-industry-role-in-ai-czar-harris-s-censorship-agenda">press release</a> - Sen. Cruz Sounds Alarm Over Industry Role in AI Czar Harris’s Censorship Agenda</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, AI governance, USA, China, Executive Order</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:person role="Host" href="https://krawczuk.eu" img="https://img.transistorcdn.com/28Y0jOuxZfj9Mxy7Ap0s5-4FRY6iBNdW0gmiQyl5SAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82M2I1/YzVjMWFmNTlhYzA2/YTgyZGI0YTM3MWY5/Zjc5Mi5qcGVn.jpg">Igor Krawczuk</podcast:person>
      <podcast:person role="Host" href="https://jacob-haimes.github.io" img="https://img.transistorcdn.com/aB1ho3T9pmqLJiyQvdThtfo_OIB680463w__i8MoaRA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMDJl/MzFhZDZkYzc4Yjc0/NTQwNzA4OGRlMGU3/NzU3Ni5wbmc.jpg">Jacob Haimes</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/3c15afa1/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>The End of Scaling?</title>
      <itunes:episode>7</itunes:episode>
      <podcast:episode>7</podcast:episode>
      <itunes:title>The End of Scaling?</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b62fabb0-e8b0-4ceb-9d21-3ce8b83f0360</guid>
      <link>https://kairos.fm/muckraikers/e007/</link>
      <description>
        <![CDATA[<p>Multiple news outlets, including The Information, Bloomberg, and Reuters [see sources] are reporting an "end of scaling" for the current AI paradigm. In this episode we look into these articles, as well as a wide variety of economic forecasting, empirical analysis, and technical papers to understand the validity and impact of these reports. We also use this as an opportunity to contextualize the realized versus promised fruits of "AI".</p><p><br></p><ul><li>(00:23) - Hot off the press</li>
<li>(01:49) - The end of scaling</li>
<li>(10:50) - "Useful tools" and "agentic" "AI"</li>
<li>(17:19) - The end of quantization</li>
<li>(25:18) - Hedging</li>
<li>(29:41) - The end of upwards mobility</li>
<li>(33:12) - How to grow an economy</li>
<li>(38:14) - Transformative &amp; disruptive tech</li>
<li>(49:19) - Finding the meaning</li>
<li>(56:14) - Bursting AI bubble and Trump</li>
<li>(01:00:58) - The muck</li>
</ul><br><strong><br>Links</strong><ul><li>The Information <a href="https://www.theinformation.com/articles/openai-shifts-strategy-as-rate-of-gpt-ai-improvements-slows?rc=tgppn6">article</a> - OpenAI Shifts Strategy as Rate of ‘GPT’ AI Improvements Slows</li><li>Bloomberg [article] - OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI</li><li>Reuters <a href="https://www.reuters.com/technology/artificial-intelligence/openai-rivals-seek-new-path-smarter-ai-current-methods-hit-limitations-2024-11-11/">article</a> - OpenAI and others seek new path to smarter AI as current methods hit limitations</li><li>Paper on the end of quantization - <a href="https://arxiv.org/abs/2411.04330">Scaling Laws for Precision</a></li><li>Tim Dettmers <a href="https://x.com/Tim_Dettmers/status/1856338240099221674">Tweet</a> on "Scaling Laws for Precision"</li></ul><p><strong>Empirical Analysis</strong></p><ul><li>WU Vienna <a href="https://research.wu.ac.at/en/publications/unslicing-the-pie-ai-innovation-and-the-labor-share-in-european-r">paper</a> - Unslicing the pie: AI innovation and the labor share in European regions</li><li>IMF <a href="https://www.imf.org/en/Publications/WP/Issues/2024/09/13/The-Labor-Market-Impact-of-Artificial-Intelligence-Evidence-from-US-Regions-554845">paper</a> - The Labor Market Impact of Artificial Intelligence: Evidence from US Regions</li><li>NBER <a href="https://www.nber.org/papers/w32655">paper</a> - Automation, Career Values, and Political Preferences</li><li>Pew Research Center <a href="https://www.pewresearch.org/social-trends/2023/07/26/which-u-s-workers-are-more-exposed-to-ai-on-their-jobs/">report</a> - Which U.S. Workers Are More Exposed to AI on Their Jobs?</li></ul><p><strong>Forecasting</strong></p><ul><li>NBER/Acemoglu <a href="https://www.nber.org/papers/w32487">paper</a> - The Simple Macroeconomics of AI</li><li>NBER/Acemoglu <a href="https://www.nber.org/papers/w29247">paper</a> - Harms of AI</li><li>IMF <a href="https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2024/01/14/Gen-AI-Artificial-Intelligence-and-the-Future-of-Work-542379?cid=bl-com-SDNEA2024001">report</a> - Gen-AI: Artificial Intelligence and the Future of Work</li><li><a href="https://arxiv.org/abs/2306.02519">Submission</a> to Open Philanthropy AI Worldviews Contest - Transformative AGI by 2043 is &lt;1% likely</li></ul><p><strong>Externalities and the Bursting Bubble</strong></p><ul><li>NBER <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=226909">paper</a> - Bubbles, Rational Expectations and Financial Markets</li><li>Clayton Christensen <a href="https://www.youtube.com/watch?v=rpkoCZ4vBSI">lecture capture</a> - Clayton Christensen: Disruptive innovation</li><li>The New Republic <a href="https://newrepublic.com/article/187203/ai-radiology-geoffrey-hinton-nobel-prediction">article</a> - The “Godfather of AI” Predicted I Wouldn’t Have a Job. 
He Was Wrong.</li><li>Latent Space <a href="https://www.latent.space/p/gpu-bubble">article</a> - $2 H100s: How the GPU Rental Bubble Burst</li></ul><p><strong>On Productization</strong></p><ul><li>Palantir <a href="https://investors.palantir.com/news-details/2024/Anthropic-and-Palantir-Partner-to-Bring-Claude-AI-Models-to-AWS-for-U.S.-Government-Intelligence-and-Defense-Operations/">press release</a> on introduction of Claude to US security and defense</li><li>Ars Technica <a href="https://arstechnica.com/ai/2024/11/safe-ai-champ-anthropic-teams-up-with-defense-giant-palantir-in-new-deal/">article</a> - Claude AI to process secret government data through new Palantir deal</li><li>OpenAI <a href="https://openai.com/index/conde-nast/">press release</a> on partnering with Condé Nast</li><li>Candid Technology <a href="https://candid.technology/shutterstock-getty-images-partner-openai-bria/">article</a> - Shutterstock and Getty partner with OpenAI and BRIA</li><li><a href="https://e2b.dev/">E2B</a></li><li><a href="https://docs.stripe.com/agents">Stripe agents</a></li><li><a href="https://robopair.org/">Robopair</a></li></ul><p><strong>Other Sources</strong></p><ul><li>CBS News <a href="https://www.cbsnews.com/news/google-ai-chatbot-threatening-message-human-please-die/">article</a> - Google AI chatbot responds with a threatening message: "Human … Please die."</li><li>Biometric Update <a href="https://www.biometricupdate.com/202406/travelers-to-eu-may-be-subjected-to-ai-lie-detector">article</a> - Travelers to EU may be subjected to AI lie detector</li><li>Techcrunch <a href="https://techcrunch.com/2024/11/15/openais-tumultuous-early-years-revealed-in-emails-from-musk-altman-and-others/">article</a> - OpenAI’s tumultuous early years revealed in emails from Musk, Altman, and others</li><li>Richard Ngo <a href="https://x.com/RichardMCNgo/status/1856843040427839804">Tweet</a> on leaving OpenAI</li></ul><p><br></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Multiple news outlets, including The Information, Bloomberg, and Reuters [see sources] are reporting an "end of scaling" for the current AI paradigm. In this episode we look into these articles, as well as a wide variety of economic forecasting, empirical analysis, and technical papers to understand the validity and impact of these reports. We also use this as an opportunity to contextualize the realized versus promised fruits of "AI".</p><p><br></p><ul><li>(00:23) - Hot off the press</li>
<li>(01:49) - The end of scaling</li>
<li>(10:50) - "Useful tools" and "agentic" "AI"</li>
<li>(17:19) - The end of quantization</li>
<li>(25:18) - Hedging</li>
<li>(29:41) - The end of upwards mobility</li>
<li>(33:12) - How to grow an economy</li>
<li>(38:14) - Transformative &amp; disruptive tech</li>
<li>(49:19) - Finding the meaning</li>
<li>(56:14) - Bursting AI bubble and Trump</li>
<li>(01:00:58) - The muck</li>
</ul><br><strong><br>Links</strong><ul><li>The Information <a href="https://www.theinformation.com/articles/openai-shifts-strategy-as-rate-of-gpt-ai-improvements-slows?rc=tgppn6">article</a> - OpenAI Shifts Strategy as Rate of ‘GPT’ AI Improvements Slows</li><li>Bloomberg [article] - OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI</li><li>Reuters <a href="https://www.reuters.com/technology/artificial-intelligence/openai-rivals-seek-new-path-smarter-ai-current-methods-hit-limitations-2024-11-11/">article</a> - OpenAI and others seek new path to smarter AI as current methods hit limitations</li><li>Paper on the end of quantization - <a href="https://arxiv.org/abs/2411.04330">Scaling Laws for Precision</a></li><li>Tim Dettmers <a href="https://x.com/Tim_Dettmers/status/1856338240099221674">Tweet</a> on "Scaling Laws for Precision"</li></ul><p><strong>Empirical Analysis</strong></p><ul><li>WU Vienna <a href="https://research.wu.ac.at/en/publications/unslicing-the-pie-ai-innovation-and-the-labor-share-in-european-r">paper</a> - Unslicing the pie: AI innovation and the labor share in European regions</li><li>IMF <a href="https://www.imf.org/en/Publications/WP/Issues/2024/09/13/The-Labor-Market-Impact-of-Artificial-Intelligence-Evidence-from-US-Regions-554845">paper</a> - The Labor Market Impact of Artificial Intelligence: Evidence from US Regions</li><li>NBER <a href="https://www.nber.org/papers/w32655">paper</a> - Automation, Career Values, and Political Preferences</li><li>Pew Research Center <a href="https://www.pewresearch.org/social-trends/2023/07/26/which-u-s-workers-are-more-exposed-to-ai-on-their-jobs/">report</a> - Which U.S. Workers Are More Exposed to AI on Their Jobs?</li></ul><p><strong>Forecasting</strong></p><ul><li>NBER/Acemoglu <a href="https://www.nber.org/papers/w32487">paper</a> - The Simple Macroeconomics of AI</li><li>NBER/Acemoglu <a href="https://www.nber.org/papers/w29247">paper</a> - Harms of AI</li><li>IMF <a href="https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2024/01/14/Gen-AI-Artificial-Intelligence-and-the-Future-of-Work-542379?cid=bl-com-SDNEA2024001">report</a> - Gen-AI: Artificial Intelligence and the Future of Work</li><li><a href="https://arxiv.org/abs/2306.02519">Submission</a> to Open Philanthropy AI Worldviews Contest - Transformative AGI by 2043 is &lt;1% likely</li></ul><p><strong>Externalities and the Bursting Bubble</strong></p><ul><li>NBER <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=226909">paper</a> - Bubbles, Rational Expectations and Financial Markets</li><li>Clayton Christensen <a href="https://www.youtube.com/watch?v=rpkoCZ4vBSI">lecture capture</a> - Clayton Christensen: Disruptive innovation</li><li>The New Republic <a href="https://newrepublic.com/article/187203/ai-radiology-geoffrey-hinton-nobel-prediction">article</a> - The “Godfather of AI” Predicted I Wouldn’t Have a Job. 
He Was Wrong.</li><li>Latent Space <a href="https://www.latent.space/p/gpu-bubble">article</a> - $2 H100s: How the GPU Rental Bubble Burst</li></ul><p><strong>On Productization</strong></p><ul><li>Palantir <a href="https://investors.palantir.com/news-details/2024/Anthropic-and-Palantir-Partner-to-Bring-Claude-AI-Models-to-AWS-for-U.S.-Government-Intelligence-and-Defense-Operations/">press release</a> on introduction of Claude to US security and defense</li><li>Ars Technica <a href="https://arstechnica.com/ai/2024/11/safe-ai-champ-anthropic-teams-up-with-defense-giant-palantir-in-new-deal/">article</a> - Claude AI to process secret government data through new Palantir deal</li><li>OpenAI <a href="https://openai.com/index/conde-nast/">press release</a> on partnering with Condé Nast</li><li>Candid Technology <a href="https://candid.technology/shutterstock-getty-images-partner-openai-bria/">article</a> - Shutterstock and Getty partner with OpenAI and BRIA</li><li><a href="https://e2b.dev/">E2B</a></li><li><a href="https://docs.stripe.com/agents">Stripe agents</a></li><li><a href="https://robopair.org/">Robopair</a></li></ul><p><strong>Other Sources</strong></p><ul><li>CBS News <a href="https://www.cbsnews.com/news/google-ai-chatbot-threatening-message-human-please-die/">article</a> - Google AI chatbot responds with a threatening message: "Human … Please die."</li><li>Biometric Update <a href="https://www.biometricupdate.com/202406/travelers-to-eu-may-be-subjected-to-ai-lie-detector">article</a> - Travelers to EU may be subjected to AI lie detector</li><li>Techcrunch <a href="https://techcrunch.com/2024/11/15/openais-tumultuous-early-years-revealed-in-emails-from-musk-altman-and-others/">article</a> - OpenAI’s tumultuous early years revealed in emails from Musk, Altman, and others</li><li>Richard Ngo <a href="https://x.com/RichardMCNgo/status/1856843040427839804">Tweet</a> on leaving OpenAI</li></ul><p><br></p>]]>
      </content:encoded>
      <pubDate>Tue, 19 Nov 2024 09:00:00 -0700</pubDate>
      <author>Jacob Haimes and Igor Krawczuk</author>
      <enclosure url="https://op3.dev/e/media.transistor.fm/48619337/9232c26b.mp3" length="50159688" type="audio/mpeg"/>
      <itunes:author>Jacob Haimes and Igor Krawczuk</itunes:author>
      <itunes:duration>4020</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Multiple news outlets, including The Information, Bloomberg, and Reuters [see sources] are reporting an "end of scaling" for the current AI paradigm. In this episode, we look into these articles, as well as a wide variety of economic forecasting, empirical analysis, and technical papers, to understand the validity and impact of these reports. We also use this as an opportunity to contextualize the realized versus promised fruits of "AI".</p><p><br></p><ul><li>(00:23) - Hot off the press</li>
<li>(01:49) - The end of scaling</li>
<li>(10:50) - "Useful tools" and "agentic" "AI"</li>
<li>(17:19) - The end of quantization</li>
<li>(25:18) - Hedging</li>
<li>(29:41) - The end of upwards mobility</li>
<li>(33:12) - How to grow an economy</li>
<li>(38:14) - Transformative &amp; disruptive tech</li>
<li>(49:19) - Finding the meaning</li>
<li>(56:14) - Bursting AI bubble and Trump</li>
<li>(01:00:58) - The muck</li>
</ul><em><p></p></em><strong><br>Links</strong><ul><li>The Information <a href="https://www.theinformation.com/articles/openai-shifts-strategy-as-rate-of-gpt-ai-improvements-slows?rc=tgppn6">article</a> - OpenAI Shifts Strategy as Rate of ‘GPT’ AI Improvements Slows</li><li>Bloomberg [article] - OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI</li><li>Reuters <a href="https://www.reuters.com/technology/artificial-intelligence/openai-rivals-seek-new-path-smarter-ai-current-methods-hit-limitations-2024-11-11/">article</a> - OpenAI and others seek new path to smarter AI as current methods hit limitations</li><li>Paper on the end of quantization - <a href="https://arxiv.org/abs/2411.04330">Scaling Laws for Precision</a></li><li>Tim Dettmers <a href="https://x.com/Tim_Dettmers/status/1856338240099221674">Tweet</a> on "Scaling Laws for Precision"</li></ul><p><strong>Empirical Analysis</strong></p><ul><li>WU Vienna <a href="https://research.wu.ac.at/en/publications/unslicing-the-pie-ai-innovation-and-the-labor-share-in-european-r">paper</a> - Unslicing the pie: AI innovation and the labor share in European regions</li><li>IMF <a href="https://www.imf.org/en/Publications/WP/Issues/2024/09/13/The-Labor-Market-Impact-of-Artificial-Intelligence-Evidence-from-US-Regions-554845">paper</a> - The Labor Market Impact of Artificial Intelligence: Evidence from US Regions</li><li>NBER <a href="https://www.nber.org/papers/w32655">paper</a> - Automation, Career Values, and Political Preferences</li><li>Pew Research Center <a href="https://www.pewresearch.org/social-trends/2023/07/26/which-u-s-workers-are-more-exposed-to-ai-on-their-jobs/">report</a> - Which U.S. Workers Are More Exposed to AI on Their Jobs?</li></ul><p><strong>Forecasting</strong></p><ul><li>NBER/Acemoglu <a href="https://www.nber.org/papers/w32487">paper</a> - The Simple Macroeconomics of AI</li><li>NBER/Acemoglu <a href="https://www.nber.org/papers/w29247">paper</a> - Harms of AI</li><li>IMF <a href="https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2024/01/14/Gen-AI-Artificial-Intelligence-and-the-Future-of-Work-542379?cid=bl-com-SDNEA2024001">report</a> - Gen-AI: Artificial Intelligence and the Future of Work</li><li><a href="https://arxiv.org/abs/2306.02519">Submission</a> to Open Philanthropy AI Worldviews Contest - Transformative AGI by 2043 is &lt;1% likely</li></ul><p><strong>Externalities and the Bursting Bubble</strong></p><ul><li>NBER <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=226909">paper</a> - Bubbles, Rational Expectations and Financial Markets</li><li>Clayton Christensen <a href="https://www.youtube.com/watch?v=rpkoCZ4vBSI">lecture capture</a> - Clayton Christensen: Disruptive innovation</li><li>The New Republic <a href="https://newrepublic.com/article/187203/ai-radiology-geoffrey-hinton-nobel-prediction">article</a> - The “Godfather of AI” Predicted I Wouldn’t Have a Job. 
He Was Wrong.</li><li>Latent Space <a href="https://www.latent.space/p/gpu-bubble">article</a> - $2 H100s: How the GPU Rental Bubble Burst</li></ul><p><strong>On Productization</strong></p><ul><li>Palantir <a href="https://investors.palantir.com/news-details/2024/Anthropic-and-Palantir-Partner-to-Bring-Claude-AI-Models-to-AWS-for-U.S.-Government-Intelligence-and-Defense-Operations/">press release</a> on introduction of Claude to US security and defense</li><li>Ars Technica <a href="https://arstechnica.com/ai/2024/11/safe-ai-champ-anthropic-teams-up-with-defense-giant-palantir-in-new-deal/">article</a> - Claude AI to process secret government data through new Palantir deal</li><li>OpenAI <a href="https://openai.com/index/conde-nast/">press release</a> on partnering with Condé Nast</li><li>Candid Technology <a href="https://candid.technology/shutterstock-getty-images-partner-openai-bria/">article</a> - Shutterstock and Getty partner with OpenAI and BRIA</li><li><a href="https://e2b.dev/">E2B</a></li><li><a href="https://docs.stripe.com/agents">Stripe agents</a></li><li><a href="https://robopair.org/">Robopair</a></li></ul><p><strong>Other Sources</strong></p><ul><li>CBS News <a href="https://www.cbsnews.com/news/google-ai-chatbot-threatening-message-human-please-die/">article</a> - Google AI chatbot responds with a threatening message: "Human … Please die."</li><li>Biometric Update <a href="https://www.biometricupdate.com/202406/travelers-to-eu-may-be-subjected-to-ai-lie-detector">article</a> - Travelers to EU may be subjected to AI lie detector</li><li>Techcrunch <a href="https://techcrunch.com/2024/11/15/openais-tumultuous-early-years-revealed-in-emails-from-musk-altman-and-others/">article</a> - OpenAI’s tumultuous early years revealed in emails from Musk, Altman, and others</li><li>Richard Ngo <a href="https://x.com/RichardMCNgo/status/1856843040427839804">Tweet</a> on leaving OpenAI</li></ul><p><br></p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:person role="Host" href="https://krawczuk.eu" img="https://img.transistorcdn.com/28Y0jOuxZfj9Mxy7Ap0s5-4FRY6iBNdW0gmiQyl5SAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82M2I1/YzVjMWFmNTlhYzA2/YTgyZGI0YTM3MWY5/Zjc5Mi5qcGVn.jpg">Igor Krawczuk</podcast:person>
      <podcast:person role="Host" href="https://jacob-haimes.github.io" img="https://img.transistorcdn.com/aB1ho3T9pmqLJiyQvdThtfo_OIB680463w__i8MoaRA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMDJl/MzFhZDZkYzc4Yjc0/NTQwNzA4OGRlMGU3/NzU3Ni5wbmc.jpg">Jacob Haimes</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/48619337/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>US National Security Memorandum on AI, Oct 2024</title>
      <itunes:episode>6</itunes:episode>
      <podcast:episode>6</podcast:episode>
      <itunes:title>US National Security Memorandum on AI, Oct 2024</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">fa9f5e3d-16b8-4a4b-acd2-215eba1b9729</guid>
      <link>https://kairos.fm/muckraikers/e006</link>
      <description>
        <![CDATA[<p>October 2024 saw a National Security Memorandum and a US framework for using AI in national security contexts. We go through the content so you don't have to, pull out the important bits, and summarize our main takeaways.</p><p></p><ul><li>(00:48) - The memorandum</li>
<li>(06:28) - What the press is saying</li>
<li>(10:39) - What's in the text</li>
<li>(13:48) - Potential harms</li>
<li>(17:32) - Miscellaneous notable stuff</li>
<li>(31:11) - What's the US government's take on AI?</li>
<li>(45:45) - The civil side - comments on reporting</li>
<li>(49:31) - The commenters</li>
<li>(01:07:33) - Our final hero</li>
<li>(01:10:46) - The muck</li>
</ul><br><strong><br>Links</strong><ul><li><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security/">United States National Security Memorandum on AI</a></li><li><a href="https://www.whitehouse.gov/briefing-room/statements-releases/2024/10/24/fact-sheet-biden-harris-administration-outlines-coordinated-approach-to-harness-power-of-ai-for-u-s-national-security/">Fact Sheet on the National Security Memorandum</a></li><li><a href="https://ai.gov/wp-content/uploads/2024/10/NSM-Framework-to-Advance-AI-Governance-and-Risk-Management-in-National-Security.pdf">Framework to Advance AI Governance and Risk Management in National Security</a></li></ul><p><strong>Related Media</strong></p><ul><li>CAIS Newsletter - <a href="https://newsletter.safe.ai/p/ai-safety-newsletter-43-white-house">AI Safety Newsletter #43</a></li><li>NIST report - <a href="https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf">Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile</a></li><li>ACLU press release - <a href="https://www.aclu.org/press-releases/aclu-warns-that-biden-harris-administration-rules-on-ai-in-national-security-lack-key-protections">ACLU Warns that Biden-Harris Administration Rules on AI in National Security Lack Key Protections</a></li><li>Wikipedia article - <a href="https://en.wikipedia.org/wiki/Presidential_memorandum">Presidential Memorandum</a></li><li>Reuters article - <a href="https://www.reuters.com/world/us/white-house-presses-govt-ai-use-with-eye-security-guardrails-2024-10-24/">White House presses gov't AI use with eye on security, guardrails</a></li><li>Forbes article - <a href="https://www.forbes.com/sites/jamesbroughel/2024/11/02/americas-ai-security-strategy-acknowledges-theres-no-stopping-ai/">America’s AI Security Strategy Acknowledges There’s No Stopping AI</a></li><li>DefenseScoop article - <a href="https://defensescoop.com/2024/10/24/national-security-memorandum-artificial-intelligence-dod-odni/">New White House directive prods DOD, intelligence agencies to move faster adopting AI capabilities</a></li><li>NYTimes article - <a href="https://www.nytimes.com/2024/10/24/us/politics/biden-government-guidelines-ai.html">Biden Administration Outlines Government ‘Guardrails’ for A.I. 
Tools</a></li><li>Forbes article - <a href="https://www.forbes.com/sites/johnwerner/2024/10/30/5-things-to-know-about-the-new-national-security-memorandum-on-ai--and-what-chatgpt-thinks/">5 Things To Know About The New National Security Memorandum On AI – And What ChatGPT Thinks</a></li><li>Federal News Network interview - <a href="https://federalnewsnetwork.com/artificial-intelligence/2024/10/a-look-inside-the-latest-white-house-artificial-intelligence-memo/">A look inside the latest White House artificial intelligence memo</a></li><li>Govtech article - <a href="https://www.govtech.com/artificial-intelligence/reactions-mostly-positive-to-national-security-ai-memo">Reactions Mostly Positive to National Security AI Memo</a></li><li>The Information article - <a href="https://www.theinformation.com/briefings/biden-memo-encourages-military-use-of-ai">Biden Memo Encourages Military Use of AI</a></li></ul><p><strong>Other Sources</strong></p><ul><li>Physical Intelligence press release - <a href="https://www.physicalintelligence.company/blog/pi0">π0: Our First Generalist Policy</a></li><li>OpenAI press release - <a href="https://openai.com/index/introducing-chatgpt-search/">Introducing ChatGPT Search</a></li><li><a href="https://www.whopooapp.com/">WhoPoo App</a>!!</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>October 2024 saw a National Security Memorandum and a US framework for using AI in national security contexts. We go through the content so you don't have to, pull out the important bits, and summarize our main takeaways.</p><p></p><ul><li>(00:48) - The memorandum</li>
<li>(06:28) - What the press is saying</li>
<li>(10:39) - What's in the text</li>
<li>(13:48) - Potential harms</li>
<li>(17:32) - Miscellaneous notable stuff</li>
<li>(31:11) - What's the US government's take on AI?</li>
<li>(45:45) - The civil side - comments on reporting</li>
<li>(49:31) - The commenters</li>
<li>(01:07:33) - Our final hero</li>
<li>(01:10:46) - The muck</li>
</ul><br><strong><br>Links</strong><ul><li><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security/">United States National Security Memorandum on AI</a></li><li><a href="https://www.whitehouse.gov/briefing-room/statements-releases/2024/10/24/fact-sheet-biden-harris-administration-outlines-coordinated-approach-to-harness-power-of-ai-for-u-s-national-security/">Fact Sheet on the National Security Memorandum</a></li><li><a href="https://ai.gov/wp-content/uploads/2024/10/NSM-Framework-to-Advance-AI-Governance-and-Risk-Management-in-National-Security.pdf">Framework to Advance AI Governance and Risk Management in National Security</a></li></ul><p><strong>Related Media</strong></p><ul><li>CAIS Newsletter - <a href="https://newsletter.safe.ai/p/ai-safety-newsletter-43-white-house">AI Safety Newsletter #43</a></li><li>NIST report - <a href="https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf">Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile</a></li><li>ACLU press release - <a href="https://www.aclu.org/press-releases/aclu-warns-that-biden-harris-administration-rules-on-ai-in-national-security-lack-key-protections">ACLU Warns that Biden-Harris Administration Rules on AI in National Security Lack Key Protections</a></li><li>Wikipedia article - <a href="https://en.wikipedia.org/wiki/Presidential_memorandum">Presidential Memorandum</a></li><li>Reuters article - <a href="https://www.reuters.com/world/us/white-house-presses-govt-ai-use-with-eye-security-guardrails-2024-10-24/">White House presses gov't AI use with eye on security, guardrails</a></li><li>Forbes article - <a href="https://www.forbes.com/sites/jamesbroughel/2024/11/02/americas-ai-security-strategy-acknowledges-theres-no-stopping-ai/">America’s AI Security Strategy Acknowledges There’s No Stopping AI</a></li><li>DefenseScoop article - <a href="https://defensescoop.com/2024/10/24/national-security-memorandum-artificial-intelligence-dod-odni/">New White House directive prods DOD, intelligence agencies to move faster adopting AI capabilities</a></li><li>NYTimes article - <a href="https://www.nytimes.com/2024/10/24/us/politics/biden-government-guidelines-ai.html">Biden Administration Outlines Government ‘Guardrails’ for A.I. 
Tools</a></li><li>Forbes article - <a href="https://www.forbes.com/sites/johnwerner/2024/10/30/5-things-to-know-about-the-new-national-security-memorandum-on-ai--and-what-chatgpt-thinks/">5 Things To Know About The New National Security Memorandum On AI – And What ChatGPT Thinks</a></li><li>Federal News Network interview - <a href="https://federalnewsnetwork.com/artificial-intelligence/2024/10/a-look-inside-the-latest-white-house-artificial-intelligence-memo/">A look inside the latest White House artificial intelligence memo</a></li><li>Govtech article - <a href="https://www.govtech.com/artificial-intelligence/reactions-mostly-positive-to-national-security-ai-memo">Reactions Mostly Positive to National Security AI Memo</a></li><li>The Information article - <a href="https://www.theinformation.com/briefings/biden-memo-encourages-military-use-of-ai">Biden Memo Encourages Military Use of AI</a></li></ul><p><strong>Other Sources</strong></p><ul><li>Physical Intelligence press release - <a href="https://www.physicalintelligence.company/blog/pi0">π0: Our First Generalist Policy</a></li><li>OpenAI press release - <a href="https://openai.com/index/introducing-chatgpt-search/">Introducing ChatGPT Search</a></li><li><a href="https://www.whopooapp.com/">WhoPoo App</a>!!</li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 06 Nov 2024 09:00:00 -0700</pubDate>
      <author>Jacob Haimes and Igor Krawczuk</author>
      <enclosure url="https://op3.dev/e/media.transistor.fm/5557694c/8244085b.mp3" length="55959641" type="audio/mpeg"/>
      <itunes:author>Jacob Haimes and Igor Krawczuk</itunes:author>
      <itunes:duration>4589</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>October 2024 saw a National Security Memorandum and a US framework for using AI in national security contexts. We go through the content so you don't have to, pull out the important bits, and summarize our main takeaways.</p><p></p><ul><li>(00:48) - The memorandum</li>
<li>(06:28) - What the press is saying</li>
<li>(10:39) - What's in the text</li>
<li>(13:48) - Potential harms</li>
<li>(17:32) - Miscellaneous notable stuff</li>
<li>(31:11) - What's the US government's take on AI?</li>
<li>(45:45) - The civil side - comments on reporting</li>
<li>(49:31) - The commenters</li>
<li>(01:07:33) - Our final hero</li>
<li>(01:10:46) - The muck</li>
</ul><br><strong><br>Links</strong><ul><li><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security/">United States National Security Memorandum on AI</a></li><li><a href="https://www.whitehouse.gov/briefing-room/statements-releases/2024/10/24/fact-sheet-biden-harris-administration-outlines-coordinated-approach-to-harness-power-of-ai-for-u-s-national-security/">Fact Sheet on the National Security Memorandum</a></li><li><a href="https://ai.gov/wp-content/uploads/2024/10/NSM-Framework-to-Advance-AI-Governance-and-Risk-Management-in-National-Security.pdf">Framework to Advance AI Governance and Risk Management in National Security</a></li></ul><p><strong>Related Media</strong></p><ul><li>CAIS Newsletter - <a href="https://newsletter.safe.ai/p/ai-safety-newsletter-43-white-house">AI Safety Newsletter #43</a></li><li>NIST report - <a href="https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf">Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile</a></li><li>ACLU press release - <a href="https://www.aclu.org/press-releases/aclu-warns-that-biden-harris-administration-rules-on-ai-in-national-security-lack-key-protections">ACLU Warns that Biden-Harris Administration Rules on AI in National Security Lack Key Protections</a></li><li>Wikipedia article - <a href="https://en.wikipedia.org/wiki/Presidential_memorandum">Presidential Memorandum</a></li><li>Reuters article - <a href="https://www.reuters.com/world/us/white-house-presses-govt-ai-use-with-eye-security-guardrails-2024-10-24/">White House presses gov't AI use with eye on security, guardrails</a></li><li>Forbes article - <a href="https://www.forbes.com/sites/jamesbroughel/2024/11/02/americas-ai-security-strategy-acknowledges-theres-no-stopping-ai/">America’s AI Security Strategy Acknowledges There’s No Stopping AI</a></li><li>DefenseScoop article - <a href="https://defensescoop.com/2024/10/24/national-security-memorandum-artificial-intelligence-dod-odni/">New White House directive prods DOD, intelligence agencies to move faster adopting AI capabilities</a></li><li>NYTimes article - <a href="https://www.nytimes.com/2024/10/24/us/politics/biden-government-guidelines-ai.html">Biden Administration Outlines Government ‘Guardrails’ for A.I. 
Tools</a></li><li>Forbes article - <a href="https://www.forbes.com/sites/johnwerner/2024/10/30/5-things-to-know-about-the-new-national-security-memorandum-on-ai--and-what-chatgpt-thinks/">5 Things To Know About The New National Security Memorandum On AI – And What ChatGPT Thinks</a></li><li>Federal News Network interview - <a href="https://federalnewsnetwork.com/artificial-intelligence/2024/10/a-look-inside-the-latest-white-house-artificial-intelligence-memo/">A look inside the latest White House artificial intelligence memo</a></li><li>Govtech article - <a href="https://www.govtech.com/artificial-intelligence/reactions-mostly-positive-to-national-security-ai-memo">Reactions Mostly Positive to National Security AI Memo</a></li><li>The Information article - <a href="https://www.theinformation.com/briefings/biden-memo-encourages-military-use-of-ai">Biden Memo Encourages Military Use of AI</a></li></ul><p><strong>Other Sources</strong></p><ul><li>Physical Intelligence press release - <a href="https://www.physicalintelligence.company/blog/pi0">π0: Our First Generalist Policy</a></li><li>OpenAI press release - <a href="https://openai.com/index/introducing-chatgpt-search/">Introducing ChatGPT Search</a></li><li><a href="https://www.whopooapp.com/">WhoPoo App</a>!!</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, AI governance, United States, national security</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:person role="Host" href="https://krawczuk.eu" img="https://img.transistorcdn.com/28Y0jOuxZfj9Mxy7Ap0s5-4FRY6iBNdW0gmiQyl5SAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82M2I1/YzVjMWFmNTlhYzA2/YTgyZGI0YTM3MWY5/Zjc5Mi5qcGVn.jpg">Igor Krawczuk</podcast:person>
      <podcast:person role="Host" href="https://jacob-haimes.github.io" img="https://img.transistorcdn.com/aB1ho3T9pmqLJiyQvdThtfo_OIB680463w__i8MoaRA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMDJl/MzFhZDZkYzc4Yjc0/NTQwNzA4OGRlMGU3/NzU3Ni5wbmc.jpg">Jacob Haimes</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/5557694c/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Understanding Claude 3.5 Sonnet (New)</title>
      <itunes:episode>5</itunes:episode>
      <podcast:episode>5</podcast:episode>
      <itunes:title>Understanding Claude 3.5 Sonnet (New)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">cdcc34b9-a80d-4c3a-85fc-9e26c452d091</guid>
      <link>https://kairos.fm/muckraikers/e005</link>
      <description>
        <![CDATA[<p>Frontier developers continue their war on sane versioning schemas to bring us Claude 3.5 Sonnet (New), along with "computer use" capabilities. We discuss not only the new model, but also why Anthropic may have released this model and tool combination now.</p><p><br></p><ul><li>(00:00) - Intro</li>
<li>(00:22) - Hot off the press</li>
<li>(05:03) - Claude 3.5 Sonnet (New) Two 'o' 3000</li>
<li>(09:23) - Breaking down "computer use"</li>
<li>(13:16) - Our understanding</li>
<li>(16:03) - Diverging business models</li>
<li>(32:07) - Why has Anthropic chosen this strategy?</li>
<li>(43:14) - Changing the frame</li>
<li>(48:00) - Polishing the lily</li>
</ul><p><strong>Links</strong></p><ul><li>Anthropic press release - <a href="https://www.anthropic.com/news/3-5-models-and-computer-use">Introducing Claude 3.5 Sonnet (New)</a></li><li><a href="https://assets.anthropic.com/m/61e7d27f8c8f5919/original/Claude-3-Model-Card.pdf#page=51.12">Model Card Addendum</a></li></ul><p><strong>Other Anthropic Relevant Media</strong></p><ul><li>Paper - <a href="https://assets.anthropic.com/m/377027d5b36ac1eb/original/Sabotage-Evaluations-for-Frontier-Models.pdf">Sabotage Evaluations for Frontier Models</a></li><li>Anthropic press release - <a href="https://www.anthropic.com/rsp-updates">Anthropic's Updated RSP</a></li><li>Alignment Forum blogpost - <a href="https://www.alignmentforum.org/posts/Q7caj7emnwWBxLECF/anthropic-s-updated-responsible-scaling-policy">Anthropic's Updated RSP</a></li><li>Tweet - <a href="https://x.com/catherineols/status/1849654577089364180">Response to scare regarding Anthropic training on user data</a></li><li>Anthropic press release - <a href="https://www.anthropic.com/news/developing-computer-use">Developing a computer use model</a></li><li>Simon Willison article - <a href="https://simonwillison.net/2024/Oct/22/computer-use/">Initial explorations of Anthropic’s new Computer Use capability</a></li><li>Tweet - <a href="https://x.com/arcprize/status/1849225898391933148">ARC Prize performance</a></li><li>The Information article - <a href="https://www.theinformation.com/articles/openai-rival-anthropic-has-floated-40-billion-valuation-in-early-talks-about-new-funding?rc=tgppn6">Anthropic Has Floated $40 Billion Valuation in Funding Talks</a></li></ul><p><strong>Other Sources</strong></p><ul><li>LWN.net article - <a href="https://lwn.net/SubscriberLink/995159/a37fb9817a00ebcb/">OSI readies controversial Open AI definition</a></li><li><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security/">National Security Memorandum</a></li><li><a href="https://ai.gov/wp-content/uploads/2024/10/NSM-Framework-to-Advance-AI-Governance-and-Risk-Management-in-National-Security.pdf">Framework to Advance AI Governance and Risk Management in National Security</a></li><li>Reuters article - <a href="https://www.reuters.com/legal/mother-sues-ai-chatbot-company-characterai-google-sued-over-sons-suicide-2024-10-23/">Mother sues AI chatbot company Character.AI, Google over son's suicide</a></li><li>Medium article - <a href="https://medium.com/@peakji/a-small-step-towards-reproducing-openai-o1-b9a756a00855">A Small Step Towards Reproducing OpenAI o1: Progress Report on the Steiner Open Source Models</a></li><li>The Guardian article - <a href="https://www.theguardian.com/technology/2018/jan/12/google-racism-ban-gorilla-black-people">Google's solution to accidental algorithmic racism: ban gorillas</a></li><li>TIME article - <a href="https://time.com/6836153/ethical-ai-google-gemini-debacle/">Ethical AI Isn’t to Blame for Google’s Gemini Debacle</a></li><li>Latacora article - <a href="https://www.latacora.com/blog/2020/03/12/the-soc-starting/">The SOC2 Starting Seven</a></li><li>Grandview Research market trends - <a href="https://www.grandviewresearch.com/industry-analysis/robotic-process-automation-rpa-market">Robotic Process Automation Market Trends</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Frontier developers continue their war on sane versioning schemas to bring us Claude 3.5 Sonnet (New), along with "computer use" capabilities. We discuss not only the new model, but also why Anthropic may have released this model and tool combination now.</p><p><br></p><ul><li>(00:00) - Intro</li>
<li>(00:22) - Hot off the press</li>
<li>(05:03) - Claude 3.5 Sonnet (New) Two 'o' 3000</li>
<li>(09:23) - Breaking down "computer use"</li>
<li>(13:16) - Our understanding</li>
<li>(16:03) - Diverging business models</li>
<li>(32:07) - Why has Anthropic chosen this strategy?</li>
<li>(43:14) - Changing the frame</li>
<li>(48:00) - Polishing the lily</li>
</ul><p><strong>Links</strong></p><ul><li>Anthropic press release - <a href="https://www.anthropic.com/news/3-5-models-and-computer-use">Introducing Claude 3.5 Sonnet (New)</a></li><li><a href="https://assets.anthropic.com/m/61e7d27f8c8f5919/original/Claude-3-Model-Card.pdf#page=51.12">Model Card Addendum</a></li></ul><p><strong>Other Anthropic Relevant Media</strong></p><ul><li>Paper - <a href="https://assets.anthropic.com/m/377027d5b36ac1eb/original/Sabotage-Evaluations-for-Frontier-Models.pdf">Sabotage Evaluations for Frontier Models</a></li><li>Anthropic press release - <a href="https://www.anthropic.com/rsp-updates">Anthropic's Updated RSP</a></li><li>Alignment Forum blogpost - <a href="https://www.alignmentforum.org/posts/Q7caj7emnwWBxLECF/anthropic-s-updated-responsible-scaling-policy">Anthropic's Updated RSP</a></li><li>Tweet - <a href="https://x.com/catherineols/status/1849654577089364180">Response to scare regarding Anthropic training on user data</a></li><li>Anthropic press release - <a href="https://www.anthropic.com/news/developing-computer-use">Developing a computer use model</a></li><li>Simon Willison article - <a href="https://simonwillison.net/2024/Oct/22/computer-use/">Initial explorations of Anthropic’s new Computer Use capability</a></li><li>Tweet - <a href="https://x.com/arcprize/status/1849225898391933148">ARC Prize performance</a></li><li>The Information article - <a href="https://www.theinformation.com/articles/openai-rival-anthropic-has-floated-40-billion-valuation-in-early-talks-about-new-funding?rc=tgppn6">Anthropic Has Floated $40 Billion Valuation in Funding Talks</a></li></ul><p><strong>Other Sources</strong></p><ul><li>LWN.net article - <a href="https://lwn.net/SubscriberLink/995159/a37fb9817a00ebcb/">OSI readies controversial Open AI definition</a></li><li><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security/">National Security Memorandum</a></li><li><a href="https://ai.gov/wp-content/uploads/2024/10/NSM-Framework-to-Advance-AI-Governance-and-Risk-Management-in-National-Security.pdf">Framework to Advance AI Governance and Risk Management in National Security</a></li><li>Reuters article - <a href="https://www.reuters.com/legal/mother-sues-ai-chatbot-company-characterai-google-sued-over-sons-suicide-2024-10-23/">Mother sues AI chatbot company Character.AI, Google over son's suicide</a></li><li>Medium article - <a href="https://medium.com/@peakji/a-small-step-towards-reproducing-openai-o1-b9a756a00855">A Small Step Towards Reproducing OpenAI o1: Progress Report on the Steiner Open Source Models</a></li><li>The Guardian article - <a href="https://www.theguardian.com/technology/2018/jan/12/google-racism-ban-gorilla-black-people">Google's solution to accidental algorithmic racism: ban gorillas</a></li><li>TIME article - <a href="https://time.com/6836153/ethical-ai-google-gemini-debacle/">Ethical AI Isn’t to Blame for Google’s Gemini Debacle</a></li><li>Latacora article - <a href="https://www.latacora.com/blog/2020/03/12/the-soc-starting/">The SOC2 Starting Seven</a></li><li>Grandview Research market trends - <a href="https://www.grandviewresearch.com/industry-analysis/robotic-process-automation-rpa-market">Robotic Process Automation Market Trends</a></li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 30 Oct 2024 10:00:00 -0600</pubDate>
      <author>Jacob Haimes and Igor Krawczuk</author>
      <enclosure url="https://op3.dev/e/media.transistor.fm/9e6184c3/79967e11.mp3" length="42908388" type="audio/mpeg"/>
      <itunes:author>Jacob Haimes and Igor Krawczuk</itunes:author>
      <itunes:duration>3654</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Frontier developers continue their war on sane versioning schemas to bring us Claude 3.5 Sonnet (New), along with "computer use" capabilities. We discuss not only the new model, but also why Anthropic may have released this model and tool combination now.</p><p><br></p><ul><li>(00:00) - Intro</li>
<li>(00:22) - Hot off the press</li>
<li>(05:03) - Claude 3.5 Sonnet (New) Two 'o' 3000</li>
<li>(09:23) - Breaking down "computer use"</li>
<li>(13:16) - Our understanding</li>
<li>(16:03) - Diverging business models</li>
<li>(32:07) - Why has Anthropic chosen this strategy?</li>
<li>(43:14) - Changing the frame</li>
<li>(48:00) - Polishing the lily</li>
</ul><p><strong>Links</strong></p><ul><li>Anthropic press release - <a href="https://www.anthropic.com/news/3-5-models-and-computer-use">Introducing Claude 3.5 Sonnet (New)</a></li><li><a href="https://assets.anthropic.com/m/61e7d27f8c8f5919/original/Claude-3-Model-Card.pdf#page=51.12">Model Card Addendum</a></li></ul><p><strong>Other Anthropic Relevant Media</strong></p><ul><li>Paper - <a href="https://assets.anthropic.com/m/377027d5b36ac1eb/original/Sabotage-Evaluations-for-Frontier-Models.pdf">Sabotage Evaluations for Frontier Models</a></li><li>Anthropic press release - <a href="https://www.anthropic.com/rsp-updates">Anthropic's Updated RSP</a></li><li>Alignment Forum blogpost - <a href="https://www.alignmentforum.org/posts/Q7caj7emnwWBxLECF/anthropic-s-updated-responsible-scaling-policy">Anthropic's Updated RSP</a></li><li>Tweet - <a href="https://x.com/catherineols/status/1849654577089364180">Response to scare regarding Anthropic training on user data</a></li><li>Anthropic press release - <a href="https://www.anthropic.com/news/developing-computer-use">Developing a computer use model</a></li><li>Simon Willison article - <a href="https://simonwillison.net/2024/Oct/22/computer-use/">Initial explorations of Anthropic’s new Computer Use capability</a></li><li>Tweet - <a href="https://x.com/arcprize/status/1849225898391933148">ARC Prize performance</a></li><li>The Information article - <a href="https://www.theinformation.com/articles/openai-rival-anthropic-has-floated-40-billion-valuation-in-early-talks-about-new-funding?rc=tgppn6">Anthropic Has Floated $40 Billion Valuation in Funding Talks</a></li></ul><p><strong>Other Sources</strong></p><ul><li>LWN.net article - <a href="https://lwn.net/SubscriberLink/995159/a37fb9817a00ebcb/">OSI readies controversial Open AI definition</a></li><li><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security/">National Security Memorandum</a></li><li><a href="https://ai.gov/wp-content/uploads/2024/10/NSM-Framework-to-Advance-AI-Governance-and-Risk-Management-in-National-Security.pdf">Framework to Advance AI Governance and Risk Management in National Security</a></li><li>Reuters article - <a href="https://www.reuters.com/legal/mother-sues-ai-chatbot-company-characterai-google-sued-over-sons-suicide-2024-10-23/">Mother sues AI chatbot company Character.AI, Google over son's suicide</a></li><li>Medium article - <a href="https://medium.com/@peakji/a-small-step-towards-reproducing-openai-o1-b9a756a00855">A Small Step Towards Reproducing OpenAI o1: Progress Report on the Steiner Open Source Models</a></li><li>The Guardian article - <a href="https://www.theguardian.com/technology/2018/jan/12/google-racism-ban-gorilla-black-people">Google's solution to accidental algorithmic racism: ban gorillas</a></li><li>TIME article - <a href="https://time.com/6836153/ethical-ai-google-gemini-debacle/">Ethical AI Isn’t to Blame for Google’s Gemini Debacle</a></li><li>Latacora article - <a href="https://www.latacora.com/blog/2020/03/12/the-soc-starting/">The SOC2 Starting Seven</a></li><li>Grandview Research market trends - <a href="https://www.grandviewresearch.com/industry-analysis/robotic-process-automation-rpa-market">Robotic Process Automation Market Trends</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, Claude, Anthropic</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:person role="Host" href="https://krawczuk.eu" img="https://img.transistorcdn.com/28Y0jOuxZfj9Mxy7Ap0s5-4FRY6iBNdW0gmiQyl5SAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82M2I1/YzVjMWFmNTlhYzA2/YTgyZGI0YTM3MWY5/Zjc5Mi5qcGVn.jpg">Igor Krawczuk</podcast:person>
      <podcast:person role="Host" href="https://jacob-haimes.github.io" img="https://img.transistorcdn.com/aB1ho3T9pmqLJiyQvdThtfo_OIB680463w__i8MoaRA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMDJl/MzFhZDZkYzc4Yjc0/NTQwNzA4OGRlMGU3/NzU3Ni5wbmc.jpg">Jacob Haimes</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/9e6184c3/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Winter is Coming for OpenAI</title>
      <itunes:episode>4</itunes:episode>
      <podcast:episode>4</podcast:episode>
      <itunes:title>Winter is Coming for OpenAI</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">fa206521-d22b-41ae-9904-57c76db2dc42</guid>
      <link>https://kairos.fm/muckraikers/e004</link>
      <description>
        <![CDATA[<p>Brace yourselves, winter is coming for OpenAI - at least, that's what we think. In this episode, we look at OpenAI's recent massive funding round and ask "why would anyone want to fund a company that is set to lose a net 5 billion USD in 2024?" We scrape through a whole lot of muck to find the meaningful signals in all this news, and there is a lot of it, so get ready!</p><p><br></p><ul><li>(00:00) - Intro</li>
<li>(00:28) - Hot off the press</li>
<li>(02:43) - Why listen?</li>
<li>(06:07) - Why might VCs invest?</li>
<li>(15:52) - What are people saying</li>
<li>(23:10) - How *is* OpenAI making money?</li>
<li>(28:18) - Is AI hype dying?</li>
<li>(41:08) - Why might big companies invest?</li>
<li>(48:47) - Concrete impacts of AI</li>
<li>(52:37) - Outcome 1: OpenAI as a commodity</li>
<li>(01:04:02) - Outcome 2: AGI</li>
<li>(01:04:42) - Outcome 3: best plausible case</li>
<li>(01:07:53) - Outcome 1*: many ways to bust</li>
<li>(01:10:51) - Outcome 4+: shock factor</li>
<li>(01:12:51) - What's the muck</li>
<li>(01:21:17) - Extended outro</li>
</ul><strong><p>Links</p></strong><ul><li>Reuters article - <a href="https://www.reuters.com/technology/artificial-intelligence/openai-closes-66-billion-funding-haul-valuation-157-billion-with-investment-2024-10-02/">OpenAI closes $6.6 billion funding haul with investment from Microsoft and Nvidia</a></li><li>Goldman Sachs report - <a href="https://www.goldmansachs.com/images/migrated/insights/pages/gs-research/gen-ai--too-much-spend%2C-too-little-benefit-/TOM_AI%202.0_ForRedaction.pdf">GenAI: Too Much Spend, Too Little Benefit</a></li><li>Apricitas Economics article - <a href="https://www.apricitas.io/p/the-ai-investment-boom">The AI Investment Boom</a></li><li>Discussion of "The AI Investment Boom" on <a href="https://news.ycombinator.com/item?id=41895746">YCombinator</a></li><li><a href="https://hai.stanford.edu/news/ai-index-state-ai-13-charts">State of AI in 13 Charts</a></li><li>Fortune article - <a href="https://finance.yahoo.com/news/openai-sees-5-billion-loss-170306927.html">OpenAI sees $5 billion loss in 2024 and soaring sales as big ChatGPT fee hikes planned, report says</a></li></ul><p><strong>More on AI Hype (Dying)</strong></p><ul><li>Latent Space article - <a href="https://www.latent.space/p/mar-jun-2024">The Winds of AI Winter</a></li><li>Article by Gary Marcus - <a href="https://garymarcus.substack.com/p/the-great-ai-retrenchment-has-begun">The Great AI Retrenchment has Begun</a></li><li>TimmermanReport article - <a href="https://timmermanreport.com/2024/07/ai-if-not-now-when-no-really-when/">AI: If Not Now, When? No, Really - When?</a></li><li>MIT News article - <a href="https://news.mit.edu/2023/who-will-benefit-ai-machine-usefulness-0929">Who Will Benefit from AI?</a></li><li>Washington Post article - <a href="https://www.washingtonpost.com/technology/2024/04/18/ai-bubble-hype-dying-money/">The AI Hype bubble is deflating. Now comes the hard part.</a></li><li>Andreesen Horowitz article - <a href="https://a16z.com/ai-will-save-the-world/">Why AI Will Save the World</a></li></ul><p><strong>Other Sources</strong></p><ul><li>Human-Centered Artificial Intelligence <a href="https://hai.stanford.edu/news/introducing-foundation-model-transparency-index">Foundation Model Transparency Index</a></li><li>Cointelegraph article - <a href="https://cointelegraph.com/news/europe-gathers-global-experts-to-draft-code-of-practice-for-gen-ai">Europe gathers global experts to draft ‘Code of Practice’ for AI</a></li><li>Reuters article - <a href="https://www.reuters.com/technology/microsofts-vp-genai-research-join-openai-2024-10-14/">Microsoft's VP of GenAI research to join OpenAI</a></li><li>Twitter post from <a href="https://x.com/_tim_brooks/status/1841982327431561528">Tim Brooks</a> on joining DeepMind</li><li>Edward Zitron article - <a href="https://www.wheresyoured.at/the-men-who-killed-google/">The Man Who Killed Google Search</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Brace yourselves, winter is coming for OpenAI - at least, that's what we think. In this episode, we look at OpenAI's recent massive funding round and ask "why would anyone want to fund a company that is set to lose a net 5 billion USD in 2024?" We scrape through a whole lot of muck to find the meaningful signals in all this news, and there is a lot of it, so get ready!</p><p><br></p><ul><li>(00:00) - Intro</li>
<li>(00:28) - Hot off the press</li>
<li>(02:43) - Why listen?</li>
<li>(06:07) - Why might VCs invest?</li>
<li>(15:52) - What are people saying</li>
<li>(23:10) - How *is* OpenAI making money?</li>
<li>(28:18) - Is AI hype dying?</li>
<li>(41:08) - Why might big companies invest?</li>
<li>(48:47) - Concrete impacts of AI</li>
<li>(52:37) - Outcome 1: OpenAI as a commodity</li>
<li>(01:04:02) - Outcome 2: AGI</li>
<li>(01:04:42) - Outcome 3: best plausible case</li>
<li>(01:07:53) - Outcome 1*: many ways to bust</li>
<li>(01:10:51) - Outcome 4+: shock factor</li>
<li>(01:12:51) - What's the muck</li>
<li>(01:21:17) - Extended outro</li>
</ul><strong><p>Links</p></strong><ul><li>Reuters article - <a href="https://www.reuters.com/technology/artificial-intelligence/openai-closes-66-billion-funding-haul-valuation-157-billion-with-investment-2024-10-02/">OpenAI closes $6.6 billion funding haul with investment from Microsoft and Nvidia</a></li><li>Goldman Sachs report - <a href="https://www.goldmansachs.com/images/migrated/insights/pages/gs-research/gen-ai--too-much-spend%2C-too-little-benefit-/TOM_AI%202.0_ForRedaction.pdf">GenAI: Too Much Spend, Too Little Benefit</a></li><li>Apricitas Economics article - <a href="https://www.apricitas.io/p/the-ai-investment-boom">The AI Investment Boom</a></li><li>Discussion of "The AI Investment Boom" on <a href="https://news.ycombinator.com/item?id=41895746">YCombinator</a></li><li><a href="https://hai.stanford.edu/news/ai-index-state-ai-13-charts">State of AI in 13 Charts</a></li><li>Fortune article - <a href="https://finance.yahoo.com/news/openai-sees-5-billion-loss-170306927.html">OpenAI sees $5 billion loss in 2024 and soaring sales as big ChatGPT fee hikes planned, report says</a></li></ul><p><strong>More on AI Hype (Dying)</strong></p><ul><li>Latent Space article - <a href="https://www.latent.space/p/mar-jun-2024">The Winds of AI Winter</a></li><li>Article by Gary Marcus - <a href="https://garymarcus.substack.com/p/the-great-ai-retrenchment-has-begun">The Great AI Retrenchment has Begun</a></li><li>TimmermanReport article - <a href="https://timmermanreport.com/2024/07/ai-if-not-now-when-no-really-when/">AI: If Not Now, When? No, Really - When?</a></li><li>MIT News article - <a href="https://news.mit.edu/2023/who-will-benefit-ai-machine-usefulness-0929">Who Will Benefit from AI?</a></li><li>Washington Post article - <a href="https://www.washingtonpost.com/technology/2024/04/18/ai-bubble-hype-dying-money/">The AI Hype bubble is deflating. Now comes the hard part.</a></li><li>Andreesen Horowitz article - <a href="https://a16z.com/ai-will-save-the-world/">Why AI Will Save the World</a></li></ul><p><strong>Other Sources</strong></p><ul><li>Human-Centered Artificial Intelligence <a href="https://hai.stanford.edu/news/introducing-foundation-model-transparency-index">Foundation Model Transparency Index</a></li><li>Cointelegraph article - <a href="https://cointelegraph.com/news/europe-gathers-global-experts-to-draft-code-of-practice-for-gen-ai">Europe gathers global experts to draft ‘Code of Practice’ for AI</a></li><li>Reuters article - <a href="https://www.reuters.com/technology/microsofts-vp-genai-research-join-openai-2024-10-14/">Microsoft's VP of GenAI research to join OpenAI</a></li><li>Twitter post from <a href="https://x.com/_tim_brooks/status/1841982327431561528">Tim Brooks</a> on joining DeepMind</li><li>Edward Zitron article - <a href="https://www.wheresyoured.at/the-men-who-killed-google/">The Man Who Killed Google Search</a></li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 22 Oct 2024 09:00:00 -0600</pubDate>
      <author>Jacob Haimes and Igor Krawczuk</author>
      <enclosure url="https://op3.dev/e/media.transistor.fm/fcc8770a/00dd5ab1.mp3" length="62809363" type="audio/mpeg"/>
      <itunes:author>Jacob Haimes and Igor Krawczuk</itunes:author>
      <itunes:duration>4957</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Brace yourselves, winter is coming for OpenAI - at least, that's what we think. In this episode, we look at OpenAI's recent massive funding round and ask "why would anyone want to fund a company that is set to lose a net 5 billion USD in 2024?" We scrape through a whole lot of muck to find the meaningful signals in all this news, and there is a lot of it, so get ready!</p><p><br></p><ul><li>(00:00) - Intro</li>
<li>(00:28) - Hot off the press</li>
<li>(02:43) - Why listen?</li>
<li>(06:07) - Why might VCs invest?</li>
<li>(15:52) - What are people saying</li>
<li>(23:10) - How *is* OpenAI making money?</li>
<li>(28:18) - Is AI hype dying?</li>
<li>(41:08) - Why might big companies invest?</li>
<li>(48:47) - Concrete impacts of AI</li>
<li>(52:37) - Outcome 1: OpenAI as a commodity</li>
<li>(01:04:02) - Outcome 2: AGI</li>
<li>(01:04:42) - Outcome 3: best plausible case</li>
<li>(01:07:53) - Outcome 1*: many ways to bust</li>
<li>(01:10:51) - Outcome 4+: shock factor</li>
<li>(01:12:51) - What's the muck</li>
<li>(01:21:17) - Extended outro</li>
</ul><strong><p>Links</p></strong><ul><li>Reuters article - <a href="https://www.reuters.com/technology/artificial-intelligence/openai-closes-66-billion-funding-haul-valuation-157-billion-with-investment-2024-10-02/">OpenAI closes $6.6 billion funding haul with investment from Microsoft and Nvidia</a></li><li>Goldman Sachs report - <a href="https://www.goldmansachs.com/images/migrated/insights/pages/gs-research/gen-ai--too-much-spend%2C-too-little-benefit-/TOM_AI%202.0_ForRedaction.pdf">GenAI: Too Much Spend, Too Little Benefit</a></li><li>Apricitas Economics article - <a href="https://www.apricitas.io/p/the-ai-investment-boom">The AI Investment Boom</a></li><li>Discussion of "The AI Investment Boom" on <a href="https://news.ycombinator.com/item?id=41895746">YCombinator</a></li><li><a href="https://hai.stanford.edu/news/ai-index-state-ai-13-charts">State of AI in 13 Charts</a></li><li>Fortune article - <a href="https://finance.yahoo.com/news/openai-sees-5-billion-loss-170306927.html">OpenAI sees $5 billion loss in 2024 and soaring sales as big ChatGPT fee hikes planned, report says</a></li></ul><p><strong>More on AI Hype (Dying)</strong></p><ul><li>Latent Space article - <a href="https://www.latent.space/p/mar-jun-2024">The Winds of AI Winter</a></li><li>Article by Gary Marcus - <a href="https://garymarcus.substack.com/p/the-great-ai-retrenchment-has-begun">The Great AI Retrenchment has Begun</a></li><li>TimmermanReport article - <a href="https://timmermanreport.com/2024/07/ai-if-not-now-when-no-really-when/">AI: If Not Now, When? No, Really - When?</a></li><li>MIT News article - <a href="https://news.mit.edu/2023/who-will-benefit-ai-machine-usefulness-0929">Who Will Benefit from AI?</a></li><li>Washington Post article - <a href="https://www.washingtonpost.com/technology/2024/04/18/ai-bubble-hype-dying-money/">The AI Hype bubble is deflating. Now comes the hard part.</a></li><li>Andreesen Horowitz article - <a href="https://a16z.com/ai-will-save-the-world/">Why AI Will Save the World</a></li></ul><p><strong>Other Sources</strong></p><ul><li>Human-Centered Artificial Intelligence <a href="https://hai.stanford.edu/news/introducing-foundation-model-transparency-index">Foundation Model Transparency Index</a></li><li>Cointelegraph article - <a href="https://cointelegraph.com/news/europe-gathers-global-experts-to-draft-code-of-practice-for-gen-ai">Europe gathers global experts to draft ‘Code of Practice’ for AI</a></li><li>Reuters article - <a href="https://www.reuters.com/technology/microsofts-vp-genai-research-join-openai-2024-10-14/">Microsoft's VP of GenAI research to join OpenAI</a></li><li>Twitter post from <a href="https://x.com/_tim_brooks/status/1841982327431561528">Tim Brooks</a> on joining DeepMind</li><li>Edward Zitron article - <a href="https://www.wheresyoured.at/the-men-who-killed-google/">The Man Who Killed Google Search</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:person role="Host" href="https://krawczuk.eu" img="https://img.transistorcdn.com/28Y0jOuxZfj9Mxy7Ap0s5-4FRY6iBNdW0gmiQyl5SAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82M2I1/YzVjMWFmNTlhYzA2/YTgyZGI0YTM3MWY5/Zjc5Mi5qcGVn.jpg">Igor Krawczuk</podcast:person>
      <podcast:person role="Host" href="https://jacob-haimes.github.io" img="https://img.transistorcdn.com/aB1ho3T9pmqLJiyQvdThtfo_OIB680463w__i8MoaRA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMDJl/MzFhZDZkYzc4Yjc0/NTQwNzA4OGRlMGU3/NzU3Ni5wbmc.jpg">Jacob Haimes</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/fcc8770a/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Open Source AI and 2024 Nobel Prizes</title>
      <itunes:episode>3</itunes:episode>
      <podcast:episode>3</podcast:episode>
      <itunes:title>Open Source AI and 2024 Nobel Prizes</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a75b2dd2-82b3-4082-ac4f-f04b3f1f5262</guid>
      <link>https://kairos.fm/muckraikers/e003</link>
      <description>
        <![CDATA[<p>The Open Source AI Definition is out after years of drafting; will it reestablish brand meaning for the “Open Source” term? Also, the 2024 Nobel Prizes in Physics and Chemistry are heavily tied to AI; we scrutinize not only this year's prizes, but also Nobel Prizes as a concept.</p><p> </p><ul><li>(00:00) - Intro</li>
<li>(00:30) - Hot off the press</li>
<li>(03:45) - Open Source AI background</li>
<li>(10:30) - Definitions and changes in RC1</li>
<li>(18:36) - “Business source”</li>
<li>(22:17) - Parallels with legislation</li>
<li>(26:22) - Impacts of the OSAID</li>
<li>(33:58) - 2024 Nobel Prize Context</li>
<li>(37:21) - Chemistry prize</li>
<li>(45:06) - Physics prize</li>
<li>(50:29) - Takeaways</li>
<li>(52:03) - What’s the real muck?</li>
<li>(01:00:27) - Outro</li>
</ul><br><p><strong>Links</strong></p><ul><li><a href="https://opensource.org/deepdive/drafts/the-open-source-ai-definition-1-0-rc1">Open Source AI Definition, Release Candidate 1</a></li><li><a href="https://opensource.org/blog/the-open-source-ai-definition-v-1-0-rc1-is-available-for-comments">OSAID RC1 announcement</a></li><li><a href="https://www.nobelprize.org/all-nobel-prizes-2024/">All Nobel Prizes 2024</a></li></ul><p><strong>More Reading on Open Source AI</strong></p><ul><li>Kairos.FM article - <a href="https://kairos.fm/posts/blogposts/open-source-ai-is-a-lie/">Open Source AI is a lie, but it doesn't have to be</a></li><li>The Register article - <a href="https://www.theregister.com/2024/09/14/opinion_column_osi/">The open source AI civil war approaches</a></li><li>MIT Technology Review article - <a href="https://www.technologyreview.com/2024/08/22/1097224/we-finally-have-a-definition-for-open-source-ai/">We finally have a definition for open-source AI</a></li></ul><p><strong>On Nobel Prizes</strong></p><ul><li>Paper - <a href="https://paulnovosad.com/pdf/nobel-prizes.pdf">Access to Opportunity in the Sciences: Evidence from the Nobel Laureates</a></li><li>Physics prize - <a href="https://www.nobelprize.org/uploads/2024/09/advanced-physicsprize2024.pdf">scientific background</a>, <a href="https://www.nobelprize.org/prizes/physics/2024/popular-information/">popular info</a></li><li>Chemistry prize - <a href="https://www.nobelprize.org/uploads/2024/10/advanced-chemistryprize2024.pdf">scientific background</a>, <a href="https://www.nobelprize.org/prizes/chemistry/2024/popular-information/">popular info</a></li><li>Reuters article - <a href="https://www.reuters.com/technology/artificial-intelligence/googles-nobel-prize-winners-stir-debate-over-ai-research-2024-10-10/">Google's Nobel prize winners stir debate over AI research</a></li><li>Wikipedia article - <a href="https://en.wikipedia.org/wiki/Nobel_disease">Nobel disease</a></li></ul><p><strong>Other Sources</strong></p><ul><li>Pivot.ai article - <a href="https://pivot-to-ai.com/2024/09/30/people-are-blatantly-stealing-my-work-ai-artist-complains/">People are ‘blatantly stealing my work,’ AI artist complains</a></li><li>Paper - <a href="https://arxiv.org/abs/2410.05229">GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models</a></li><li>Paper - <a href="https://link.springer.com/article/10.1007/s42113-024-00217-5">Reclaiming AI as a Theoretical Tool for Cognitive Science | Computational Brain &amp; Behavior</a></li></ul><p> </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>The Open Source AI Definition is out after years of drafting; will it reestablish brand meaning for the “Open Source” term? Also, the 2024 Nobel Prizes in Physics and Chemistry are heavily tied to AI; we scrutinize not only this year's prizes, but also Nobel Prizes as a concept.</p><p> </p><ul><li>(00:00) - Intro</li>
<li>(00:30) - Hot off the press</li>
<li>(03:45) - Open Source AI background</li>
<li>(10:30) - Definitions and changes in RC1</li>
<li>(18:36) - “Business source”</li>
<li>(22:17) - Parallels with legislation</li>
<li>(26:22) - Impacts of the OSAID</li>
<li>(33:58) - 2024 Nobel Prize Context</li>
<li>(37:21) - Chemistry prize</li>
<li>(45:06) - Physics prize</li>
<li>(50:29) - Takeaways</li>
<li>(52:03) - What’s the real muck?</li>
<li>(01:00:27) - Outro</li>
</ul><br><p><strong>Links</strong></p><ul><li><a href="https://opensource.org/deepdive/drafts/the-open-source-ai-definition-1-0-rc1">Open Source AI Definition, Release Candidate 1</a></li><li><a href="https://opensource.org/blog/the-open-source-ai-definition-v-1-0-rc1-is-available-for-comments">OSAID RC1 announcement</a></li><li><a href="https://www.nobelprize.org/all-nobel-prizes-2024/">All Nobel Prizes 2024</a></li></ul><p><strong>More Reading on Open Source AI</strong></p><ul><li>Kairos.FM article - <a href="https://kairos.fm/posts/blogposts/open-source-ai-is-a-lie/">Open Source AI is a lie, but it doesn't have to be</a></li><li>The Register article - <a href="https://www.theregister.com/2024/09/14/opinion_column_osi/">The open source AI civil war approaches</a></li><li>MIT Technology Review article - <a href="https://www.technologyreview.com/2024/08/22/1097224/we-finally-have-a-definition-for-open-source-ai/">We finally have a definition for open-source AI</a></li></ul><p><strong>On Nobel Prizes</strong></p><ul><li>Paper - <a href="https://paulnovosad.com/pdf/nobel-prizes.pdf">Access to Opportunity in the Sciences: Evidence from the Nobel Laureates</a></li><li>Physics prize - <a href="https://www.nobelprize.org/uploads/2024/09/advanced-physicsprize2024.pdf">scientific background</a>, <a href="https://www.nobelprize.org/prizes/physics/2024/popular-information/">popular info</a></li><li>Chemistry prize - <a href="https://www.nobelprize.org/uploads/2024/10/advanced-chemistryprize2024.pdf">scientific background</a>, <a href="https://www.nobelprize.org/prizes/chemistry/2024/popular-information/">popular info</a></li><li>Reuters article - <a href="https://www.reuters.com/technology/artificial-intelligence/googles-nobel-prize-winners-stir-debate-over-ai-research-2024-10-10/">Google's Nobel prize winners stir debate over AI research</a></li><li>Wikipedia article - <a href="https://en.wikipedia.org/wiki/Nobel_disease">Nobel disease</a></li></ul><p><strong>Other Sources</strong></p><ul><li>Pivot.ai article - <a href="https://pivot-to-ai.com/2024/09/30/people-are-blatantly-stealing-my-work-ai-artist-complains/">People are ‘blatantly stealing my work,’ AI artist complains</a></li><li>Paper - <a href="https://arxiv.org/abs/2410.05229">GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models</a></li><li>Paper - <a href="https://link.springer.com/article/10.1007/s42113-024-00217-5">Reclaiming AI as a Theoretical Tool for Cognitive Science | Computational Brain &amp; Behavior</a></li></ul><p> </p>]]>
      </content:encoded>
      <pubDate>Wed, 16 Oct 2024 09:00:00 -0600</pubDate>
      <author>Jacob Haimes and Igor Krawczuk</author>
      <enclosure url="https://op3.dev/e/media.transistor.fm/c3c9a898/4e065854.mp3" length="43748756" type="audio/mpeg"/>
      <itunes:author>Jacob Haimes and Igor Krawczuk</itunes:author>
      <itunes:duration>3662</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>The Open Source AI Definition is out after years of drafting. Will it reestablish brand meaning for the “Open Source” term? Also, the 2024 Nobel Prizes in Physics and Chemistry are heavily tied to AI; we scrutinize not only this year's prizes, but also Nobel Prizes as a concept.</p><p> </p><ul><li>(00:00) - Intro</li>
<li>(00:30) - Hot off the press</li>
<li>(03:45) - Open Source AI background</li>
<li>(10:30) - Definitions and changes in RC1</li>
<li>(18:36) - “Business source”</li>
<li>(22:17) - Parallels with legislation</li>
<li>(26:22) - Impacts of the OSAID</li>
<li>(33:58) - 2024 Nobel Prize Context</li>
<li>(37:21) - Chemistry prize</li>
<li>(45:06) - Physics prize</li>
<li>(50:29) - Takeaways</li>
<li>(52:03) - What’s the real muck?</li>
<li>(01:00:27) - Outro</li>
</ul><br><p><strong>Links</strong></p><ul><li><a href="https://opensource.org/deepdive/drafts/the-open-source-ai-definition-1-0-rc1">Open Source AI Definition, Release Candidate 1</a></li><li><a href="https://opensource.org/blog/the-open-source-ai-definition-v-1-0-rc1-is-available-for-comments">OSAID RC1 announcement</a></li><li><a href="https://www.nobelprize.org/all-nobel-prizes-2024/">All Nobel Prizes 2024</a></li></ul><p><strong>More Reading on Open Source AI</strong></p><ul><li>Kairos.FM article - <a href="https://kairos.fm/posts/blogposts/open-source-ai-is-a-lie/">Open Source AI is a lie, but it doesn't have to be</a></li><li>The Register article - <a href="https://www.theregister.com/2024/09/14/opinion_column_osi/">The open source AI civil war approaches</a></li><li>MIT Technology Review article - <a href="https://www.technologyreview.com/2024/08/22/1097224/we-finally-have-a-definition-for-open-source-ai/">We finally have a definition for open-source AI</a></li></ul><p><strong>On Nobel Prizes</strong></p><ul><li>Paper - <a href="https://paulnovosad.com/pdf/nobel-prizes.pdf">Access to Opportunity in the Sciences: Evidence from the Nobel Laureates</a></li><li>Physics prize - <a href="https://www.nobelprize.org/uploads/2024/09/advanced-physicsprize2024.pdf">scientific background</a>, <a href="https://www.nobelprize.org/prizes/physics/2024/popular-information/">popular info</a></li><li>Chemistry prize - <a href="https://www.nobelprize.org/uploads/2024/10/advanced-chemistryprize2024.pdf">scientific background</a>, <a href="https://www.nobelprize.org/prizes/chemistry/2024/popular-information/">popular info</a></li><li>Reuters article - <a href="https://www.reuters.com/technology/artificial-intelligence/googles-nobel-prize-winners-stir-debate-over-ai-research-2024-10-10/">Google's Nobel prize winners stir debate over AI research</a></li><li>Wikipedia article - <a href="https://en.wikipedia.org/wiki/Nobel_disease">Nobel disease</a></li></ul><p><strong>Other Sources</strong></p><ul><li>Pivot.ai article - <a href="https://pivot-to-ai.com/2024/09/30/people-are-blatantly-stealing-my-work-ai-artist-complains/">People are ‘blatantly stealing my work,’ AI artist complains</a></li><li>Paper - <a href="https://arxiv.org/abs/2410.05229">GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models</a></li><li>Paper - <a href="https://link.springer.com/article/10.1007/s42113-024-00217-5">Reclaiming AI as a Theoretical Tool for Cognitive Science | Computational Brain &amp; Behavior</a></li></ul><p> </p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, Open Source, Nobel Prizes, OSI</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://krawczuk.eu" img="https://img.transistorcdn.com/28Y0jOuxZfj9Mxy7Ap0s5-4FRY6iBNdW0gmiQyl5SAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82M2I1/YzVjMWFmNTlhYzA2/YTgyZGI0YTM3MWY5/Zjc5Mi5qcGVn.jpg">Igor Krawczuk</podcast:person>
      <podcast:person role="Host" href="https://jacob-haimes.github.io" img="https://img.transistorcdn.com/aB1ho3T9pmqLJiyQvdThtfo_OIB680463w__i8MoaRA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMDJl/MzFhZDZkYzc4Yjc0/NTQwNzA4OGRlMGU3/NzU3Ni5wbmc.jpg">Jacob Haimes</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/c3c9a898/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>SB1047</title>
      <itunes:episode>2</itunes:episode>
      <podcast:episode>2</podcast:episode>
      <itunes:title>SB1047</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">49f48d4e-0743-465a-ba22-b8ee71750777</guid>
      <link>https://kairos.fm/muckraikers/e002</link>
      <description>
        <![CDATA[<p>Why is Mark Ruffalo talking about SB1047, and what is it anyway? Tune in for our thoughts on the <em>now-vetoed</em> California legislation that had Big Tech scared.</p><p></p><ul><li>(00:00) - Intro</li>
<li>(00:31) - Updates from a relatively slow week</li>
<li>(03:32) - Disclaimer: SB1047 vetoed during recording (still worth a listen)</li>
<li>(05:24) - What is SB1047</li>
<li>(12:30) - Definitions</li>
<li>(17:18) - Understanding the bill</li>
<li>(28:42) - What are the players saying about it? </li>
<li>(46:44) - Addressing critiques</li>
<li>(55:59) - Open Source</li>
<li>(01:02:36) - Takeaways</li>
<li>(01:15:40) - Clarification on impact to big tech</li>
<li>(01:18:51) - Outro</li>
</ul><br><strong>Links</strong><ul><li><a href="https://legiscan.com/CA/text/SB1047/2023">SB1047 legislation page</a></li><li><a href="https://digitaldemocracy.calmatters.org/bills/ca_202320240sb1047">SB1047 CalMatters page</a></li><li><a href="https://calmatters.org/economy/2024/09/california-artificial-intelligence-bill-veto/">Newsom vetoes SB1047</a></li><li><a href="https://newsletter.safe.ai/p/aisn-40-california-ai-legislation">CAIS newsletter on SB1047</a></li><li><a href="https://safesecureai.org/experts">Prominent AI nerd letter</a></li><li><a href="https://cdn.sanity.io/files/4zrzovbb/website/6a3b14a98a781a6b69b9a3c5b65da26a44ecddc6.pdf">Anthropic's letter</a></li><li><a href="https://thezvi.substack.com/p/guide-to-sb-1047">SB1047 ~explainer</a></li></ul><p><br></p><p><strong>Additional SB1047 Related Coverage</strong></p><ul><li><a href="https://techcrunch.com/2024/08/21/openais-opposition-to-californias-ai-law-makes-no-sense-says-state-senator/">Opposition to SB1047 'makes no sense'</a></li><li><a href="https://techcrunch.com/2024/09/17/governor-newsom-on-california-ai-bill-sb-1047-i-cant-solve-for-everything/">Newsom on SB1047</a></li><li><a href="https://a16z.com/sb-1047-what-you-need-to-know-with-anjney-midha/">Andreesen Horowitz on SB1047</a></li><li><a href="https://x.com/DanHendrycks/status/1816523907777888563">Classy move by Dan</a></li><li><a href="https://www.windowscentral.com/software-apps/ex-openai-researchers-claim-sam-altmans-public-support-for-ai-regulation-is-a-facade-when-actual-regulation-is-on-the-table-he-opposes-it">Ex-OpenAI employee says Altman doesn't want regulation</a></li></ul><p><br></p><p><strong>Other Sources</strong></p><ul><li><a href="https://x.com/shishirpatil_/status/1837205152132153803">o1 doesn't measure up in new benchmark paper</a></li><li><a href="https://www.cnbc.com/2024/09/27/openai-sees-5-billion-loss-this-year-on-3point7-billion-in-revenue.html?__source=threads%7Cmain">OpenAI losses and gains</a></li><li><a href="https://www.engadget.com/social-media/openais-x-account-was-hacked-to-promote-a-crypto-scam-130020696.html?guccounter=1">OpenAI crypto hack</a></li><li><a href="https://x.com/miramurati/status/1839025700009030027">"Murati out" -Mira Murati, probably</a></li><li><a href="https://www.bloomberg.com/news/articles/2024-09-24/openai-pitched-white-house-on-unprecedented-data-center-buildout">Altman pitching datacenters to White House</a></li><li><a href="https://www.tomshardware.com/tech-industry/tsmc-execs-allegedly-dismissed-openai-ceo-sam-altman-as-podcasting-bro">Sam Altman, 'podcast bro'</a></li><li><a href="https://arxiv.org/abs/2311.02537">Paper: Contract Design with Safety Inspections</a></li></ul><p><br></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Why is Mark Ruffalo talking about SB1047, and what is it anyway? Tune in for our thoughts on the <em>now-vetoed</em> California legislation that had Big Tech scared.</p><p></p><ul><li>(00:00) - Intro</li>
<li>(00:31) - Updates from a relatively slow week</li>
<li>(03:32) - Disclaimer: SB1047 vetoed during recording (still worth a listen)</li>
<li>(05:24) - What is SB1047</li>
<li>(12:30) - Definitions</li>
<li>(17:18) - Understanding the bill</li>
<li>(28:42) - What are the players saying about it? </li>
<li>(46:44) - Addressing critiques</li>
<li>(55:59) - Open Source</li>
<li>(01:02:36) - Takeaways</li>
<li>(01:15:40) - Clarification on impact to big tech</li>
<li>(01:18:51) - Outro</li>
</ul><br><strong>Links</strong><ul><li><a href="https://legiscan.com/CA/text/SB1047/2023">SB1047 legislation page</a></li><li><a href="https://digitaldemocracy.calmatters.org/bills/ca_202320240sb1047">SB1047 CalMatters page</a></li><li><a href="https://calmatters.org/economy/2024/09/california-artificial-intelligence-bill-veto/">Newsom vetoes SB1047</a></li><li><a href="https://newsletter.safe.ai/p/aisn-40-california-ai-legislation">CAIS newsletter on SB1047</a></li><li><a href="https://safesecureai.org/experts">Prominent AI nerd letter</a></li><li><a href="https://cdn.sanity.io/files/4zrzovbb/website/6a3b14a98a781a6b69b9a3c5b65da26a44ecddc6.pdf">Anthropic's letter</a></li><li><a href="https://thezvi.substack.com/p/guide-to-sb-1047">SB1047 ~explainer</a></li></ul><p><br></p><p><strong>Additional SB1047 Related Coverage</strong></p><ul><li><a href="https://techcrunch.com/2024/08/21/openais-opposition-to-californias-ai-law-makes-no-sense-says-state-senator/">Opposition to SB1047 'makes no sense'</a></li><li><a href="https://techcrunch.com/2024/09/17/governor-newsom-on-california-ai-bill-sb-1047-i-cant-solve-for-everything/">Newsom on SB1047</a></li><li><a href="https://a16z.com/sb-1047-what-you-need-to-know-with-anjney-midha/">Andreesen Horowitz on SB1047</a></li><li><a href="https://x.com/DanHendrycks/status/1816523907777888563">Classy move by Dan</a></li><li><a href="https://www.windowscentral.com/software-apps/ex-openai-researchers-claim-sam-altmans-public-support-for-ai-regulation-is-a-facade-when-actual-regulation-is-on-the-table-he-opposes-it">Ex-OpenAI employee says Altman doesn't want regulation</a></li></ul><p><br></p><p><strong>Other Sources</strong></p><ul><li><a href="https://x.com/shishirpatil_/status/1837205152132153803">o1 doesn't measure up in new benchmark paper</a></li><li><a href="https://www.cnbc.com/2024/09/27/openai-sees-5-billion-loss-this-year-on-3point7-billion-in-revenue.html?__source=threads%7Cmain">OpenAI losses and gains</a></li><li><a href="https://www.engadget.com/social-media/openais-x-account-was-hacked-to-promote-a-crypto-scam-130020696.html?guccounter=1">OpenAI crypto hack</a></li><li><a href="https://x.com/miramurati/status/1839025700009030027">"Murati out" -Mira Murati, probably</a></li><li><a href="https://www.bloomberg.com/news/articles/2024-09-24/openai-pitched-white-house-on-unprecedented-data-center-buildout">Altman pitching datacenters to White House</a></li><li><a href="https://www.tomshardware.com/tech-industry/tsmc-execs-allegedly-dismissed-openai-ceo-sam-altman-as-podcasting-bro">Sam Altman, 'podcast bro'</a></li><li><a href="https://arxiv.org/abs/2311.02537">Paper: Contract Design with Safety Inspections</a></li></ul><p><br></p>]]>
      </content:encoded>
      <pubDate>Mon, 30 Sep 2024 09:00:00 -0600</pubDate>
      <author>Jacob Haimes and Igor Krawczuk</author>
      <enclosure url="https://op3.dev/e/media.transistor.fm/21d6cbf9/65ae2ac7.mp3" length="58272280" type="audio/mpeg"/>
      <itunes:author>Jacob Haimes and Igor Krawczuk</itunes:author>
      <itunes:duration>4760</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Why is Mark Ruffalo talking about SB1047, and what is it anyway? Tune in for our thoughts on the <em>now-vetoed</em> California legislation that had Big Tech scared.</p><p></p><ul><li>(00:00) - Intro</li>
<li>(00:31) - Updates from a relatively slow week</li>
<li>(03:32) - Disclaimer: SB1047 vetoed during recording (still worth a listen)</li>
<li>(05:24) - What is SB1047</li>
<li>(12:30) - Definitions</li>
<li>(17:18) - Understanding the bill</li>
<li>(28:42) - What are the players saying about it? </li>
<li>(46:44) - Addressing critiques</li>
<li>(55:59) - Open Source</li>
<li>(01:02:36) - Takeaways</li>
<li>(01:15:40) - Clarification on impact to big tech</li>
<li>(01:18:51) - Outro</li>
</ul><br><strong>Links</strong><ul><li><a href="https://legiscan.com/CA/text/SB1047/2023">SB1047 legislation page</a></li><li><a href="https://digitaldemocracy.calmatters.org/bills/ca_202320240sb1047">SB1047 CalMatters page</a></li><li><a href="https://calmatters.org/economy/2024/09/california-artificial-intelligence-bill-veto/">Newsom vetoes SB1047</a></li><li><a href="https://newsletter.safe.ai/p/aisn-40-california-ai-legislation">CAIS newsletter on SB1047</a></li><li><a href="https://safesecureai.org/experts">Prominent AI nerd letter</a></li><li><a href="https://cdn.sanity.io/files/4zrzovbb/website/6a3b14a98a781a6b69b9a3c5b65da26a44ecddc6.pdf">Anthropic's letter</a></li><li><a href="https://thezvi.substack.com/p/guide-to-sb-1047">SB1047 ~explainer</a></li></ul><p><br></p><p><strong>Additional SB1047 Related Coverage</strong></p><ul><li><a href="https://techcrunch.com/2024/08/21/openais-opposition-to-californias-ai-law-makes-no-sense-says-state-senator/">Opposition to SB1047 'makes no sense'</a></li><li><a href="https://techcrunch.com/2024/09/17/governor-newsom-on-california-ai-bill-sb-1047-i-cant-solve-for-everything/">Newsom on SB1047</a></li><li><a href="https://a16z.com/sb-1047-what-you-need-to-know-with-anjney-midha/">Andreesen Horowitz on SB1047</a></li><li><a href="https://x.com/DanHendrycks/status/1816523907777888563">Classy move by Dan</a></li><li><a href="https://www.windowscentral.com/software-apps/ex-openai-researchers-claim-sam-altmans-public-support-for-ai-regulation-is-a-facade-when-actual-regulation-is-on-the-table-he-opposes-it">Ex-OpenAI employee says Altman doesn't want regulation</a></li></ul><p><br></p><p><strong>Other Sources</strong></p><ul><li><a href="https://x.com/shishirpatil_/status/1837205152132153803">o1 doesn't measure up in new benchmark paper</a></li><li><a href="https://www.cnbc.com/2024/09/27/openai-sees-5-billion-loss-this-year-on-3point7-billion-in-revenue.html?__source=threads%7Cmain">OpenAI losses and gains</a></li><li><a href="https://www.engadget.com/social-media/openais-x-account-was-hacked-to-promote-a-crypto-scam-130020696.html?guccounter=1">OpenAI crypto hack</a></li><li><a href="https://x.com/miramurati/status/1839025700009030027">"Murati out" -Mira Murati, probably</a></li><li><a href="https://www.bloomberg.com/news/articles/2024-09-24/openai-pitched-white-house-on-unprecedented-data-center-buildout">Altman pitching datacenters to White House</a></li><li><a href="https://www.tomshardware.com/tech-industry/tsmc-execs-allegedly-dismissed-openai-ceo-sam-altman-as-podcasting-bro">Sam Altman, 'podcast bro'</a></li><li><a href="https://arxiv.org/abs/2311.02537">Paper: Contract Design with Safety Inspections</a></li></ul><p><br></p>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, California, SB1047, legislation</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:person role="Host" href="https://krawczuk.eu" img="https://img.transistorcdn.com/28Y0jOuxZfj9Mxy7Ap0s5-4FRY6iBNdW0gmiQyl5SAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82M2I1/YzVjMWFmNTlhYzA2/YTgyZGI0YTM3MWY5/Zjc5Mi5qcGVn.jpg">Igor Krawczuk</podcast:person>
      <podcast:person role="Host" href="https://jacob-haimes.github.io" img="https://img.transistorcdn.com/aB1ho3T9pmqLJiyQvdThtfo_OIB680463w__i8MoaRA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMDJl/MzFhZDZkYzc4Yjc0/NTQwNzA4OGRlMGU3/NzU3Ni5wbmc.jpg">Jacob Haimes</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/21d6cbf9/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>OpenAI's o1, aka. Strawberry</title>
      <itunes:episode>1</itunes:episode>
      <podcast:episode>1</podcast:episode>
      <itunes:title>OpenAI's o1, aka. Strawberry</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">1a8344d1-3292-4ece-b80c-135ceb48eeba</guid>
      <link>https://kairos.fm/muckraikers/e001</link>
      <description>
        <![CDATA[<p>OpenAI's new model is out, and we are going to have to rake through a lot of muck to get the value out of this one!</p><p>⚠ Opt out of LinkedIn's GenAI scraping ➡️ <a href="https://lnkd.in/epziUeTi">https://lnkd.in/epziUeTi</a> </p><p></p><ul><li>(00:00) - Intro</li>
<li>(00:25) - Other recent news</li>
<li>(02:57) - Hot off the press</li>
<li>(03:58) - Why might someone care?</li>
<li>(04:52) - What is it?</li>
<li>(06:49) - How is it being sold?</li>
<li>(10:45) - How do they explain it, technically?</li>
<li>(27:09) - Reflection AI Drama</li>
<li>(40:19) - Why do we care?</li>
<li>(46:39) - Scraping away the muck</li>
</ul><br><em>Note: at around 32 minutes, Igor says the incorrect Llama model version for the story he is telling. Jacob dubbed over those mistakes with the correct versioning.</em><p><strong>Links relating to o1</strong></p><ul><li><a href="https://openai.com/index/learning-to-reason-with-llms/">OpenAI blogpost</a></li><li><a href="https://openai.com/index/openai-o1-system-card/">System card webpage</a></li><li><a href="https://github.com/hijkzzz/Awesome-LLM-Strawberry">GitHub collection of o1 related media</a></li><li><a href="https://x.com/btibor91/status/1834686946846597281">AMA Twitter thread</a></li><li><a href="https://x.com/fchollet/status/1835452149851148727">Francois Chollet Tweet on reasoning and o1</a></li><li><a href="https://arxiv.org/abs/2409.12917">The academic paper doing something very similar to o1</a></li></ul><p><strong>Other stuff we mention</strong></p><ul><li><a href="https://www.reuters.com/technology/artificial-intelligence/openais-stunning-150-billion-valuation-hinges-upending-corporate-structure-2024-09-14/">OpenAI's huge valuation hinges on upending corporate structure</a></li><li><a href="https://techcrunch.com/video/techcrunch-minute-meta-acknowledges-its-scraping-all-public-posts-for-ai-training/?guccounter=1">Meta acknowledges it’s scraping all public posts for AI training</a></li><li><a href="https://www.whitehouse.gov/ostp/news-updates/2024/09/12/white-house-announces-new-private-sector-voluntary-commitments-to-combat-image-based-sexual-abuse/">White House announces new private sector voluntary commitments to combat image-based sexual abuse</a></li><li><a href="https://www.benzinga.com/news/24/09/40846309/sam-altman-wants-chatgpt-users-to-have-gratitude-for-magic-intelligence-in-the-sky-after-they-ask-ab">Sam Altman wants you to be grateful</a></li><li><a href="https://techcrunch.com/2024/09/11/mark-zuckerberg-says-hes-done-apologizing/">The Zuck is done apologizing</a></li><li><a href="https://static1.squarespace.com/static/64edf8e7f2b10d716b5ba0e1/t/66e32323b3f779378d9e30c9/1726161701034/Mapping+technical+safety+research.pdf">IAPS report on technical safety research at AI companies</a></li><li><a href="https://www.anyscale.com/blog/llama-2-is-about-as-factually-accurate-as-gpt-4-for-summaries-and-is-30x-cheaper">Llama2 70B is "about as good" as GPT-4 at summarization tasks</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>OpenAI's new model is out, and we are going to have to rake through a lot of muck to get the value out of this one!</p><p>⚠ Opt out of LinkedIn's GenAI scraping ➡️ <a href="https://lnkd.in/epziUeTi">https://lnkd.in/epziUeTi</a> </p><p></p><ul><li>(00:00) - Intro</li>
<li>(00:25) - Other recent news</li>
<li>(02:57) - Hot off the press</li>
<li>(03:58) - Why might someone care?</li>
<li>(04:52) - What is it?</li>
<li>(06:49) - How is it being sold?</li>
<li>(10:45) - How do they explain it, technically?</li>
<li>(27:09) - Reflection AI Drama</li>
<li>(40:19) - Why do we care?</li>
<li>(46:39) - Scraping away the muck</li>
</ul><br><em>Note: at around 32 minutes, Igor says the incorrect Llama model version for the story he is telling. Jacob dubbed over those mistakes with the correct versioning.</em><p><strong>Links relating to o1</strong></p><ul><li><a href="https://openai.com/index/learning-to-reason-with-llms/">OpenAI blogpost</a></li><li><a href="https://openai.com/index/openai-o1-system-card/">System card webpage</a></li><li><a href="https://github.com/hijkzzz/Awesome-LLM-Strawberry">GitHub collection of o1 related media</a></li><li><a href="https://x.com/btibor91/status/1834686946846597281">AMA Twitter thread</a></li><li><a href="https://x.com/fchollet/status/1835452149851148727">Francois Chollet Tweet on reasoning and o1</a></li><li><a href="https://arxiv.org/abs/2409.12917">The academic paper doing something very similar to o1</a></li></ul><p><strong>Other stuff we mention</strong></p><ul><li><a href="https://www.reuters.com/technology/artificial-intelligence/openais-stunning-150-billion-valuation-hinges-upending-corporate-structure-2024-09-14/">OpenAI's huge valuation hinges on upending corporate structure</a></li><li><a href="https://techcrunch.com/video/techcrunch-minute-meta-acknowledges-its-scraping-all-public-posts-for-ai-training/?guccounter=1">Meta acknowledges it’s scraping all public posts for AI training</a></li><li><a href="https://www.whitehouse.gov/ostp/news-updates/2024/09/12/white-house-announces-new-private-sector-voluntary-commitments-to-combat-image-based-sexual-abuse/">White House announces new private sector voluntary commitments to combat image-based sexual abuse</a></li><li><a href="https://www.benzinga.com/news/24/09/40846309/sam-altman-wants-chatgpt-users-to-have-gratitude-for-magic-intelligence-in-the-sky-after-they-ask-ab">Sam Altman wants you to be grateful</a></li><li><a href="https://techcrunch.com/2024/09/11/mark-zuckerberg-says-hes-done-apologizing/">The Zuck is done apologizing</a></li><li><a href="https://static1.squarespace.com/static/64edf8e7f2b10d716b5ba0e1/t/66e32323b3f779378d9e30c9/1726161701034/Mapping+technical+safety+research.pdf">IAPS report on technical safety research at AI companies</a></li><li><a href="https://www.anyscale.com/blog/llama-2-is-about-as-factually-accurate-as-gpt-4-for-summaries-and-is-30x-cheaper">Llama2 70B is "about as good" as GPT-4 at summarization tasks</a></li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 23 Sep 2024 09:00:00 -0600</pubDate>
      <author>Jacob Haimes and Igor Krawczuk</author>
      <enclosure url="https://op3.dev/e/media.transistor.fm/d60e178a/e5f532b7.mp3" length="36533873" type="audio/mpeg"/>
      <itunes:author>Jacob Haimes and Igor Krawczuk</itunes:author>
      <itunes:duration>3003</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>OpenAI's new model is out, and we are going to have to rake through a lot of muck to get the value out of this one!</p><p>⚠ Opt out of LinkedIn's GenAI scraping ➡️ <a href="https://lnkd.in/epziUeTi">https://lnkd.in/epziUeTi</a> </p><p></p><ul><li>(00:00) - Intro</li>
<li>(00:25) - Other recent news</li>
<li>(02:57) - Hot off the press</li>
<li>(03:58) - Why might someone care?</li>
<li>(04:52) - What is it?</li>
<li>(06:49) - How is it being sold?</li>
<li>(10:45) - How do they explain it, technically?</li>
<li>(27:09) - Reflection AI Drama</li>
<li>(40:19) - Why do we care?</li>
<li>(46:39) - Scraping away the muck</li>
</ul><br><em>Note: at around 32 minutes, Igor says the incorrect Llama model version for the story he is telling. Jacob dubbed over those mistakes with the correct versioning.</em><p><strong>Links relating to o1</strong></p><ul><li><a href="https://openai.com/index/learning-to-reason-with-llms/">OpenAI blogpost</a></li><li><a href="https://openai.com/index/openai-o1-system-card/">System card webpage</a></li><li><a href="https://github.com/hijkzzz/Awesome-LLM-Strawberry">GitHub collection of o1 related media</a></li><li><a href="https://x.com/btibor91/status/1834686946846597281">AMA Twitter thread</a></li><li><a href="https://x.com/fchollet/status/1835452149851148727">Francois Chollet Tweet on reasoning and o1</a></li><li><a href="https://arxiv.org/abs/2409.12917">The academic paper doing something very similar to o1</a></li></ul><p><strong>Other stuff we mention</strong></p><ul><li><a href="https://www.reuters.com/technology/artificial-intelligence/openais-stunning-150-billion-valuation-hinges-upending-corporate-structure-2024-09-14/">OpenAI's huge valuation hinges on upending corporate structure</a></li><li><a href="https://techcrunch.com/video/techcrunch-minute-meta-acknowledges-its-scraping-all-public-posts-for-ai-training/?guccounter=1">Meta acknowledges it’s scraping all public posts for AI training</a></li><li><a href="https://www.whitehouse.gov/ostp/news-updates/2024/09/12/white-house-announces-new-private-sector-voluntary-commitments-to-combat-image-based-sexual-abuse/">White House announces new private sector voluntary commitments to combat image-based sexual abuse</a></li><li><a href="https://www.benzinga.com/news/24/09/40846309/sam-altman-wants-chatgpt-users-to-have-gratitude-for-magic-intelligence-in-the-sky-after-they-ask-ab">Sam Altman wants you to be grateful</a></li><li><a href="https://techcrunch.com/2024/09/11/mark-zuckerberg-says-hes-done-apologizing/">The Zuck is done apologizing</a></li><li><a href="https://static1.squarespace.com/static/64edf8e7f2b10d716b5ba0e1/t/66e32323b3f779378d9e30c9/1726161701034/Mapping+technical+safety+research.pdf">IAPS report on technical safety research at AI companies</a></li><li><a href="https://www.anyscale.com/blog/llama-2-is-about-as-factually-accurate-as-gpt-4-for-summaries-and-is-30x-cheaper">Llama2 70B is "about as good" as GPT-4 at summarization tasks</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI, artificial intelligence, OpenAI, o1</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:person role="Host" href="https://krawczuk.eu" img="https://img.transistorcdn.com/28Y0jOuxZfj9Mxy7Ap0s5-4FRY6iBNdW0gmiQyl5SAQ/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82M2I1/YzVjMWFmNTlhYzA2/YTgyZGI0YTM3MWY5/Zjc5Mi5qcGVn.jpg">Igor Krawczuk</podcast:person>
      <podcast:person role="Host" href="https://jacob-haimes.github.io" img="https://img.transistorcdn.com/aB1ho3T9pmqLJiyQvdThtfo_OIB680463w__i8MoaRA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMDJl/MzFhZDZkYzc4Yjc0/NTQwNzA4OGRlMGU3/NzU3Ni5wbmc.jpg">Jacob Haimes</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/d60e178a/chapters.json" type="application/json+chapters"/>
    </item>
  </channel>
</rss>
